
Usage

Three steps:

Step 1: run the SUT

You can use any SUT that can communicate over TCP/IP network sockets.
To demonstrate Tomte we use the Sut tool. With the Sut tool you can easily generate an implementation of an SUT from an EFSM model specified in Uppaal.

The Tomte tool comes with some predefined models in the tomte-0.3/models/ subdirectory.
If you want to learn your own model, please study the Making SUT section.

For example, let us use the BiometricPassport model as the teacher model in our learning experiment and run it on the fly as an SUT using the command:

    > sut_run tomte-0.3/models/BiometricPassport/model.xml --port 9999

    SUT simulation socketserver
    -> listening at port : 9999
    -> verbose mode : OFF
    -> the server has a timeout of 30 seconds
    note: to prevent unnecessary servers to keep on running

Now the SUT is running and waiting for anyone to communicate with it!
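
If you want a quick sanity check that the SUT is really listening, independently of Tomte, you can simply open a TCP connection to the port. Below is a minimal Python sketch, assuming the SUT runs on localhost port 9999 as above. It only opens and closes a socket and does not speak the Sut tool's protocol; depending on how the Sut tool handles connections, you may want to restart the SUT afterwards.

    import socket

    # Minimal check that something accepts connections on the SUT port.
    # This does not speak the Sut tool's protocol; it only connects and disconnects.
    HOST, PORT = "localhost", 9999   # the port chosen with --port above

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        print("SUT is listening on", s.getpeername())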

For more details about the Sut tool see the Sut tool usage page.

Step 2: learn the SUT with Tomte

Run the 'tomte_learn' command with a configuration file as its argument; this file specifies all kinds of learning parameters.
When run without arguments it shows its usage message:

    > tomte_learn
    usage: tomte_learn [-h] [--seed SEED] [--output-dir OUTPUT_DIR] [--port PORT] configfile

When run with the -h command-line option it shows a more detailed help message:

    > tomte_learn -h
    usage: tomte_learn [-h] [--seed SEED] [--output-dir OUTPUT_DIR] [--port PORT] configfile

    positional arguments:
      configfile            learner configuration file

    optional arguments:
      -h, --help            show this help message and exit
      --seed SEED           Seed to use for random number generator.
      --output-dir OUTPUT_DIR
                            Directory to store output.
      --port PORT           tcp port to communicate with SUT (default:7892)

Thus, before we can run Tomte, we must first create a configuration file, which specifies some parameters for tuning the learning process. An example config.yaml looks like this:

    learning:
        sutinfoFile: "sutinfo.yaml"
        seed: 1299777356020
        minValue: 256
        maxValue: 511
    testing:
        minTraceLength: 100
        maxTraceLength: 150
        maxNumTraces: 1000

The learning section specifies the parameters that are needed for generating a hypothesis during learning.
Explanation of the learning parameters:

  • sutinfoFile: for learning we need to know the alphabet of the SUT, which is specified in this file. For convenience, the name of the SUT is also specified here.
  • seed: the seed for the pseudo-random numbers used during learning.
  • minValue and maxValue: specify the range from which fresh values are chosen (see the sketch after this list).
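
As an illustration of what "fresh" means here, the following is a purely illustrative Python sketch (not Tomte's actual implementation), assuming that a fresh value is simply a value from the [minValue, maxValue] range that has not occurred in the interaction so far:

    import random

    # Illustrative sketch (not Tomte's actual code) of picking a fresh value:
    # a value in [minValue, maxValue] that has not been used yet.
    MIN_VALUE, MAX_VALUE = 256, 511      # minValue/maxValue from config.yaml
    rng = random.Random(1299777356020)   # seed from config.yaml

    def fresh_value(used_values):
        """Return a value from the configured range that is not in used_values."""
        candidates = [v for v in range(MIN_VALUE, MAX_VALUE + 1) if v not in used_values]
        return rng.choice(candidates)

    print(fresh_value({257, 258, 259}))  # values already seen, e.g. the constants from sutinfo.yaml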

The testing section specifies the parameters that are needed for testing a hypothesis for correctness against the SUT. In Tomte we use random testing, and the parameters here specify the kind and number of test traces used to test the hypothesis.
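
To make the testing parameters concrete, here is another purely illustrative Python sketch (again not Tomte's actual implementation) of what they bound, assuming each random test trace gets a length between minTraceLength and maxTraceLength and at most maxNumTraces traces are run against the SUT:

    import random

    # Illustrative only: how the testing parameters constrain random testing.
    MIN_TRACE_LENGTH = 100   # minTraceLength from config.yaml
    MAX_TRACE_LENGTH = 150   # maxTraceLength from config.yaml
    MAX_NUM_TRACES   = 1000  # maxNumTraces from config.yaml

    # The input alphabet of the BiometricPassport SUT (see sutinfo.yaml below).
    INPUTS = ["IAA", "ICA", "ICompleteBAC", "IFailBAC", "IFailEAC",
              "IGetChallenge", "IReadFile", "IReset", "ITA"]

    rng = random.Random(1299777356020)   # seed from config.yaml

    def random_test_traces():
        """Yield at most MAX_NUM_TRACES random input traces within the length bounds."""
        for _ in range(MAX_NUM_TRACES):
            length = rng.randint(MIN_TRACE_LENGTH, MAX_TRACE_LENGTH)
            yield [rng.choice(INPUTS) for _ in range(length)]

    for trace in random_test_traces():
        pass  # each trace would be run on both the hypothesis and the SUT,
              # stopping as soon as their outputs differ (a counterexample)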

The SUT info file (sutinfo.yaml) can also be generated easily from the teacher model using the sut_uppaal2sutinfo command. For the BiometricPassport teacher model in our learning experiment it is generated as:

    > cd tomte-0.3/models/BiometricPassport/
    > sut_uppaal2sutinfo model.xml sutinfo.yaml

The contents of the sutinfo.yaml file are:

    constants:
    - 257
    - 258
    - 259
    inputInterfaces:
     IAA: []
     ICA: []
     ICompleteBAC: []
     IFailBAC: []
     IFailEAC: []
     IGetChallenge: []
     IReadFile:
     - Integer
     IReset: []
     ITA: []
    name: BiometricPassport
    outputInterfaces:
     ONOK: []
     OOK: []

The sutinfo.yaml file describes the information about the SUT that you are allowed to know (a sketch for reading this file programmatically follows the list):

  • input interfaces of SUT
  • output interfaces of SUT
  • name of SUT
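
As a quick check of this structure, the file can be read with any YAML parser. A small Python sketch, assuming PyYAML is installed (pip install pyyaml), that prints the name, the constants, and the input and output alphabet of the SUT:

    import yaml  # PyYAML, assumed to be installed

    # Read sutinfo.yaml and print the name, constants and the interfaces together
    # with the parameter types of each interface
    # (IReadFile takes one Integer, the others take none).
    with open("sutinfo.yaml") as f:
        sutinfo = yaml.safe_load(f)

    print("SUT name :", sutinfo["name"])
    print("constants:", sutinfo["constants"])
    for name, params in sutinfo["inputInterfaces"].items():
        print("input ", name, "parameters:", params or "none")
    for name, params in sutinfo["outputInterfaces"].items():
        print("output", name, "parameters:", params or "none")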

For more details please study the Making SUT section.

With our config.yaml and sutinfo.yaml files ready we can start learning the SUT.
An example run:

    > tomte_learn config.yaml --port 9999
    ...
    ...

That's it. You have now learned the model of the SUT!
The results of learning are put in the default output/ directory:

  • the learned model
  • statistics of the learning process

Just try it yourself! It is real fun!

Notes:

  • Results are saved by default in the folder 'output'
  • With the --port command-line option you can specify a different TCP/IP port.
    By default port 7892 is used.
  • All parameters for learning are configured in the config file; however, some configuration parameters can also be given on the command line, which then overrules the setting in the configuration file. An example is the --port option we used.
  • If no SUT is listening on the other side of the network socket, Tomte will quit with an error.

Step 3: check equivalence of the learned model with the teacher model

For equivalence checking, the Sut tool and the CADP toolset are needed.
The only requirement for the Sut tool to work with the CADP toolset is that its command-line tools are available through the system PATH.
Setup:

    CADP=/usr/local/cadp/ ; export CADP
    CADP_ARCH=`"$CADP"/com/arch`
    PATH="$PATH:$CADP/bin.$CADP_ARCH:$CADP/com" ; export PATH
    export CADP_LANGUAGE=english

To compare two models we can then simply use the sut_compare command, which internally uses the CADP toolset.

Thus we can do equivalence checking as follows:

    > sut_compare model1.xml model2.xml 0 1
    ...
    TRUE or FALSE with a trace

The equivalence check prints TRUE in case of equivalence; otherwise it prints FALSE followed by a trace showing the difference between the models.

The parameters 0 and 1 specify the range used for bisimulation checking. This range is needed when the learned Extended Finite State Machine (EFSM) is transformed into a LOTOS specification by means of the CADP toolset. We have to limit the range of data parameters in order to check the equivalence in a limited amount of time.

Notes:

  • Different equivalences can also be checked with the CADP toolset. The CADP equivalence type 'strong' is used by default.