
Usage

Prerequisites:

Before proceeding with this tutorial, make sure you have installed both Tomte and the Sut tool.
Both tools are compatible with Windows, Linux and macOS.
Check the corresponding installation pages for Tomte and the Sut tool respectively.

Three steps:

Step 1: run the SUT

You can use any SUT that can communicate over TCP/IP network sockets.
To demonstrate Tomte we use the Sut tool, with which you can easily generate an implementation of an SUT from an EFSM model specified in Uppaal.

The Tomte tool comes with some predefined models in the tomte-0.41/models/ subdirectory.
If you want to learn your own model, please study the Making SUT section.

For example, let us use the BiometricPassport Uppaal model as the teacher model in our learning experiment and run it on the fly as an SUT using the command:

    > sut_run tomte-0.41/models/BiometricPassport/model.xml --port 9999

    SUT simulation socketserver
    -> listening at port : 9999
    -> verbose mode : OFF
    -> the server has a timeout of 30 seconds
    note: to prevent unnecessary servers to keep on running

Now the SUT is running and waiting for anyone to communicate with it!
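If you want to verify from a script that the SUT is up before starting a learning run, a plain TCP connection check suffices. The sketch below is not part of Tomte or the Sut tool; it only tests whether something accepts connections on the given port, without sending any protocol messages:

```python
import socket

def sut_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP server (e.g. the Sut tool) accepts connections."""
    try:
        # Opening and immediately closing a connection is enough to see
        # whether the socket server is up; nothing is written to the socket.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

While the sut_run command above is active, `sut_is_listening("localhost", 9999)` should return True.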

For more details about the Sut tool see the Sut tool usage page.

Step 2: learn the SUT with Tomte

Run the 'tomte_learn' command with a configuration file as its argument; this file specifies the learning parameters to use.
For a quick run of Tomte, you can opt to learn one of the models in the models directory.
Assuming we want Tomte to learn the BiometricPassport system, as run in the previous section,
we use the command:

    > tomte_learn tomte-0.41/models/BiometricPassport/config.yaml --port 9999
    ...

Similarly, we can test Tomte on any of the models provided in the models directory. Just run the Sut tool
on the model's model.xml specification, then launch Tomte using the model's config.yaml file.

When run without arguments, 'tomte_learn' shows its usage message:

    > tomte_learn
    tomte_learn: error: too few arguments
    usage: tomte_learn [-h] [--seed SEED] [--max-memory MAX_MEMORY]
                       [--output-dir OUTPUT_DIR] [--port PORT] [-e]
                       [--config-option CONFIG_OPTION]
                       configfile
    Try 'tomte_learn --help' for more information.

When run with the -h command-line option, it shows a more detailed help message:

    > tomte_learn -h
    usage: tomte_learn [-h] [--seed SEED] [--max-memory MAX_MEMORY]
                       [--output-dir OUTPUT_DIR] [--port PORT] [-e]
                       [--config-option CONFIG_OPTION]
                       configfile

    positional arguments:
      configfile            learner configuration file

    optional arguments:
      -h, --help            show this help message and exit
      --seed SEED           Seed to use for random number generator.
      --max-memory MAX_MEMORY
                            Maximum memory to use by virtual machine in GB
                            (default:2GB)
      --output-dir OUTPUT_DIR
                            Directory to store output.
      --port PORT           tcp port to listen on for incoming connections
      --config-option CONFIG_OPTION
                            Overrule a configuration option in config.yaml file.

Before we can run Tomte, we must first create a configuration file, which specifies
the learning parameters as well as the system to be learned. There are many configurable
parameters; we will only touch on the most important ones. Below is a simple configuration file:

    learning:
        sutinfoFile: "sutinfo.yaml"
        seed: 1299777356020

    testing:
        minValue: 256
        maxValue: 511
        minTraceLength: 100
        maxTraceLength: 100
        maxNumTraces: 1000
        resetProbability: 0.05

The learning section specifies the parameters that are needed for generating a hypothesis during learning.
Explanation of the learning parameters:

  • sutinfoFile: for learning we need to know the alphabet of the SUT, which is specified in this file. For convenience the name of the SUT is also specified here.
  • seed: the seed for the pseudo-random number generator used during learning. Consequently, experiments run with the same seed have the same outcome.

The testing section specifies the parameters needed for testing a hypothesis against the SUT. Tomte uses a random walk testing approach,
and the parameters here specify the kind and number of test traces used to test the hypothesis.

  • minValue and maxValue: specify the range from which input values are chosen during testing.
  • minTraceLength and maxTraceLength: specify the length of the test traces. Currently only maxTraceLength is used during testing.
  • resetProbability: a reset probability of 0.05 means that after each randomly drawn test input there is a 5 percent chance that the test trace ends early. This parameter causes variation in the length of the test traces.
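To get a feel for how resetProbability and maxTraceLength interact: the trace length follows a geometric distribution truncated at maxTraceLength. A small back-of-the-envelope sketch (our own illustration, not part of Tomte) for its expected value:

```python
def expected_trace_length(reset_probability: float, max_trace_length: int) -> float:
    # The probability that a trace reaches at least k inputs is
    # (1 - p) ** (k - 1), truncated at max_trace_length, so the expected
    # length is the partial geometric sum: (1 - (1 - p) ** max) / p.
    p = reset_probability
    return (1.0 - (1.0 - p) ** max_trace_length) / p
```

With the values above (resetProbability = 0.05, maxTraceLength = 100) the expected trace length is roughly 19.9 inputs, so most test traces end well before the maximum.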

The sutinfo file can also be generated from the teacher model using the sut_uppaal2sutinfo command. For the BiometricPassport model as teacher in our learning experiment, the sutinfo file (sutinfo.yaml) can be generated as follows:

    > cd tomte-0.41/models/BiometricPassport/
    > sut_uppaal2sutinfo model.xml sutinfo.yaml

The contents of the sutinfo.yaml file are:

    constants:
    - 257
    - 258
    - 259
    inputInterfaces:
     IAA: []
     ICA: []
     ICompleteBAC: []
     IFailBAC: []
     IFailEAC: []
     IGetChallenge: []
     IReadFile:
     - Integer
     IReset: []
     ITA: []
    name: BiometricPassport
    outputInterfaces:
     ONOK: []
     OOK: []

The sutinfo.yaml file describes the information about the SUT that you are allowed to know:

  • the input interfaces of the SUT
  • the output interfaces of the SUT
  • the name of the SUT

For more details please study the Making SUT section.
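The input interfaces define the alphabet the learner works with: each interface name together with the number of its parameters. The sketch below derives such an alphabet from the sutinfo data; input_alphabet is a hypothetical helper of our own, not part of the Sut tool, and the file contents are inlined as a Python dict so no YAML parser is needed (with PyYAML you would obtain the same dict via yaml.safe_load):

```python
# The sutinfo.yaml contents from above, written as a Python dict.
sutinfo = {
    "name": "BiometricPassport",
    "constants": [257, 258, 259],
    "inputInterfaces": {
        "IAA": [], "ICA": [], "ICompleteBAC": [], "IFailBAC": [],
        "IFailEAC": [], "IGetChallenge": [],
        "IReadFile": ["Integer"],
        "IReset": [], "ITA": [],
    },
    "outputInterfaces": {"ONOK": [], "OOK": []},
}

def input_alphabet(info: dict) -> list[str]:
    """List each input symbol with its parameter arity, e.g. 'IReadFile/1'."""
    return sorted(f"{name}/{len(params)}"
                  for name, params in info["inputInterfaces"].items())
```

For BiometricPassport this yields nine input symbols, of which only IReadFile carries an Integer parameter.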

With our config.yaml and sutinfo.yaml files ready we can start learning the SUT.
An example run:

    > tomte_learn config.yaml --port 9999
    ...
    ...

That's it. You have now learned the model of the SUT!
The results of learning are put in the default output/ directory:

  • the learned model
  • statistics of the learning process

Just try it yourself!!

Notes:

  • Results are saved by default in the folder 'output'.
  • When you have the Python module pygraphviz installed, you can generate a PDF file of the learned concrete Uppaal model with the following command:
   > sut_uppaal2layoutformat learnedConcreteModel.xml learnedConcreteModel.pdf
  • With the --port command-line option you can specify a different TCP/IP port.
    By default port 7892 is used.
  • All parameters for learning are configured in the configuration file; however, some of them can also be given as command-line parameters, which then overrule the settings in the configuration file. An example is the --port option we used.
  • If no SUT is listening on the other side of the network socket, Tomte will quit with an error.

Step 3: check equivalence of the learned model with the teacher model

For equivalence checking, the Sut tool and the CADP toolset are needed.
The only requirement for the Sut tool to work with the CADP toolset is that its command-line tools are available through the system PATH.
Setup:

    export CADP=/usr/local/cadp/ 
    export CADP_ARCH=`"$CADP"/com/arch`
    export PATH="$PATH:$CADP/bin.$CADP_ARCH:$CADP/com" 
    export CADP_LANGUAGE=english

To compare two models we can then use the sut_compare command, which internally uses the CADP toolset.

Thus we can do equivalence checking as follows:

    > sut_compare model1.xml model2.xml 0 1
    ...
    TRUE or FALSE with a trace

The equivalence check prints TRUE in case of equivalence; otherwise it prints FALSE followed by a trace showing the difference between the models.

The parameters 0 and 1 specify the range used for bisimulation checking. This range is needed when the learned Extended Finite State Machine (EFSM)
is transformed into a LOTOS specification by means of the CADP toolset. We have to limit the range of data parameters in order to check the equivalence in a limited amount of time.
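The need for a small range becomes clear when you count the concrete input actions the translated model has to consider: every Integer parameter is expanded over the chosen range. The sketch below is our own illustration of this blow-up, not how the CADP translation works internally:

```python
def concrete_input_count(parameter_arities: list[int], lo: int, hi: int) -> int:
    """Number of concrete input actions when every Integer parameter is
    expanded over the range [lo, hi]: (range width) ** arity per symbol."""
    width = hi - lo + 1
    return sum(width ** arity for arity in parameter_arities)

# BiometricPassport: 8 parameterless inputs plus IReadFile with one Integer.
arities = [0] * 8 + [1]
```

With the checking range 0..1 this gives 10 concrete inputs, while the testing range 256..511 would already give 264; for symbols with several parameters the count grows exponentially in the arity.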

Notes:

  • Other equivalences can also be checked with the CADP toolset. The CADP equivalence type 'strong' is used by default.