Documentation
1. Context & Methodology
Figure 1 sums up the methodology presented in this article and shows the different parts of the tool. The tool can be divided into four parts:
- An interpreter that parses the use case descriptions and builds an execution model
- A simulator that allows the user to simulate the system step by step
- A test objective generator that browses the execution model to generate test objectives satisfying a given criterion
- A mapper that maps test objectives onto executable test cases
Figure 1 - Methodology
This document is organized as follows:
- Description of the input format of the tool.
- Using the simulator.
- The available test criteria.
- Exporting test cases.
- Downloading the tool.
2. The input format of the tool
Throughout this document we use a simple example to illustrate how to use the UCTSystem tool. The example is a sub-system of a book library system. The UML use case diagram of this sub-system is given in figure 2. The idea is to use the documentation note associated with each use case to add contracts to each use case.
Figure 2 - Simple library use case diagram
On each use case, a precondition expresses the properties the system must satisfy in order to execute the use case properly. The postcondition expresses the effect of the use case execution on the system state. To write such contracts on each use case, the parameters of each use case must be identified. For instance, the AddBook use case may have two parameters: the actor performing the operation and the book to add. The contracts for each use case are then quite easy to write using those parameters in boolean expressions.

The input file is divided into three sections:
- Entities of the system. The entities are the "global variables" of the system; each has a name and a type.
- Initial state of the system, i.e. a set of predicates on the entities that are initially true.
- Use cases and their contracts.
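To make the notions of entities, initial state, and contracts concrete, here is a minimal Python sketch of the library example. The encoding (predicate tuples, `addbook_pre`/`addbook_post` names) is purely illustrative; the tool's actual file syntax is the one shown in Figure 3.

```python
# Hypothetical encoding (NOT the actual UCTSystem file syntax, which is
# shown in Figure 3) of the three input sections for the library example.

# 1. Entities: the "global variables" of the system, each with a name and a type.
entities = {"b1": "Book", "c1": "Customer"}

# 2. Initial state: a set of predicates on the entities that are initially true.
state = {("registered", "c1")}

# 3. Use cases and their contracts, written as boolean expressions
#    over the use case parameters (here, Python functions).
def addbook_pre(state, actor, book):
    # Precondition: the book is not already in the library.
    return ("inLibrary", book) not in state

def addbook_post(state, actor, book):
    # Postcondition: the book is now in the library.
    return state | {("inLibrary", book)}

assert addbook_pre(state, "c1", "b1")
state = addbook_post(state, "c1", "b1")
assert not addbook_pre(state, "c1", "b1")  # b1 is now in the library
```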
# THE LIBRARY SYSTEM
Figure 3 - Description of the library system using the UCTSystem format
BOOLEXPR -> DISJONCTION |
Figure 4 - Boolean expression grammar
3. Using the simulator
   Once the input file is correctly written, the simulator allows the user to check that the system's behaviour is correct. To ensure that the contracts correctly describe the system, the simulator mainly provides three features: step-by-step execution, checking invariant properties on the system, and finding a path to a property on the system's state.
   Figure 5 presents a screenshot of the simulator executing the library system. On the left side, the "History" box lists the use cases executed from the initial state of the system. On the right, the use cases that are executable from the current system state are detailed with their effective parameters. Finally, on the bottom, the current system state is represented in terms of valid predicates.
   Figure 6 shows the "Search path" feature. This functionality is useful to put the system in a particular state. In the example of figure 6, we want to find a sequence of use cases such that each customer has borrowed a book. The result is presented in the "History" box of the simulator.
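Conceptually, such a path search is a breadth-first traversal of the reachable system states, firing any use case whose precondition holds. A sketch of the idea (the toy state encoding and use case are illustrative, not the tool's internals):

```python
from collections import deque

def search_path(initial, use_cases, goal):
    """BFS over system states for a use case sequence reaching `goal`.

    `initial` is a frozenset of true predicates, `use_cases` maps a name
    to a (precondition, effect) pair, and `goal` is a predicate on states.
    Illustrative sketch of the "Search path" idea, not the tool's code.
    """
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal(state):
            return path  # sequence of use case names, as in the "History" box
        for name, (pre, effect) in use_cases.items():
            if pre(state):
                nxt = frozenset(effect(state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # goal state unreachable

# Toy system: one customer, one book.
use_cases = {
    "Borrow(c1,b1)": (lambda s: ("free", "b1") in s,
                      lambda s: (s - {("free", "b1")}) | {("borrowed", "c1", "b1")}),
}
path = search_path(frozenset({("free", "b1")}), use_cases,
                   lambda s: ("borrowed", "c1", "b1") in s)
print(path)  # ['Borrow(c1,b1)']
```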
   Figure 7 shows how invariant properties can be checked on the contract system. As an example, on the library system, we wanted to ensure that no customer can borrow more than one book at a time.
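Checking an invariant amounts to verifying that the property holds in every state reachable through the contracts. A sketch under the same toy encoding as above (again an illustration of the idea, not the tool's implementation):

```python
from collections import deque

def check_invariant(initial, use_cases, invariant):
    """Exhaustively explore reachable states; return a violating state or None.

    Illustrative sketch: states are frozensets of predicates, and
    `use_cases` maps names to (precondition, effect) pairs.
    """
    frontier = deque([initial])
    seen = {initial}
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state  # counter-example state
        for pre, effect in use_cases.values():
            if pre(state):
                nxt = frozenset(effect(state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return None  # invariant holds in all reachable states

# "Customer c1 never borrows more than one book at a time" on a toy system:
def at_most_one(state):
    borrowed = [p for p in state if p[0] == "borrowed" and p[1] == "c1"]
    return len(borrowed) <= 1

use_cases = {
    "Borrow(c1,b1)": (lambda s: ("borrowed", "c1", "b1") not in s,
                      lambda s: s | {("borrowed", "c1", "b1")}),
}
print(check_invariant(frozenset(), use_cases, at_most_one))  # None
```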
   The UCTSystem tool also allows exporting the graph of the system's states and actions in the "dot" format, using the "Exhaustive graph" button.
Figure 5 - Screenshot of the simulator
Figure 6 - The "Search path" functionality
Figure 7 - Checking invariant properties on the system
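The "dot" format mentioned above is the standard Graphviz graph description language, so writing such a state/action graph is straightforward. A sketch (the state labels and edge triples are illustrative):

```python
def to_dot(edges):
    """Render (source_state, use_case, target_state) triples as a
    Graphviz "dot" digraph, in the spirit of the "Exhaustive graph"
    export (how the tool labels states is an assumption here)."""
    lines = ["digraph states {"]
    for src, label, dst in edges:
        lines.append(f'  "{src}" -> "{dst}" [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

# Two transitions of a toy exhaustive graph:
edges = [("init", "AddBook(c1,b1)", "s1"),
         ("s1", "Borrow(c1,b1)", "s2")]
print(to_dot(edges))
```

The resulting text can be fed directly to the Graphviz `dot` command to render the graph.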
4. Test criteria
   From the execution model, i.e. all valid sequences of use cases, the idea is to select a set of valid sequences as test objectives. As those test objectives correspond to functional behaviours of the system under test, they lead to functional testing of the system. On the other hand, building test objectives that contain an invalid action with regard to the execution model leads to robustness testing (i.e. testing the reaction of the system to an invalid action). In both the robustness and the functional case, the main problem is to find a criterion that leads to a good behavioural coverage of the system and a reasonable number of test objectives. Indeed, for functional testing, covering all paths in the execution model would not be realistic given the number of test objectives. To deal with this, in this article we studied several criteria for functional testing and one for robustness testing. Figure 8 presents a screenshot of the UCTSystem test generation interface.
Figure 8 - The test objectives generation interface
   In the following we detail the criteria implemented by the tool. The first two are classical structural criteria for graph coverage, the next two are adapted to our particular problem, and the last two are original ones that use both the execution model graph and the contracts themselves.
- All edges: cover at least once each edge of the execution model graph. This criterion obviously covers the system behaviours well but leads to very large test sets.
- All vertices: cover at least once each vertex of the execution model graph. This criterion leads to large test suites and does not ensure a high behavioural coverage of the system.
- AllInstancesOfUseCase: consists in covering at least once each use case with all its possible effective parameters. This criterion was shown to be efficient in our studies.
- AllVerticesAndAllUCInstances: this is simply the union of the two criteria presented previously. In our studies it was no more efficient than AllInstancesOfUseCase, but it led to larger test sets.
- AllPrecondition: this criterion consists in executing each use case while validating its precondition in all possible "ways". For instance, if the precondition of a use case U is a or b, three "ways" must be explored: executing U with a and not b, executing U with not a and b, and finally executing U with a and b. In practice, an algorithm computes all those expressions from the use cases' preconditions and then searches for a path in the execution model satisfying each of them. This criterion ensures good behavioural coverage and generates quite small test sets.
- Robustness: the robustness criterion is based on the same idea as the AllPrecondition one, but it leads to executing each use case wrongly by violating its precondition in all possible "ways".
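The "ways" of the AllPrecondition criterion correspond to the satisfying truth assignments of a precondition over its atoms; the Robustness "ways" are the assignments satisfying its negation. A sketch of their enumeration on the a or b example from the text (a brute-force illustration, not the tool's algorithm):

```python
from itertools import product

def ways(atoms, pred):
    """All truth assignments over `atoms` that satisfy `pred`.

    Brute-force sketch of the "ways" behind the AllPrecondition
    criterion; applying it to the negated precondition would yield
    the Robustness "ways" instead.
    """
    return [dict(zip(atoms, values))
            for values in product([True, False], repeat=len(atoms))
            if pred(dict(zip(atoms, values)))]

# Precondition "a or b": the three "ways" mentioned in the text.
result = ways(["a", "b"], lambda env: env["a"] or env["b"])
print(result)
# [{'a': True, 'b': True}, {'a': True, 'b': False}, {'a': False, 'b': True}]
```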