Conformance Test Experiments for Distributed Real-Time Systems
Rachel Cardell-Oliver
Complex Systems Group, Department of Computer Science & Software Engineering, The University of Western Australia
July 2002
Talk Overview
Research Goal: to build correct distributed real-time systems
1. Distributed Real-Time Systems
2. Correctness: Formal Methods & Testing
3. Experiments: A New Test Method
1. Distributed Real-Time Systems
Real-Time Reactions, Distributed
System Characteristics:
- React or interact with their environment
- Must respond to events within fixed time
- Distributed over two or more processors
- Fixed network topology
- Each processor runs a set of tasks
- Processors embedded in other systems
- Built with limited HW & SW resources
Testing Issues for these Systems
Many sources of non-determinism:
- 2 or more processors with independent clocks
- Set of tasks scheduled on each processor
- Independent but concurrent subsystems
- Inputs from an uncontrolled environment, e.g. people
- Limited resources affect test control, e.g. speed
Our goal: to develop robust test specification and execution methods
2. Correctness: Formal Methods & Testing
Goal: Building Correct Systems
Design (intended behaviour) vs. Implementation: does the implementation behave like this?
Software Tests are experiments designed to answer the question "does this implementation behave as intended?"
Defect tests are tests which try to force the implementation NOT to behave as intended.
Our focus is to specify and execute robust defect tests.
Related Work on Test Case Generation
- Chow, TSE 1978: deterministic Mealy FSMs
- Clarke & Lee 1997: timed requirements graphs
- Nielsen, TACAS 2000: event recording automata
- Cardell-Oliver, FAC J. 2000: Uppaal timed automata
Specific experiments are described by a test case: a timed sequence of inputs and outputs (an example follows this list).
- Non-determinism is not handled well (if at all)
- Not robust enough for our purposes
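For illustration, a test case in this timed-sequence style (our notation, not from the slides) for the valve example introduced below would pair a timed input with a deadline on the expected output:

\[ \langle\, (t = 0,\ \mathrm{input}:\ light\uparrow),\ \ (t \le 60,\ \mathrm{output}:\ valve\_open) \,\rangle \]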
3. Experiments: A New Test Method
Our Method for Defect Testing
1. Identify types of behaviour which are likely to uncover implementation defects (e.g. extreme cases)
2. Describe these behaviours using a formal specification language
3. Translate the formal test specification into a test program to run on a test driver
4. Connect the test driver to the system under test and execute the test program
5. Analyse test results (on-the-fly or off-line)
Example System to Test
Step 1 – Identify interesting behaviours
Usually extreme behaviours, such as:
- Inputs at the maximum allowable rate
- Maximum response time to events
- Timely scheduling of tasks
Example Property to Test
Whenever the light level changes from low to high, then the valve starts to open within 60cs, assuming the light level alternates between high and low every 100cs.
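Written as a bounded-response formula (a sketch in metric temporal logic; the slides state the property only in prose), with time in centiseconds:

\[ \Box\big(\, \mathit{rise}(\mathit{light}) \;\rightarrow\; \Diamond_{\le 60}\ \mathit{opening}(\mathit{valve}) \,\big) \qquad \text{assuming } \mathit{light} \text{ alternates every } 100\,\mathrm{cs} \]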
Step 2 – Choose a formal specification language
which is able to model:
- real-time clocks
- persistent data
- concurrency and communication
We use Uppaal Timed Automata (UTA).
Example UTA for timely response
[UTA diagram: a tester automaton with clock resets m:=0 on the light-change transitions]
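The figure itself is not recoverable from this transcript; the following reconstruction of the tester automaton is our sketch, built from the property above and the clock m labelled in the original figure (the state names are assumptions):

\[ \mathrm{idle} \xrightarrow{\ light\uparrow,\ m:=0\ } \mathrm{wait} \qquad \mathrm{wait} \xrightarrow{\ valve\_opening,\ m \le 60\ } \mathrm{idle} \qquad \mathrm{wait} \xrightarrow{\ m > 60\ } \mathrm{fail} \]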
Writing Robust Tests with UTA
- Test cases specify all valid test inputs: no need to test outside these bounds
- Test cases specify all expected test outputs: if an output doesn't match then it's wrong
- No need to model the implementation explicitly
- Test cases may be concurrent programs
- Test cases are executed multiple times
Step 3 – Translate Spec to Exec
- UTA specs are already program-like
- Identify test inputs and how they will be controlled by the driver
- Identify test outputs and how they will be observed by the driver
- Then straightforward translation into NQC (Not Quite C) programs
Example NQC for timely response

task dolightinput() {
  while (i <= MAXRUNS) {                 // i, MAXRUNS: globals (declarations not shown on the slide)
    Wait(100);                           // hold each light level for 100cs
    setlighthigh(OUT_C);
    setlighthigh(OUT_A);
    record(FastTimer(0), HIGH_LIGHT);    // timestamp the low-to-high change
    i++;
    Wait(100);
    setlightlow(OUT_C);
    setlightlow(OUT_A);
    record(FastTimer(0), LOW_LIGHT);     // timestamp the high-to-low change
    i++;
  } // end while
} // end task

task monitormessages() {
  while (i <= MAXRUNS) {
    monitor (EVENT_MASK(1)) {            // watch for an IR message from the SUT
      Wait(LONGINTERVAL);
    } catch (EVENT_MASK(1)) {
      record(FastTimer(0), Message());   // timestamp the observed output
      i++;
      ClearMessage();
    }
  } // end while
} // end task
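The helpers record, setlighthigh and setlightlow are used but not defined on the slides. A minimal sketch of plausible definitions, assuming results go to the RCX datalog and the light inputs are driven from tester output ports over the piggybacked wires (all three definitions are our assumptions):

// Log a (timestamp, value) pair to the datalog for off-line analysis.
#define record(t, v) { AddToDatalog(t); AddToDatalog(v); }

// Drive a piggybacked light input high/low from a tester output port.
#define setlighthigh(port) On(port)
#define setlightlow(port)  Off(port)

AddToDatalog, On and Off are standard NQC calls; the datalog must be allocated once (e.g. CreateDatalog(4 * MAXRUNS)) before the tasks run.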
Step 4 – Test driver
Step 4 – Connect tester and execute tests
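The slides show the driver hardware; its top-level program is not shown. A minimal sketch of how the two NQC tasks above might be wired together (the event setup and datalog size are assumptions):

task main() {
  SetEvent(1, 0, EVENT_TYPE_MESSAGE);  // assumed setup so EVENT_MASK(1) fires on IR messages
  ClearTimer(0);                       // common timebase, read via FastTimer(0) in 10ms ticks
  CreateDatalog(4 * MAXRUNS);          // room for (timestamp, value) pairs from both tasks
  start dolightinput;                  // drive the light inputs...
  start monitormessages;               // ...while observing IR outputs concurrently
}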
Step 5: Analyse Results
[Chart: Response Time of Valve to High Light (cs), range 0–70cs, plotted per Test Point Number, 1–129]
Scheduling Deadlines Test Results
[Chart: Task Completion Time (cs), 6–248cs, per Task Identifier: p1,s1; p1,a; p1,s2; p1,d; p2,vm; p2,cm; p2,cc]
Results 1: Observation Issues
- Things you can't see
- Probe effect
- Clock skew
- Tester speed
Things you can't see
Motor outputs can't be observed directly because of power drain, so we used IR messages to signal motor changes.
But we can observe:
- touch & light sensors, via piggybacked wires
- broadcast IR messages
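Concretely, the SUT can be instrumented to broadcast a message code whenever it changes a motor. A minimal NQC sketch (VALVE_OPENING is an assumed message code, not from the slides):

#define VALVE_OPENING 1         // assumed message code for "valve motor started"

task main() {
  OnFwd(OUT_A);                 // start opening the valve...
  SendMessage(VALVE_OPENING);   // ...and broadcast the change over IR for the tester
}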
The probe effect
We can instrument program code to observe program variables, but the time taken to record results disturbs the timing of the system under test.
Solutions:
- observe only externally visible outputs
- design for testability: allow for probe effects
Clock Skew
Clocks may differ for local results from two or more processors.
Solutions:
- use observations timed only by the tester
- including tester events gives a partial order
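Read formally (our notation, not the slides'): the tester's clock totally orders only the events the tester itself observes; purely local events on different SUT processors remain unordered unless a tester-observed event falls between them:

\[ e_i \prec e_j \iff t_{\mathrm{tester}}(e_i) < t_{\mathrm{tester}}(e_j), \qquad e_i, e_j \ \text{tester-observed} \]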
Tester speed
The tester must be sufficiently fast to observe and record all interesting events.
Beware:
- scheduling and monitoring overheads
- execution time variability
Solution: use NQC parallel tasks and off-line analysis for speed.
Results 2: Input Control Issues
- Input value control
- Input timing control
Input Values can be Controlled
- Touch sensor input (0..1): good, by piggybacked wire
- Light sensor input (0..100): OK, by piggybacked wire
- Broadcast IR messages: good, from the tester
- Also use inputs directly from the environment: natural light, or a button pushed by hand
Input Timing is Hard to Control
- Can't control input timing precisely, e.g. an input offered just before the SUT task is called
- Solution: run tests multiple times and analyse the average and spread of results (sketched below)
- Can't predict all system timings for a fully accurate model; c.f. WCET research, but our problem is harder
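For n repeated runs with measured response times r_1, ..., r_n, "average and spread" reduces to standard summary statistics (our notation, not the slides'):

\[ \bar r = \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad \mathrm{spread} = \max_i r_i - \min_i r_i \]

and a run of the timely-response test fails if any r_i exceeds the 60cs bound of the property.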
Conclusions from Experiments
- Defect testing requires active test drivers, able to control extreme inputs and observe relevant outputs
- Test generation methods must take into account the constraints of executing test cases: robust to non-determinism in the SUT; measure what can be measured
- Engineers must design for testability