Posted on 18-Jan-2016
Automated Generation of Context-Aware Tests
by
Zhimin Wang (University of Nebraska–Lincoln)
Sebastian Elbaum (University of Nebraska–Lincoln)
David S. Rosenblum (University College London)
Research funded in part by the EPSRC and the Royal Society
Background: Ubiquitous Computing Systems
• Context-Aware
– Execution driven by changes in the execution environment
– Sensed through middleware invocation of context handlers
Context is an additional input space that must be explored adequately during testing
• Adaptive
– Execution must adapt to context changes
Changes to multiple context variables may occur simultaneously
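The middleware-driven execution sketched above can be pictured with a minimal, hypothetical handler interface. This is a sketch only: the names `ContextHandler`, `BatteryHandler`, and the `batteryLevel` key are illustrative assumptions, not the Context Toolkit's actual API.

```java
import java.util.Map;

// Hypothetical middleware callback interface: the middleware senses an
// environment change and invokes the matching context handler.
interface ContextHandler {
    void handle(Map<String, String> contextValues);
}

// Example handler: adapts display behavior when the battery runs low.
class BatteryHandler implements ContextHandler {
    boolean displayUpdatesConfined = false;

    @Override
    public void handle(Map<String, String> contextValues) {
        int level = Integer.parseInt(contextValues.get("batteryLevel"));
        // Adaptive behavior: confine display updates on low power.
        displayUpdatesConfined = (level < 20);
    }
}
```

Because the middleware may invoke several such handlers when multiple context variables change at once, the interleavings of these invocations form exactly the input space the talk targets.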
An important emerging class of software systems
Example context sources: nearby device IDs, location, radio signal strength, available memory, battery level
Problem Statement: Testing Ubiquitous Systems
Discovering concurrency faults in context-aware ubiquitous systems
• Failures occur frequently during attempts to handle multiple context changes
• Existing testing techniques have limited effectiveness in discovering the underlying faults
[Figure: a device receiving simultaneous context changes, e.g. "New SMS" and "Found Wi-Fi"]
Additional Challenges during Testing
• Hard to control when and how to input contexts
– Middleware can introduce noise
– Interference can occur between context handlers
• Hard to define a precise oracle
– Execution differs under various vectors of context inputs
• Real environment is not available
– Too many sensors are required
• Sensed contexts can be inconsistent
– Example: the border between rooms at an exhibition
Contributions
1. CAPPs: Context-Aware Program Points
• A model of how context affects program execution
2. CAPPs-Based Test Adequacy Criteria
• Criteria for evaluating test suite effectiveness
• Defined in terms of sets of test drivers
• A test driver is a sequence of CAPPs to cover
3. CAPPs-Driven Test Suite Enhancement
• Automated exploration of variant interleavings of invocations of context handlers
• Schedules interleavings via special instrumentation
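The third contribution relies on instrumentation that reports control at each CAPP so a scheduler can steer interleavings of handler invocations. A minimal sketch of that idea, assuming a hypothetical `CappScheduler` class (the class and method names are illustrative, not the paper's actual instrumentation API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical scheduler: instrumented handlers call reached() at each
// CAPP; the scheduler records the order and, in a full implementation,
// could block the calling thread here to force a chosen interleaving.
class CappScheduler {
    private final List<String> observed = new ArrayList<>();

    synchronized void reached(String cappId) {
        observed.add(cappId);
        // A real implementation would wait here until the planned
        // interleaving permits this CAPP to proceed.
    }

    synchronized List<String> trace() {
        return new ArrayList<>(observed);
    }
}
```

The recorded trace is what lets the technique check which CAPP sequences (test drivers) were actually realised during execution.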
Application for Case Study: TourApp, Released with the Context Toolkit
[Figure: TourApp architecture. A visitor carrying a tagged PDA moves between a Registration Booth, Demo Rooms 1 and 2, and an Exit Room; sensors feed a Registration Widget, Demo Widgets 1 and 2, an End Widget, and an Interpreter Widget through the communication middleware, which connects the TourApp application to services and a remote data connection.]
• Location: Registration Room
• Application Response: Pop-Up Registration Form
• Location: Demo Room 1
• Application Response: Display Lecture Information
• Power: Low Battery!
• Application Response: Confine Display Updates
Overview of Testing Infrastructure
[Figure: infrastructure pipeline. Components: CAPPs Identifier, Program Instrumentor, Context Manipulator, Context Driver Generator. Inputs: Context-Aware Program (P), Test Suite (T), Selected Context Adequacy Criteria. Intermediate artifact: Annotated Flow Graph. Outputs: Test Drivers (D), Achieved Coverage, and Test Case Extension Feedback on Coverage.]
Test Adequacy Criteria: Context Adequacy (CA)
• Test driver covering at least one CAPP in each type of context handler
• Examples:
– { capp1, capp2 }
– or { capp3, capp2 }
– or { capp2, capp1 }
[Figure: control-flow graphs of two context handlers, type="power" and type="demo", with CAPPs capp1–capp6 marked at selected nodes]
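The CA check itself is simple set coverage. A sketch, assuming a driver is a list of CAPP ids and that a map from CAPP id to handler type is available (both representations are assumptions for illustration):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class ContextAdequacy {
    // True if the driver touches at least one CAPP in every handler type.
    static boolean satisfiesCA(List<String> driver,
                               Map<String, String> handlerTypeOf) {
        Set<String> coveredTypes = new HashSet<>();
        for (String capp : driver) {
            coveredTypes.add(handlerTypeOf.get(capp));
        }
        return coveredTypes.containsAll(new HashSet<>(handlerTypeOf.values()));
    }
}
```

With two handler types, any driver that visits one CAPP in each satisfies CA, which is why the slide's three short example drivers are all adequate.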
Test Adequacy Criteria: Switch-to-Context Adequacy (StoC-k)
• Set of test drivers covering all possible combinations of k switches between context handlers
• StoC-1 Example:
– { capp1, capp2 }, { capp5, capp3 }, { capp3, capp3 }, { capp5, capp5 }
Test Adequacy Criteria: Switch-with-Preempted-Capp Adequacy (StoC-k-FromCapp)
• Set of test drivers covering all possible combinations of k switches between context handlers, with each switch exercised at every CAPP
• StoC-1-FromCapp Example:
– { capp1, capp2 }, { capp3, capp2 }, { capp4, capp5 }, { capp2, capp3 }, { capp5, capp1 }, { capp6, capp3 }, { capp3, capp3 }, { capp5, capp5 }
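Because StoC-1-FromCapp multiplies each handler switch by the CAPPs at which it can be preempted, its requirement set can be enumerated mechanically. A rough sketch under an assumed encoding (the capp-to-handler grouping and the choice of a single entry CAPP per target handler are illustrative simplifications, not taken from the paper):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class StocRequirements {
    // Enumerates one requirement per (preempted CAPP, entry CAPP of the
    // handler switched to): the extra precision StoC-1-FromCapp demands
    // over StoC-1, which only distinguishes handler-to-handler switches.
    static List<String[]> stoc1FromCapp(Map<String, List<String>> cappsByType,
                                        Map<String, String> entryCappOf) {
        List<String[]> reqs = new ArrayList<>();
        for (List<String> capps : cappsByType.values()) {
            for (String from : capps) {
                for (String toType : cappsByType.keySet()) {
                    reqs.add(new String[]{from, entryCappOf.get(toType)});
                }
            }
        }
        return reqs;
    }
}
```

Under this encoding the requirement count is (number of CAPPs) × (number of handler types), which matches the slide's pattern of a larger example set than for StoC-1.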
Case Study Design and Settings: TourApp
• 11 KLOC of Java, 4 seeded faults
• Test suite of 36 end-to-end test cases
• Executing the test suite takes 10 minutes
• Studied 4 versions:
– originalTourApp: unmodified original
– manipulatedTourApp: instrumented with calls to our scheduler methods
– delayShortTourApp: instrumented with 1–3 second random delays (sleep())
– delayLongTourApp: instrumented with 1–10 second random delays (sleep())
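The two delay variants perturb timing with random sleeps at instrumented points, in the spirit of the random-sleep perturbation attributed to Edelstein et al. later in the talk. A sketch of what such an instrumented call site might do (class and method names are assumptions; the bounds mirror the 1–3 s and 1–10 s variants):

```java
import java.util.Random;

class DelayInstrumentation {
    private static final Random RNG = new Random();

    // Sleeps for a random duration in [minMillis, maxMillis), as the
    // delayShortTourApp (1000-3000 ms) and delayLongTourApp
    // (1000-10000 ms) variants do at each instrumented point.
    static long randomDelay(long minMillis, long maxMillis) {
        long delay = minMillis + (long) (RNG.nextDouble() * (maxMillis - minMillis));
        try {
            Thread.sleep(delay);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return delay;
    }
}
```

The contrast in the study is precisely between this blind perturbation and manipulatedTourApp's controlled scheduling of CAPP interleavings.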
Results: Cost (Timings with manipulatedTourApp)
• Summary of study
– Execution time increases with more demanding context coverage criteria, as more context scenarios are required
Results: Feasibility (Drivers in manipulatedTourApp)
• Summary of study
– Some test drivers were not realised in the application within the set timeouts
– Improvements in the generation of D may be needed via better flow-sensitive analysis
Results: Effectiveness (Percentage of Contextual Coverage and Fault Detection)
• Summary of study
– Coverage decreases with more powerful criteria
• But only slightly for manipulatedTourApp
– manipulatedTourApp performs the best, especially with more powerful criteria
Related Work
• Deterministic testing of concurrent programs (Carver & Tai, Taylor et al.)
– Concurrency intrinsic to the program, not the execution environment
• Metamorphic testing for context-aware applications (Tse et al.)
– Oracles embodying metamorphic properties must be defined by the tester
• Data flow coverage criteria for context-aware applications (Lu et al.)
– Does not support the notion of CAPPs or the manipulation of test executions
• Random sleeps for test perturbation (Edelstein et al.)
– Inferior to controlled scheduling of context switches
Conclusion
• Defined the CAPPs model of how context changes affect context-aware applications
• Defined test adequacy criteria in terms of this model
• Created an automated technique to guide test executions in a way that systematically explores many interesting context change scenarios
• Demonstrated the superiority of this technique in discovering concurrency faults
Future Work
• Investigate Additional Classes of Faults
– User interface ‘faults’
– Adaptation priority faults
– Memory leaks
• ContextNotifier and TestingEmulator
– Emulation infrastructure for testing
– (Tools and libraries from vendors are pathetic!)
• Simulation-Driven Testing
– Test case execution driven by mobility traces from simulation runs