CS 498DM Software Testing
Darko Marinov
January 17, 2012
Teaching Staff
• Instructor: Darko Marinov
– Email: marinov AT illinois.edu
– Office: SC 3116, 217-265-6117
– Office hours: after classes or by appointment
• No TA
Course Overview
• Introduction to software testing
– Systematic, organized approaches to testing
– Based on models and coverage criteria
– Testing is not (only) about finding “bugs”
– Improve your testing (and development) skills
• Five problem sets and a project
– Centered around testing Java PathFinder (JPF)
JPF in One Slide
• Will have an entire lecture on JPF
– You’ll get to learn a lot about Java/JVM/JPF in this class
• JPF is a tool for systematic testing of Java programs
• JPF is a Java Virtual Machine (JVM) that has support for fast backtracking
• JPF is implemented in Java itself
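As a rough preview (this sketch is my own, not JPF’s actual mechanism or API): systematic testing means exploring every outcome of a program’s nondeterministic choice points. JPF does this efficiently by backtracking inside its own JVM; the brute-force equivalent simply re-runs the program once per vector of choices:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not JPF itself): systematically explore every
// outcome of a program's boolean "choice points" by re-running the
// program once per choice vector. JPF achieves the same coverage more
// efficiently by backtracking inside its custom JVM.
public class ExploreSketch {
    // The program under test: its behavior depends on two choices.
    static String program(boolean a, boolean b) {
        if (a && b) return "both";
        if (a || b) return "one";
        return "none";
    }

    // Enumerate all 2^n choice vectors for n = 2 boolean choice points.
    public static List<String> exploreAll() {
        List<String> outcomes = new ArrayList<>();
        for (boolean a : new boolean[] {false, true})
            for (boolean b : new boolean[] {false, true})
                outcomes.add(program(a, b));
        return outcomes;
    }

    public static void main(String[] args) {
        // Exploration covers all three distinct outcomes.
        System.out.println(exploreAll());
    }
}
```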
Administrative Info
• Lectures: TR 12:30pm-1:45pm, 1103 SC
• Credit:
– 3 undergraduate hours
– 3 or 4 graduate hours (a larger project for 4)
• Prerequisites: software engineering (cs427) and programming languages (cs225, cs421) are recommended
– Consent of instructor (required if not a senior)
Grading
• Points
– Project (25%)
– Problem sets (5 * 15%)
• Grades
– A*(90%), B*(80%), C*(70%), D*(60%), F(<60%)
– For more details, see the syllabus
– The instructor may lower the point limits
Project
• Testing a part of JPF
• Deliverables
– Proposal (due in three weeks)
– Progress report (around midterm)
– Final report (by the grade submission deadline)
– Bug reports (hopefully you’ll find some bugs)
• Extra bonus points for reporting bugs to me
Collaboration
• You must individually write solutions for the problem sets
• You can collaborate on everything else (unless explicitly stated not to collaborate!)
– Discuss problem sets
– Do projects in groups, preferably of two or three students
• Testing is a social activity
– Communication matters
Course Communication
• Wiki: https://wiki.engr.illinois.edu/display/cs498dmsp12
• Mailing list cs498dm AT cs.illinois.edu
• Instructor: Darko
– Email: marinov AT illinois.edu
– Office: SC 3116, 217-265-6117
– Office hours: after classes or by appointment
Signup Sheet
• Name
• Email address ([email protected])
• Program/Year
• Interests: what would you like to learn about testing?
• Experience: what testing have you done?
Textbook
• “Introduction to Software Testing” by Paul Ammann and Jeff Offutt, Cambridge University Press, Jan. 2008
• Strongly recommended but not required
– Books should be in the bookstore already
This Lecture: Introduction to “Bugs”
• Why look for bugs?
• What are bugs?
• Where do they come from?
• How to detect them?
Some Costly “Bugs”
• NASA Mars space missions
– Priority inversion (2004)
– Different metric systems (1999)
• BMW airbag problems (1999)
– Recall of 15,000+ cars
• Ariane 5 crash (1996)
– Uncaught exception of numerical overflow
– http://www.youtube.com/watch?v=kYUrqdUyEpI
• Your own favorite examples?
Some “Bugging” Bugs
• Smaller issues that give unexpected results
• Your own favorite examples?
Economic Impact
• “The Economic Impact of Inadequate Infrastructure for Software Testing”, NIST Report, May 2002
• $59.5B annual cost of inadequate software testing infrastructure
• $22.2B annual potential cost reduction from feasible infrastructure improvements
Estimates
• Extrapolated from two studies (5% of total)
– Manufacturing: transportation equipment
– Services: financial institutions
• Number of simplifying assumptions
• “…should be considered approximations”
• What is important to you?
– Correctness, performance, functionality
Some Motivation for Testers
• An article from SD Times, a magazine for software development managers: “Improving Software Quality” by Lindsey Vereen (page 34/68)
• A slide from Debra Richardson, a professor at UC Irvine: “Analysis and Testing are Creative” (page 26/48)
Terminology
• Anomaly
• Bug
• Crash
• Defect
• Error
• Failure, fault
• Glitch
• Hang
• Incorrectness
• J...
Dynamic vs. Static
• Incorrect (observed) behavior
– Failure, fault
• Incorrect (unobserved) state
– Error, latent error
• Incorrect lines of code
– Fault, error
“Bugs” in IEEE 610.12-1990
• Fault
– Incorrect lines of code
• Error
– Faults cause incorrect (unobserved) state
• Failure
– Errors cause incorrect (observed) behavior
• Not used consistently in literature!
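The IEEE chain of fault, error, and failure can be seen in a few lines of Java (the example and its names are mine, not from the lecture):

```java
// Illustrative sketch: a fault (an incorrect line of code) produces an
// error (incorrect internal state) that may, or may not, surface as a
// failure (incorrect observed behavior).
public class FaultErrorFailure {
    // Intended behavior: return the sum of all elements.
    public static int buggySum(int[] a) {
        int sum = 0;
        for (int i = 1; i < a.length; i++)  // FAULT: should start at i = 0
            sum += a[i];                    // ERROR: sum holds a wrong value
        return sum;                         // FAILURE: caller observes it
    }

    public static void main(String[] args) {
        // Failure observed: prints 5, but the correct sum is 6.
        System.out.println(buggySum(new int[] {1, 2, 3}));
        // Latent: when a[0] == 0 the fault causes no failure (5 is correct),
        // which is why testing can miss faults even on executed code.
        System.out.println(buggySum(new int[] {0, 2, 3}));
    }
}
```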
Correctness and Quality
Common (partial) properties
– Segfaults, uncaught exceptions
– Resource leaks
– Data races, deadlocks
– Statistics based
Specific properties
– Requirements
– Specification
Traditional Waterfall Model (each phase paired with its checking activity)
– Requirements: Analysis
– Design: Checking
– Implementation: Unit Testing
– Integration: System Testing
– Maintenance: Regression Testing
We will look at general techniques, applicable in several phases of testing
Phases (1)
Requirements: specify what the software should do
– Analysis: eliminate/reduce ambiguities, inconsistencies, and incompleteness
Design: specify how the software should work
– Split software into modules, write specifications
– Checking: check conformance to requirements, using for example conformance testing
Phases (2)
Implementation: specify how the modules work
– Unit testing: test each module in isolation
Integration: specify how the modules interact
– Integration testing: test module interactions
– System testing: test the entire system
Maintenance: evolve software as requirements change
– Regression testing: test changes
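Unit testing, testing one module in isolation before it is integrated with the rest of the system, can be illustrated in miniature (the StackModule class below is a hypothetical example, not course material):

```java
// A small module and a unit test that exercises only this module,
// with no dependence on the rest of a larger system.
public class StackModule {
    private int[] data = new int[16];
    private int size = 0;

    public void push(int x) {
        if (size == data.length)                     // grow when full
            data = java.util.Arrays.copyOf(data, size * 2);
        data[size++] = x;
    }

    public int pop() {
        if (size == 0) throw new IllegalStateException("empty stack");
        return data[--size];
    }

    public int size() { return size; }

    // Unit test: checks the module's LIFO contract in isolation.
    public static void main(String[] args) {
        StackModule s = new StackModule();
        s.push(1);
        s.push(2);
        if (s.pop() != 2 || s.pop() != 1) throw new AssertionError("LIFO order");
        if (s.size() != 0) throw new AssertionError("empty after pops");
        System.out.println("unit tests pass");
    }
}
```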
Testing Effort
Reported to be >50% of development cost [e.g., Beizer 1990]
Microsoft: 75% of time spent testing
– 50% are testers, who spend all their time testing
– 50% are developers, who spend half their time testing
When to Test
The later a bug is found, the higher the cost
– Orders of magnitude increase in later phases
– Also a smaller chance of a proper fix
Old saying: test often, test early
New methodology: test-driven development
– Write tests even before writing code
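A toy test-first sketch (the max3 function and its tests are my own hypothetical example): the test is written before the code exists, fails until the implementation is written, and then serves as an executable specification.

```java
// Minimal test-driven development sketch: Step 1 is written before
// Step 2 exists; the test fails ("red") until the implementation
// makes it pass ("green").
public class TddSketch {
    // Step 2: the implementation, written only after the test below.
    public static int max3(int a, int b, int c) {
        return Math.max(a, Math.max(b, c));
    }

    // Step 1: the test, written first, acting as an executable spec.
    public static void testMax3() {
        if (max3(1, 2, 3) != 3) throw new AssertionError("max in last position");
        if (max3(3, 1, 2) != 3) throw new AssertionError("max in first position");
        if (max3(2, 2, 2) != 2) throw new AssertionError("all equal");
    }

    public static void main(String[] args) {
        testMax3();
        System.out.println("all tests pass");
    }
}
```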
Software is Complex
Malleable
Intangible
Abstract
Solves complex problems
Interacts with other software and hardware
Not continuous
Software Still Buggy
Folklore: 1-10 (residual) faults per 1000 non-blank, non-comment (nbnc) lines of code (after testing)
Consensus: total correctness is impossible to achieve for complex software
– Risk-driven finding/elimination of faults
– Focus on specific correctness properties
Approaches to Detecting Bugs
Software testing
Model checking
(Static) program analysis
…
Software Testing
Dynamic approach
Run code for some inputs, check outputs
Checks correctness for some executions
Main questions
– Test-suite adequacy (coverage criteria)
– Test-input generation
– Test oracles
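The last two questions, input generation and oracles, can be sketched in a few lines (the random generator and the sortedness oracle below are my own example, checking a partial property of a library sort):

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of two testing ingredients: generated test inputs, and an
// oracle that decides pass/fail. The oracle here checks a partial
// correctness property: the output of sorting must be ordered.
public class OracleSketch {
    // Test-input generation: a random array.
    static int[] genInput(Random rnd, int len) {
        int[] a = new int[len];
        for (int i = 0; i < len; i++) a[i] = rnd.nextInt(100);
        return a;
    }

    // Test oracle: is the output in nondecreasing order?
    static boolean sortedOracle(int[] out) {
        for (int i = 1; i < out.length; i++)
            if (out[i - 1] > out[i]) return false;
        return true;
    }

    // One generated test: input -> code under test -> oracle verdict.
    public static boolean runOne(Random rnd) {
        int[] in = genInput(rnd, 10);
        int[] out = in.clone();
        Arrays.sort(out);              // code under test (library sort here)
        return sortedOracle(out);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);   // fixed seed for repeatability
        for (int i = 0; i < 100; i++)
            if (!runOne(rnd)) throw new AssertionError("oracle failed");
        System.out.println("100 generated tests passed");
    }
}
```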
Other Testing Questions
Selection
Minimization
Prioritization
Augmentation
Evaluation
Fault Characterization
…
Testing is not (only) about finding faults!
Current Status
Testing remains the most widely used approach to finding bugs
– Validation: are we building the right system?
– Verification: are we building the system right?
Testing is gaining importance with test-first development and increased reliability needs
A lot of research on testing (part of mine too)
– This course is not about research
“Schools” of Software Testing
Bret Pettichord described four schools
– Analytic (a branch of CS/Mathematics)
– Factory (a managed process)
– Quality (a branch of quality assurance)
– Context-Driven (a branch of development)
This course focuses on artifacts, not process
Do you want a guest speaker from industry?
Topics Related to “Finding Bugs”
How to “eliminate bugs” (localize faults)?
– Debugging
How to “prevent bugs”?
– Programming language design
– Software development processes
How to “show absence of bugs”?
– Theorem proving
– Model checking, program analysis
Testing Topics to Cover
Test coverage and adequacy criteria
– Graph, logic, input domains, syntax-based
Test-input generation
Test oracles
Model-based testing
Testing software with structural inputs
Test automation
Testing in your domain of interest?
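As a tiny preview of why coverage criteria differ (the abs function below is my own hypothetical illustration): a single test can achieve full statement coverage while still missing a branch outcome, so branch coverage is a strictly stronger criterion.

```java
// One test executes every statement of abs() yet never takes the
// false branch of (x < 0); a second test is needed for branch coverage.
public class CoverageSketch {
    public static int abs(int x) {
        int r = x;
        if (x < 0)      // branch: has a true and a false outcome
            r = -x;
        return r;
    }

    public static void main(String[] args) {
        // abs(-5) alone: 100% statement coverage, but the false branch
        // of (x < 0) is never taken. Adding abs(3) covers both branches.
        System.out.println(abs(-5) + " " + abs(3));
    }
}
```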
Summary of the Introduction
Eliminate bugs to save lives and money
“Bugs” may mean faults, errors, failures
Several approaches for detection: software testing, model checking, static analysis…
Software testing is the most widely used approach for validation and verification
– We will cover systematic approaches to testing, based on coverage criteria for various models
Testing is not (only) about revealing faults