CSEB233: Fundamentals of Software Engineering
Software Verification and Validation
Objectives
Discuss the fundamental concepts of software verification and validation
Conduct software testing and determine when to stop
Describe several types of testing: unit testing,
integration testing,
validation testing, and
system testing
Produce standard software test documentation
Use a set of techniques for the creation of test cases that meet overall testing objectives and the testing strategies
Software Verification &
Validation
Fundamental Concepts
Verification & Validation (1)
V & V must be applied at each framework activity in the software process
Verification refers to the set of tasks that ensure that software correctly implements a specific function
Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements
Boehm states this another way:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
Verification & Validation (2)
V&V have two principal objectives:
Discover defects in a system
Assess whether or not the system is useful and useable inan operational situation
V&V should establish confidence that the software is fit for purpose
This does NOT mean completely free of defects
Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed
Verification & Validation (3)
V & V (SQA) activities include:
SQA activities:
Technical reviews
Quality and configuration audits
Performance monitoring
Simulation
Feasibility study
Documentation review
Database review
Algorithm analysis
Testing activities:
Development testing
Qualification testing
Acceptance testing
Installation testing
Software Verification &
Validation
Software Testing
Software Testing
The process of exercising a program with the specific intent of finding errors prior to delivery to the end user
Must be planned carefully to avoid wasting development time and resources, and conducted systematically
What testing shows?
Who Tests the Software? (1)
Developer
Understands system
but, will test “gently”
Driven by “delivery”
Independent Tester
Must learn about the system,
Will attempt to break it
Driven by quality
Who Tests the Software? (2)
Misconceptions:
The developer should do no testing at all
Software should be "tossed over the wall" to a stranger who will test it mercilessly
Testers are not involved with the project until it is time for it to be tested
Who Tests the Software? (3)
The developer and Independent Test Group (ITG)
must work together throughout the software project
to ensure that thorough tests will be conducted
An ITG does not have the “conflict of interest” that the
software developer might experience
While testing is conducted, the developer must be
available to correct errors that are uncovered
Testing Strategy (1)
Identifies the steps to be undertaken; when these steps are undertaken; and how much effort, time, and resources are required.
Any testing strategy must incorporate:
Test planning
Test case design
Test execution
Resultant data collection and evaluation
Should provide guidance for the practitioners and a set of milestones for the manager
Testing Strategy (2)
Characteristics of software testing strategies
proposed in the literature:
To perform effective testing, you should conduct
effective technical reviews.
By doing this, many errors will be eliminated before testing
commences.
Testing begins at the component level and works "outward" toward the integration of the entire computer-based system
Testing Strategy (3)
Different testing techniques are appropriate for different
software engineering approaches and at different
points in time.
Testing is conducted by the developer of the software
and (for large projects) an independent test group.
Testing and debugging are different activities, but
debugging must be accommodated in any testing
strategy.
Overall Software Testing Strategy
May be viewed in the context of the spiral
Begins with ‘testing-in-the-small’ and moves toward ‘testing-in-the-large’
Overall Software Testing Strategy
Unit Testing
focuses on each unit of the software (e.g., component,
module, class) as implemented in source code
Integration Testing
focuses on issues associated with verification and
program construction as components begin interacting
with one another
Overall Software Testing Strategy
Validation Testing
provides assurance that the software meets the validation criteria (established during requirements analysis), i.e. all functional, behavioral, and performance requirements
System Testing
verifies that all system elements mesh properly and that overall system function and performance have been achieved
When to Stop Testing?
Testing is potentially endless
We cannot keep testing until all the defects are unearthed and removed, which is impossible
At some point, we have to stop testing and ship the software. The question is: when?
Realistically, testing is a trade-off between budget, time, and quality
It is driven by profit models (Pan, 1999)
When to Stop Testing?
The pessimistic, and unfortunately most often used
approach is to stop testing whenever some, or any
of the allocated resources - time, budget, or test
cases - are exhausted
The optimistic stopping rule is to stop testing when
either reliability meets the requirement, or the
benefit from continuing testing cannot justify the
testing cost
Software Verification &
Validation
Types of Test
Unit Testing
Focuses on assessing:
internal processing logic and data structures within the boundaries of a component (module)
proper information flow of module interfaces
local data to ensure that integrity is maintained
boundary conditions
basis (independent) paths
all error-handling paths
If resources are too scarce for comprehensive unit testing, select only critical or complex modules and unit test these (a minimal sketch follows)
Unit Testing
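A minimal sketch of a unit test, assuming a hypothetical safeDivide() module (not from these slides); it exercises the processing logic, a boundary condition, and the error-handling path with simple assertions:

#include <cassert>
#include <stdexcept>

// Hypothetical module under test: integer division with an error-handling path
int safeDivide(int numerator, int denominator)
{
    if (denominator == 0)
        throw std::invalid_argument("division by zero");
    return numerator / denominator;
}

// Unit tests exercise internal logic, a boundary condition, and the error path
int main()
{
    assert(safeDivide(10, 2) == 5);   // typical case
    assert(safeDivide(0, 7) == 0);    // boundary condition: zero numerator
    bool errorRaised = false;
    try {
        safeDivide(1, 0);             // error-handling path
    } catch (const std::invalid_argument&) {
        errorRaised = true;
    }
    assert(errorRaised);
    return 0;
}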
Integration Testing
After unit testing of individual modules, they are combined into a system
Question commonly asked once all modules have been unit tested:
“If they work individually, why do you doubt that they’ll work when we put them together?”
The problem is “putting them together”, i.e. interfacing:
Data can be lost across an interface
Global data structures can present problems
Subfunctions, when combined, may not produce the desired function
Integration Testing
Incremental integration testing strategies:
Bottom-up integration
Top-down integration
Regression testing
Smoke testing
Bottom-up Integration
An approach where the lowest-level modules are tested first, then used to facilitate the testing of higher-level modules
The process is repeated until the module at the top of the hierarchy is tested
Top-level modules are the most important, yet they are tested last
It is helpful only when all or most of the modules of the same development level are ready
Bottom-up Integration
The steps:
Test D and E individually, using a dummy program (a ‘driver’); see the sketch after these steps
Low-level components are combined into clusters that perform a specific software function
Test C such that it calls D/E; if an error occurs, we know that the problem is in C or in the interface between C and D/E
The cluster is tested
Drivers are removed and clusters are combined, moving upward in the program structure
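A minimal sketch of the driver idea, assuming hypothetical modules C, D, and E named after the slide's example; the driver is throwaway code that exercises the cluster from the bottom up:

#include <cassert>

// Hypothetical low-level modules D and E (names follow the slide's example)
int moduleD(int x) { return x * 2; }
int moduleE(int x) { return x + 3; }

// Higher-level module C combines D and E
int moduleC(int x) { return moduleD(x) + moduleE(x); }

// The 'driver' is temporary test code that exercises the cluster from below
int main()
{
    assert(moduleD(2) == 4);   // test D in isolation
    assert(moduleE(2) == 5);   // test E in isolation
    assert(moduleC(2) == 9);   // test C; a failure here points to C or to
                               // the C-to-D/E interfaces
    return 0;
}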
Top-down Integration
The steps:
The main/top module is used as a test driver, and stubs are substituted for the modules directly subordinate to it (see the sketch after these steps).
Subordinate stubs are replaced one at a time with real modules (following a depth-first or breadth-first approach).
Tests are conducted as each module is integrated.
On completion of each set of tests, another stub is replaced with a real module.
Regression testing may be used to ensure that new errors are not introduced.
The process continues from the second step until the entire program structure is built.
Top-down Integration
Example steps:
Test A individually (use stubs for the other modules)
Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components
In a ‘depth-first’ structure: test A such that it calls B (use stubs for the other modules); if an error occurs, we know that the problem is in B or in the interface between A and B
Replace stubs one at a time, ‘depth-first’, and re-run the tests
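A minimal sketch of stub-based, top-down integration, assuming hypothetical modules A and B; passing the subordinate as a function pointer is just a convenience of this sketch, not part of the technique:

#include <cassert>

// Stub for subordinate module B: returns a fixed, known value so the
// top-level module A can be exercised before the real B exists
int moduleB_stub(int /*x*/) { return 42; }

// Real module B, integrated later (depth-first), replacing the stub
int moduleB_real(int x) { return x * x; }

// Top-level module A; the subordinate is passed in only so this sketch can
// swap the stub for the real component without editing A
int moduleA(int x, int (*subordinateB)(int))
{
    return subordinateB(x) + 1;
}

int main()
{
    // Step 1: test A against the stub; failures point to A itself
    assert(moduleA(5, moduleB_stub) == 43);
    // Step 2: replace the stub with the real B and re-run the test;
    // new failures point to B or to the A-to-B interface
    assert(moduleA(5, moduleB_real) == 26);
    return 0;
}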
Regression Testing (1)
Focuses on retesting after changes are made
Whenever software is corrected, some aspect of the software configuration is changed
e.g., the program, its documentation, or the data that
support it
Regression testing helps to ensure that changes - due
to testing or for other reasons - do not introduce
unintended behavior or additional errors
Regression Testing (2)
In traditional regression testing, we reuse the same
tests
In risk-oriented regression testing, we test the
same areas as before, but we use different
(increasingly complex) tests
Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools
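A minimal sketch of manually re-executing a stored regression suite after a change; the absoluteValue() function and its (input, expected) pairs are hypothetical:

#include <iostream>
#include <vector>

// Hypothetical function that has just been modified
int absoluteValue(int x) { return x < 0 ? -x : x; }

// A reusable regression suite: the same (input, expected) pairs are re-run
// after every change to catch unintended behaviour
struct TestCase { int input; int expected; };

int main()
{
    const std::vector<TestCase> suite = { {5, 5}, {-5, 5}, {0, 0} };
    int failures = 0;
    for (const TestCase& tc : suite) {
        if (absoluteValue(tc.input) != tc.expected) {
            std::cout << "FAIL for input " << tc.input << '\n';
            ++failures;
        }
    }
    std::cout << failures << " failure(s)\n";
    return failures == 0 ? 0 : 1;
}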
Smoke Testing (1)
A common approach for creating "daily builds" for product software
Software components that have been translated into code are integrated into a "build"
A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions
A series of tests is designed to expose errors that will keep the build from properly performing its function
Smoke Testing (2)
The intent should be to uncover “show stopper”
errors that have the highest likelihood of throwing
the software project behind schedule
The build is integrated with other builds and the
entire product (in its current form) is smoke tested
daily
The integration approach may be top down or
bottom up
Validation Testing (1)
Focuses on uncovering errors at the software
requirements level.
The SRS might contain a ‘Validation Criteria’ section that forms the basis for a validation-testing approach
Validation Testing (2)
Validation-Test Criteria:
all functional requirements are satisfied
all behavioral characteristics are achieved
all content is accurate and properly presented
all performance requirements are attained,
documentation is correct, and
usability and other requirements are met
Validation Testing (3)
An important element of the validation process is a
configuration review/audit
Ensure that all elements of the software configuration
have been properly developed, are cataloged, and
have the necessary detail to strengthen the support
activities.
Validation Testing (4)
A series of acceptance tests is conducted to enable the customer to validate all requirements
To make sure the software works correctly for the intended user in his or her normal work environment
Alpha test
Version of the complete software is tested by customer under the
supervision of the developer at the developer’s site
Beta test
Version of the complete software is tested by customer at his or
her own site without the developer being present
System Testing (1)
A series of different tests to verify that system elements have been properly integrated and perform allocated functions.
Types of system tests:
Recovery Testing
Security Testing
Stress Testing
Performance Testing
Deployment Testing
System Testing (2)
Recovery Testing forces the software to fail in a variety of ways and
verifies that recovery is properly performed
Security Testing verifies that protection mechanisms built into a system
will, in fact, protect it from improper penetration
Stress Testing executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume
System Testing (3)
Performance Testing
tests the run-time performance of software within the context of an integrated system (see the sketch after this slide)
Deployment Testing
examines all installation procedures and specialized installation software that will be used by customers, as well as all documentation that will be used to introduce the software to end users
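A minimal performance-testing sketch, assuming a hypothetical requirement that the operation under test must finish within 500 ms; std::sort over a large vector stands in for the run-time behaviour of an integrated system:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    // Build a large, reverse-ordered workload (one million elements)
    std::vector<int> data;
    for (int i = 1000000; i > 0; --i)
        data.push_back(i);

    const auto start = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end());          // operation under test
    const auto finish = std::chrono::steady_clock::now();

    const auto elapsedMs =
        std::chrono::duration_cast<std::chrono::milliseconds>(finish - start).count();
    std::cout << "sort took " << elapsedMs << " ms\n";

    // Hypothetical performance requirement: finish within 500 ms
    return elapsedMs <= 500 ? 0 : 1;
}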
Software Verification &
Validation
Software Test Documentation
Software Test Documentation (1)
IEEE 829-2008 Standard for Software Test Documentation
IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing
The documents are:
Test Plan
Test Design Specification
Test Case Specification
Test Procedure Specification
Test Item Transmittal Report
Test Log
Test Incident Report
Test Summary Report
Software Test Documentation (2)
Test Plan - A management planning document that shows:
How the testing will be done, including System Under Test (SUT) configurations
Who will do it
What will be tested
How long it will take (may vary, depending upon resource availability)
What the test coverage will be, i.e. what quality level is required
Software Test Documentation (3)
Test Design Specification:
detailing test conditions and the expected results as
well as test pass criteria.
Test Procedure Specification:
detailing how to run each test, including any set-up
preconditions and the steps that need to be followed
Software Test Documentation (4)
Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next
Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed
Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed.
Software Test Documentation (5)
Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports
The report also records what testing was done and how long it took, in order to improve any future test planning
This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by the project stakeholders
Software Verification &
Validation
Creating Test Cases
Test-case Design (1)
Focuses on a set of techniques for the creation of test cases that meet overall testing objectives and the testing strategies
These techniques provide systematic guidance for designing tests that:
Exercise the internal logic and interfaces of every software component/module
Exercise the input and output domains of the program to uncover errors in program function, behaviour, and performance
Test-case Design (2)
For conventional applications, software is tested from two perspectives:
‘White-box’ testing
Focuses on the program control structure (internal program logic)
Test cases are derived to ensure that all statements in the program have been executed at least once during testing and all logical conditions have been exercised
Performed early in the testing process
‘Black-box’ testing
Examines some fundamental aspect of a system with little regard for the internal logical structure of the software
Performed during later stages of testing
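A minimal black-box sketch: isLeapYear() is a hypothetical function, and the test cases are chosen purely from its stated specification (a year divisible by 4 is a leap year, except century years, which must be divisible by 400), with no reference to its internal structure:

#include <cassert>

// Hypothetical function under test; only its specification matters to the tests
bool isLeapYear(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main()
{
    // Test cases drawn from the input/output domains of the specification,
    // without looking at the internal logic of isLeapYear()
    assert(isLeapYear(2024) == true);   // ordinary leap year
    assert(isLeapYear(2023) == false);  // ordinary non-leap year
    assert(isLeapYear(1900) == false);  // century year, not divisible by 400
    assert(isLeapYear(2000) == true);   // century year, divisible by 400
    return 0;
}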
White-box Testing (1)
Using the white-box testing method, you may derive test cases that:
Guarantee that all independent paths within a module have been exercised at least once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries and within their
operational bounds
Exercise internal data structures to ensure their validity
Example method: basis path testing
White-box Testing (2)
Basis path testing:
Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing
Deriving Test Cases (1)
Steps to derive test cases by applying the basis path testing method:
Using the design or code, draw a corresponding flow graph. The flow graph depicts logical control flow using the notation illustrated in the next slide. (Refer to Figure 18.2 on page 486 for a comparison between a flowchart and a flow graph.)
Calculate the cyclomatic complexity V(G) of the flow graph
Determine a basis set of independent paths
Prepare test cases that will force execution of each path in the basis set
Deriving Test Cases (2)
Flow graph notation:
Sequence
IF
WHILE
UNTIL
CASE
// Example routine used to illustrate flow-graph construction
#include <cmath>
#include <iostream>
using namespace std;

void foo (float y, float *a, int n)
{
    float x = sin(y);
    float z;
    if (x > 0.01)
        z = tan(x);
    else
        z = cos(x);
    for (int i = 0; i < x; ++i)
    {
        a[i] = a[i] * z;
        cout << a[i];
    }
}
[Figure (Drawing Flow Graph: Example): the flowchart for foo() and its corresponding flow graph, with nodes numbered 1 to 8, regions R1 to R3, and the predicate nodes marked]
Deriving Test Cases (3)
The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows
Areas bounded by edges and nodes are called regions
When counting regions, we include the area outside the graph as a region
Deriving Test Cases: Example
Step 1: Draw a flow graph
Deriving Test Cases: Example
Step 2: Calculate the Cyclomatic complexity, V(G)
Cyclomatic complexity can be used to count the minimum number of independent paths.
A number of industry studies have indicated that the higher the V(G), the higher the probability of errors.
The SEI provides the following basic risk assessment based on the value of V(G):
Cyclomatic Complexity Risk Evaluation
1 to 10 A simple program, without very much risk
11 to 20 A more complex program, moderate risk
21 to 50 A complex, high-risk program
> 50 An untestable program (very high risk)
Deriving Test Cases: Example
Ways to calculate V(G):
V(G) = the number of regions of the flow graph
V(G) = E - N + 2 (where E is the number of edges and N is the number of nodes)
V(G) = P + 1 (where P is the number of predicate nodes in the flow graph, i.e. nodes that contain a condition)
Example:
V(G) = Number of regions = 4
V(G) = E – N + 2 = 14 – 12 + 2 = 4
V(G) = P + 1 = 3 + 1 = 4
Deriving Test Cases: Example 1
Step 3: Determine a basis set of independent paths
Path 1: 1, 2, 3, 4, 5, 6, 7, 8, 12
Path 2: 1, 2, 3, 12
Path 3: 1, 2, 3, 4, 5, 9, 10, 3, …
Path 4: 1, 2, 3, 4, 5, 9, 11, 3, …
Step 4: Prepare test cases
Test cases should be derived so that all of these paths are executed
A dynamic program analyser may be used to check that paths have been executed
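As a concrete sketch of step 4 applied to the foo() routine from the earlier flow-graph example (the path numbering above belongs to a different, larger flow graph), the two inputs below force both sides of the if decision and run the loop for zero and for one iteration:

#include <cassert>
#include <cmath>
#include <iostream>

// foo() repeated from the 'Drawing Flow Graph: Example' slide so that this
// sketch is self-contained
void foo(float y, float *a, int n)
{
    float x = std::sin(y);
    float z;
    if (x > 0.01f)
        z = std::tan(x);
    else
        z = std::cos(x);
    for (int i = 0; i < x; ++i) {
        a[i] = a[i] * z;
        std::cout << a[i];
    }
}

int main()
{
    // Path where x <= 0.01: the else branch runs and the loop body is
    // skipped, so the array is left untouched
    float a1[1] = {2.0f};
    foo(0.0f, a1, 1);
    assert(a1[0] == 2.0f);

    // Path where x > 0.01: the if branch runs and the loop executes exactly
    // once (only i = 0 is below sin(1.0f))
    float a2[1] = {2.0f};
    foo(1.0f, a2, 1);
    assert(std::fabs(a2[0] - 2.0f * std::tan(std::sin(1.0f))) < 1e-5f);
    return 0;
}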
Summary (1)
Software testing plays an extremely important role in V&V, but many other SQA activities are also necessary
Testing must be planned carefully to avoid wasting development time and resources, and conducted systematically
The developer and ITG must work together throughout the software project to ensure that thorough tests will be conducted
Summary (2)
The software testing strategy begins with ‘testing-in-the-small’ and moves toward ‘testing-in-the-large’
The IEEE 829-2008 standard specifies a set of documents for use in eight defined stages of software testing
The ‘white-box’ and ‘black-box’ techniques provide systematic guidance for designing test cases
We need to know the right time to stop testing
THE END
Copyright © 2013
College of Information Technology