Software Testing – objectives and principles, techniques, process, object-oriented testing, test workbenches and frameworks

Upload: nazeer-pasha

Post on 13-May-2015


Page 1: Testing

Software Testing

Objectives and principles

Techniques

Process

Object-oriented testing

Test workbenches and frameworks

Page 2: Testing

Lecture Objectives

Understand:

Software testing objectives and principles

Testing techniques – black-box and white-box

Testing process – unit and integration

Object-oriented testing

Test workbenches and frameworks

Page 3: Testing

Can We Exhaustively Test Software?

There are 250 billion unique paths between A and B. If each set of possible data is used, and a single run takes 1 millisecond to execute, it would take 8 years to test all paths.

[Figure: flowgraph from A to B, with loops of fewer than 8 cycles each]

Page 4: Testing

Can we test all types of software bugs?

Software testing is mainly suited to faults that consistently manifest themselves under well-defined conditions.

Testers do encounter failures they can't reproduce: under seemingly identical conditions, the actions that a test case specifies can sometimes, but not always, lead to a failure. Software engineers sometimes refer to faults with this property as Mandelbugs (an allusion to Benoît Mandelbrot, a leading researcher in fractal geometry)

Example: the software fault in the Patriot missile defense system responsible for the Scud incident in Dhahran

To project a target’s trajectory, the weapons control computer required its velocity and the time as real values

The system, however, kept time internally as an integer, counting tenths of seconds and storing them in a 24-bit register

The necessary conversion into a real value caused imprecision in the calculated range where a detected target was expected next

For a given velocity of the target, these inaccuracies were proportional to the length of time the system had been continuously running

Page 5: Testing

Testing Objectives

Software testing can show the presence of bugs, but it can never show their absence. Therefore, testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.

A good test case is one that has a high probability of finding an error.

A successful test is one that uncovers an error.

Page 6: Testing

Testing Principles

All tests should be traceable to customer requirements

Tests should be planned long before testing begins

The Pareto principle applies to software testing

Testing should begin "in the small" and progress toward testing "in the large"

Exhaustive testing is not possible

To be most effective, testing should be conducted by an independent third party

Page 7: Testing

Test Case Design

Testing must be planned and performed systematically…not ad hoc or random.

Testing can be performed in two ways:

1. Knowing the specified function that a product has been designed to perform – black-box testing.

2. Knowing the internal workings of the product and testing to ensure all parts are exercised adequately – white-box testing.

Page 8: Testing

Black-box Testing

An approach to testing where the program is considered as a 'black box'

The program test cases are based on the system specification

Test planning can begin early in the software process

Page 9: Testing

Equivalence Partitioning

Divide the input domain into classes of data from which test cases can be derived.

Strives to define test cases that uncover classes of errors, reducing the total number of test cases required.

Page 10: Testing

Example…

Specifications for a DBMS state that the product must handle any number of records between 1 and 16,383 (2^14 − 1).

If the system can handle 34 records and 14,870 records, then it will probably work fine for, say, 8,252 records.

If the system works for any one test case in the range (1..16,383), then it will probably work for any other test case in the range.

The range (1..16,383) constitutes an equivalence class.

Any one member of the class is as good a test case as any other member.

Page 11: Testing

…Example

The range (1..16,383) defines three different equivalence classes:

Equivalence Class 1: Fewer than 1 record

Equivalence Class 2: Between 1 and 16,383 records

Equivalence Class 3: More than 16,383 records

Page 12: Testing

Boundary Value Analysis

A technique that leads to the selection of test cases that exercise bounding values.

Selecting a test case on, or just to one side of, a boundary of an equivalence class increases the probability of detecting a fault.

"Bugs lurk in corners and congregate at boundaries"

Page 13: Testing

DBMS Example

Test case 1: 0 records – member of equivalence class 1 (and adjacent to boundary value)

Test case 2: 1 record – boundary value

Test case 3: 2 records – adjacent to boundary value

Test case 4: 723 records – member of equivalence class 2

Test case 5: 16,382 records – adjacent to boundary value

Test case 6: 16,383 records – boundary value

Test case 7: 16,384 records – member of equivalence class 3 (and adjacent to boundary value)
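The seven test cases above can be captured directly in code. A minimal sketch, where `handles_records` is a hypothetical stand-in for the DBMS capacity check (the slides specify only the 1..16,383 range, not an API):

```python
# Boundary-value test cases for the DBMS example. `handles_records`
# is a hypothetical stand-in: it accepts any record count in the
# specified range 1..16,383.

def handles_records(n):
    """True iff the system accepts n records (spec: 1..16,383)."""
    return 1 <= n <= 16_383

# (count, expected) pairs mirroring test cases 1-7 above.
cases = [
    (0, False),       # class 1, adjacent to boundary
    (1, True),        # boundary value
    (2, True),        # adjacent to boundary
    (723, True),      # interior member of class 2
    (16_382, True),   # adjacent to boundary
    (16_383, True),   # boundary value
    (16_384, False),  # class 3, adjacent to boundary
]
for count, expected in cases:
    assert handles_records(count) == expected
```

Note how only two of the seven cases (723 and one boundary) would be suggested by equivalence partitioning alone; the rest come from boundary value analysis.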

Page 14: Testing

White-box Testing

A test case design method that uses the control structure of the procedural design to derive test cases.

Can derive tests that:

Guarantee all independent paths have been exercised at least once

Exercise all logical decisions on their true and false sides

Execute all loops at their boundaries and within operational bounds

Exercise internal data structures to ensure validity

Page 15: Testing

Basis Path Testing

Proposed by Tom McCabe. Uses the cyclomatic complexity measure as a guide for defining a basis set of execution paths.

Test cases derived to exercise the basis set are guaranteed to execute every statement at least once.

Page 16: Testing

Independent Paths

CC = 5, so 5 independent paths:

1. a, c, f
2. a, d, c, f
3. a, b, e, f
4. a, b, e, a, …
5. a, b, e, b, e, …

[Figure: flowgraph with edges labelled a–f]

Page 17: Testing

The Flowgraph

Before the cyclomatic complexity can be calculated, and the paths determined, the flowgraph must be created.

This is done by translating the source code into flowgraph notation: sequence, if, while, until, case.

Page 18: Testing

Example

PROCEDURE average

INTERFACE RETURNS average, total.input, total.valid;
INTERFACE ACCEPTS value, minimum, maximum;

TYPE value[1:100] IS SCALAR ARRAY;
TYPE average, total.input, total.valid, minimum, maximum, sum IS SCALAR;
TYPE i IS INTEGER;

i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
    increment total.input by 1;
    IF value[i] >= minimum AND value[i] <= maximum
        THEN increment total.valid by 1;
             sum = sum + value[i]
        ELSE skip
    ENDIF
    increment i by 1;
ENDDO
IF total.valid > 0
    THEN average = sum / total.valid;
    ELSE average = -999;
ENDIF
END average

[In the original figure, node numbers 1–13 are overlaid on the statements above]
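For experimentation, the PDL procedure can be translated into Python. A sketch, assuming the same sentinel value (-999) and 100-input limit; the function name and the tuple return are choices of this translation, not part of the original:

```python
def average(values, minimum, maximum):
    """Translation of PROCEDURE average from the PDL above.

    Reads values until the sentinel -999 or 100 inputs, counts
    inputs and valid values, and averages the values that fall
    within [minimum, maximum]. Returns a tuple
    (average, total_input, total_valid); the average is -999 when
    no value is valid, as in the PDL.
    """
    total_input = total_valid = 0
    total = 0
    i = 0
    while i < len(values) and values[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= values[i] <= maximum:
            total_valid += 1
            total += values[i]
        i += 1
    if total_valid > 0:
        avg = total / total_valid
    else:
        avg = -999
    return avg, total_input, total_valid
```

For instance, average([10, 20, 999, -999], 0, 100) reads three inputs, finds two valid, and returns (15.0, 3, 2).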

Page 19: Testing

…Example

[Figure: flowgraph for average, with nodes numbered 1–13]

Determine the:

1. Cyclomatic complexity
2. Independent paths
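The slides leave the computation as an exercise. One mechanical route (standard for McCabe's metric, though the formula is not shown on the slides) is V(G) = E − N + 2 for a single connected flowgraph, equivalently the number of predicate nodes plus one. A sketch:

```python
# Cyclomatic complexity from a flowgraph's edge list, using
# V(G) = E - N + 2 for a single connected flowgraph.

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2, with edges given as (from, to) pairs."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Sanity check on a small if-else flowgraph: one decision,
# so V(G) should be 2.
if_else = [("start", "then"), ("start", "else"),
           ("then", "end"), ("else", "end")]
assert cyclomatic_complexity(if_else) == 2
```

Building the edge list for the average flowgraph (nodes 1–13) and feeding it to this function answers part 1 of the exercise.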

Page 20: Testing

Condition Testing

Exercises the logical conditions contained within a program module. Types of errors found include:

Boolean operator error (OR, AND, NOT)
Boolean variable error
Boolean parenthesis error
Relational operator error (>, <, =, !=, …)
Arithmetic expression error

Page 21: Testing

Loop Testing

Focuses exclusively on the validity of loop constructs. Four types of loop can be defined:

Simple
Nested
Concatenated
Unstructured

Page 22: Testing

Loop Types

[Figure: the four loop types – simple, nested, concatenated, and unstructured]

Page 23: Testing

Simple Loops

Where n is the maximum number of passes, the following tests can be applied:

Skip the loop entirely
Only one pass
2 passes
m passes (where m < n)
n−1, n, n+1 passes

Page 24: Testing

Nested Loops

If the approach for simple loops were extended, the number of possible tests would grow geometrically – impractical.

Instead:

Start at the innermost loop. Set all other loops to minimum values.

Conduct the simple loop test for the innermost loop while holding the outer loops at minimum loop counter values. Add other tests for out-of-range or excluded values.

Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops at 'typical' values.

Continue until all loops are tested.

Page 25: Testing

Concatenated Loops

Test as simple loops, provided each loop is independent.

If two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then test as nested loops.

Page 26: Testing

Unstructured Loops

Unstructured loops can't be tested effectively. They reflect very bad practice and should be redesigned.

Page 27: Testing

The Tester

Who does the testing?

a) Developer
b) Member of development team
c) SQA
d) All of the above

Page 28: Testing

Independent Test Group

Strictly speaking, testing should be performed by an independent group (SQA or a third party).

Members of the development team are inclined to be more interested in meeting the rapidly approaching due date.

The developer of the code is prone to test "gently".

Remember that the objective is to find errors, not to complete the test without finding them (because they're always there!).

Page 29: Testing

Successful Testing

The success of testing can be measured by applying a simple metric:

DRE = Errors / (Errors + Defects)

where Errors are problems found before delivery and Defects are problems found after delivery, so 0 ≤ DRE ≤ 1. As the defect removal efficiency (DRE) approaches 1, the process approaches perfection.
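As a quick illustration, with made-up numbers (not from the slides):

```python
def dre(errors_before_delivery, defects_after_delivery):
    """Defect removal efficiency: DRE = E / (E + D)."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Hypothetical project: 95 problems found before release, 5 after.
assert dre(95, 5) == 0.95
# A perfect process finds everything before delivery.
assert dre(40, 0) == 1.0
```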

Page 30: Testing

The Testing Process

Unit testing

Testing of individual program components
Often performed by the component developer
Tests often derived from the developer's experience!
Increased productivity possible with an xUnit framework

Integration testing

Testing of groups of components integrated to create a system or sub-system
The responsibility of an independent testing team
Tests are based on a system specification

Page 31: Testing

Testing Phases

[Figure: unit testing, performed by the software developer, feeds into integration testing, performed by the development team / SQA / independent test group]

Page 32: Testing

Integration Testing

Tests complete systems or subsystems composed of integrated components.

Integration testing should be black-box testing, with tests derived from the specification.

The main difficulty is localizing errors; incremental integration testing reduces this problem.

Page 33: Testing

Incremental Integration Testing

[Figure: three test sequences – sequence 1 runs tests T1–T3 on components A and B; sequence 2 adds component C and test T4; sequence 3 adds component D and test T5, with the earlier tests re-run at each stage]

Page 34: Testing

Approaches to Integration Testing

Top-down testing
Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate.

Bottom-up testing
Integrate individual components in levels until the complete system is created.

In practice, most integration involves a combination of these strategies.

Page 35: Testing

Top-down Testing

[Figure: the testing sequence starts with the level-1 components, integrating level-2 components and replacing lower levels with level-2 and level-3 stubs]

Page 36: Testing

Bottom-up Testing

[Figure: the testing sequence starts with the level-N components, exercised by test drivers, before they are integrated under the level N−1 components]

Page 37: Testing

Which is Best?

In bottom-up testing:

Test harnesses must be constructed, and this takes time.

Integration errors are found later rather than earlier.

System-level design flaws that could require major reconstruction are found last.

There is no visible, working system until the last stage, so it is harder to demonstrate progress to clients.

Page 38: Testing

Interface Testing

Takes place when modules or sub-systems are integrated to create larger systems.

Objectives are to detect faults due to interface errors or invalid assumptions about interfaces.

Particularly important for object-oriented development, as objects are defined by their interfaces.

Page 39: Testing

Interface Testing

[Figure: test cases applied to the interfaces of integrated components A, B, and C]

Page 40: Testing

Interface Types

Parameter interfaces
Data passed from one procedure to another

Shared memory interfaces
A block of memory is shared between procedures

Procedural interfaces
A sub-system encapsulates a set of procedures to be called by other sub-systems

Message passing interfaces
Sub-systems request services from other sub-systems

Page 41: Testing

Interface Errors

Interface misuse
A calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order.

Interface misunderstanding
A calling component embeds assumptions about the behaviour of the called component which are incorrect.

Timing errors
The called and the calling component operate at different speeds and out-of-date information is accessed.

Page 42: Testing

Interface Testing Guidelines

Design tests so that parameters to a called procedure are at the extreme ends of their ranges

Always test pointer parameters with null pointers

Use stress testing in message passing systems

In shared memory systems, vary the order in which components are activated

Design tests which cause the component to fail
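The first two guidelines can be sketched against a hypothetical interface `find_record(db, key)` (not from the slides; the 1..16,383 range is borrowed from the earlier DBMS example for concreteness): drive the key parameter to the extremes of its range, and pass None where a C caller might pass a null pointer.

```python
# Interface-testing sketch: extreme parameter values and the
# null-pointer analogue. `find_record` is a hypothetical interface.

def find_record(db, key):
    """Look up key in db; reject None and out-of-range keys."""
    if db is None or key is None:
        raise ValueError("null parameter")
    if not 1 <= key <= 16_383:
        raise KeyError(key)
    return db.get(key, "missing")

db = {1: "first", 16_383: "last"}

# Parameters at the extreme ends of their range.
assert find_record(db, 1) == "first"
assert find_record(db, 16_383) == "last"

# Null-pointer analogue: the interface must fail cleanly, not crash.
try:
    find_record(None, 1)
except ValueError:
    pass
```

The last guideline (design tests which cause the component to fail) is served by the out-of-range keys, which must raise rather than return garbage.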

Page 43: Testing

Stress Testing

Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light.

Stressing the system tests failure behaviour: systems should not fail catastrophically. Stress testing checks for unacceptable loss of service or data.

Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

Page 44: Testing

Object-Oriented Testing

The components to be tested are object classes that are instantiated as objects.

Larger grain than individual functions, so approaches to white-box testing have to be extended.

There is no obvious 'top' to the system for top-down integration and testing.

Page 45: Testing

Testing Levels

Test object classes
Test clusters of cooperating objects
Test the complete OO system

Page 46: Testing

Object Class Testing

Complete test coverage of a class involves:

Testing all operations associated with an object
Setting and interrogating all object attributes
Exercising the object in all possible states

Inheritance makes it more difficult to design object class tests, as the information to be tested is not localized.

Page 47: Testing

Object Integration

Levels of integration are less distinct in object-oriented systems.

Cluster testing is concerned with integrating and testing clusters of cooperating objects.

Identify clusters using knowledge of the operation of objects and of the system features that are implemented by these clusters.

Page 48: Testing

Approaches to Cluster Testing

Use-case or scenario testing
Testing is based on user interactions with the system. Has the advantage that it tests system features as experienced by users.

Thread testing
A thread consists of all the classes needed to respond to a single external input. Each class is unit tested, and then the thread set is exercised.

Object interaction testing
Tests sequences of object interactions that stop when an object operation does not call on services from another object.

Uses-based testing
Begins by testing classes that use few or no server classes. Next, classes that use the first group of classes are tested, followed by classes that use the second group, and so on.

Page 49: Testing

Scenario-Based Testing

Identify scenarios from use-cases and supplement these with interaction diagrams that show the objects involved in the scenario.

Consider the scenario in the weather station system where a report is generated.

Page 50: Testing

Collect Weather Data

[Sequence diagram: the messages request (report), acknowledge (), report (), summarise (), reply (report), acknowledge (), and send (report) exchanged between :CommsController, :WeatherStation, and :WeatherData]

Page 51: Testing

Weather Station Testing

Thread of methods executed:
CommsController:request → WeatherStation:report → WeatherData:summarize

Inputs and outputs:

Input of a report request with associated acknowledge, and a final output of a report.

Can be tested by creating raw data and ensuring that it is summarized properly.

Use the same raw data to test the WeatherData object.
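The last two bullets can be sketched as a test of the summarise step. The WeatherData class below and its (min, max, mean) summary layout are hypothetical stand-ins for the slides' design, which does not specify the data format:

```python
# Thread-test sketch for the summarise step: create raw data, run
# the summary, and check the result. WeatherData is a hypothetical
# stand-in for the weather station's class.

class WeatherData:
    def __init__(self, readings):
        self.readings = readings        # raw temperature samples

    def summarise(self):
        """Summarise raw readings as (min, max, mean)."""
        r = self.readings
        return min(r), max(r), sum(r) / len(r)

# Create raw data and ensure it is summarized properly.
raw = [10.0, 12.0, 14.0]
assert WeatherData(raw).summarise() == (10.0, 14.0, 12.0)
```

The same raw data would then be fed through the full CommsController → WeatherStation → WeatherData thread and the report compared against this known-good summary.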

Page 52: Testing

OO Testing: Myths & Reality

Inheritance means never having to say you're sorry

Reuse means never having to say you're sorry

Black-box testing is sufficient

Page 53: Testing

Implications of Inheritance

Myth:
Specializing from tested superclasses means subclasses will be correct.

Reality:
Subclasses create new ways to misuse inherited features.
Different test cases are needed for each context.
Inherited methods need to be retested, even if unchanged.

Page 54: Testing

Implications of Reuse

Myth:
Reusing a tested class means that the behavior of the server object is trustworthy.

Reality:
Every new usage provides new ways to misuse a server.
Even if many server objects of a given class function correctly, nothing prevents a new client class from using one incorrectly.
We can't automatically trust a server because it performs correctly for one client.

Page 55: Testing

Implications of Encapsulation

Myth:
White-box testing violates encapsulation; surely black-box testing (of class interfaces) is sufficient.

Reality:
Studies indicate that "thorough" black-box testing sometimes exercises only one third of the code.
Black-box testing exercises all specified behaviors, but what about unspecified behaviors? We need to examine the implementation.

Page 56: Testing

And What About Polymorphism?

Each possible binding of a polymorphic component requires a separate test… probably a separate test case!

Page 57: Testing

Testing Workbenches

Testing is an expensive process phase. Testing workbenches provide a range of tools to reduce the time required and the total testing cost.

Most testing workbenches are open systems, because testing needs are organization-specific.

They are difficult to integrate with closed design and analysis workbenches.

Page 58: Testing

A Testing Workbench

[Figure: a test manager drives the program being tested with test data (produced by a test data generator from the specification); an oracle supplies test predictions; the program runs under a dynamic analyser and simulator against the source code, producing an execution report and test results; a file comparator checks results against predictions, and a report generator produces the test results report]

Page 59: Testing

Workbench Components

Test manager: manages the running of program tests.

Test data generator: selects test data from a database or uses patterns to generate random data of the correct form.

Oracle: predicts expected results (may be a previous version or a prototype).

File comparator: compares the results of the oracle and the program, or of the program and a previous version (regression testing).

Dynamic analyzer: counts the number of times each statement is executed during a test.

Simulator: simulates the environment (target platform, user interaction, etc.).

Page 60: Testing

xUnit Framework

Developed by Kent Beck.

Makes object-oriented unit testing more accessible.

Freeware versions are available for most object-oriented languages: www.xprogramming.com/software.htm

Page 61: Testing

jUnit – “successful”

Page 62: Testing

jUnit – “unsuccessful”

Page 63: Testing

Simple Guide to Using xUnit

Subclass the TestCase class for the object under test.

Ensure the test class has scope over the object under test.

Add a test method to the test class for each method under test. An xUnit test method is an ordinary method without parameters.

Code the test case in the test method:
1. Create the objects necessary for the test (the fixture).
2. Exercise the objects in the fixture.
3. Verify the result.
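The guide above can be followed step by step in Python's unittest, the Python member of the xUnit family. The Stack class under test is a hypothetical example:

```python
# Minimal xUnit-style test following the slide's guide, using
# Python's unittest. Stack is a hypothetical class under test.
import unittest

class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):        # subclass TestCase
    def setUp(self):
        self.stack = Stack()               # (1) create the fixture

    def test_push_then_pop(self):          # one test method per method
        self.stack.push(42)                # (2) exercise the fixture
        self.assertEqual(self.stack.pop(), 42)   # (3) verify the result
```

Run with `python -m unittest`; the same three-step shape (fixture, exercise, verify) carries over to jUnit and the other xUnit ports.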

Page 64: Testing

Key Points

Exhaustive testing is not possible

Testing must be done systematically, using black-box and white-box testing techniques

Testing must be done at both unit and integration levels

Object-oriented programming offers its own challenges for testing

Testing workbenches and frameworks can help with the testing process

Page 65: Testing

References

M. Grottke and K. S. Trivedi. Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate. IEEE Computer, February 2007, pp. 107–109.

R. Pressman. Software Engineering: A Practitioner's Approach, 6th Ed. New York, NY: McGraw-Hill, 2004.

I. Sommerville. Software Engineering, 6th Ed. New York, NY: Addison-Wesley, 2000.