
Fall 2010 - Software Testing and Quality Assurance 1

The Ariane 5 Rocket Disaster

On June 4, 1996 an unmanned Ariane 5 rocket launched by the European Space Agency exploded just forty seconds after its liftoff from Kourou, French Guiana. The rocket was on its first voyage, after a decade of development costing $7 billion. The destroyed rocket and its cargo were valued at $500 million. http://ima.umn.edu/~arnold/disasters/ariane.html

Fall 2010 - Software Testing and Quality Assurance 2

The Ariane 5 Rocket Disaster

Failure due to an unhandled floating-point exception: a 64-bit floating-point number was converted to a 16-bit signed integer whose range it exceeded (a small sketch of this kind of conversion fault follows below).

Cost:
- Millions of dollars for the loss of the mission
- A multiyear setback to the Ariane program
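The fault class is easy to illustrate: a 64-bit floating-point value whose magnitude exceeds the 16-bit signed integer range (-32,768 to 32,767) cannot be converted safely. A minimal Python sketch of an unguarded versus a guarded conversion (the value 65,535.0 is an arbitrary illustration, not the actual Ariane 5 flight value):

INT16_MIN, INT16_MAX = -32768, 32767

def unguarded_convert(x: float) -> int:
    # Mimic a blind cast to a 16-bit signed integer: keep the low 16 bits
    # and reinterpret them as a two's-complement value.
    v = int(x) & 0xFFFF
    return v - 0x10000 if v > 0x7FFF else v

def guarded_convert(x: float) -> int:
    # Defensive version: reject values the target type cannot represent.
    if not (INT16_MIN <= x <= INT16_MAX):
        raise OverflowError(f"{x} does not fit in a 16-bit signed integer")
    return int(x)

value = 65535.0                      # hypothetical out-of-range sensor value
print(unguarded_convert(value))      # -1: silently wrong result
try:
    guarded_convert(value)
except OverflowError as exc:
    print("conversion rejected:", exc)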

Fall 2010 - Software Testing and Quality Assurance 3

Software Failure Rate (Ideal)

[Figure: idealized software failure rate curve, plotting failure rate versus time.]

Fall 2010 - Software Testing and Quality Assurance 4

Software Failure Rate (real)

[Figure: software failure rate versus time, showing the idealized curve and the actual curve; each change produces an increased failure rate due to side effects.]

Fall 2010 - Software Testing and Quality Assurance 5

Course Learning Outcomes

- Various test processes and continuous quality improvement
- Types of errors and fault models
- Methods of test generation from requirements
- Test generation from FSM models
- Combinatorial test generation
- Test adequacy assessment using control flow, data flow, and program mutations

Test generation, selection, prioritization, and assessment lie at the foundation of all technical activities involved in software testing.

Software Testing & Quality Assurance

Software Testing: Basic Terminology, Concepts, Definitions, Goals

Fall 2010 - Software Testing and Quality Assurance 7

Chapter Learning Objectives

- Errors and testing, software quality
- Requirements
- Testing, debugging, verification
- Test process
- Test generation strategies
- Static testing
- Control-flow graph
- Types of testing

Fall 2010 - Software Testing and Quality Assurance 8

Error, Fault, Failure, and Defect

Fall 2010 - Software Testing and Quality Assurance 9

Errors

Errors are a part of our daily life.

They occur wherever humans are involved in making decisions and taking actions.

Humans make errors in their thoughts, actions, and in the products that might result from their actions.

These fundamental facts of human existence make testing an essential activity.

Fall 2010 - Software Testing and Quality Assurance 10

Errors: Examples

Examples of errors, by area:

- Hearing. Spoken: "He has a garage for repairing foreign cars." Heard: "He has a garage for repairing falling cars."
- Medicine. Incorrect antibiotic prescribed.
- Observation/Analysis. Incorrect LBW decision by the umpire in a cricket match.
- Software. Operator used: ≠, correct operator: >. Expression used: a ∩ (b ∪ c), correct expression: (a ∩ b) ∪ c.
- Speech. Spoken: "We need a new refrigerator." Intent: "We need a new washing machine."
- Writing. Written: "He is principle at high school." Intent: "He is principal at high school."

Fall 2010 - Software Testing and Quality Assurance 11

Error, Fault, Failure

IEEE defines the following terms related to defects:

- Failure: the inability of a system or component to perform its required functions within specified performance requirements; a behavioral deviation from the user requirement.
- Fault: an incorrect step, process, or data definition in a computer program; the underlying condition within the software that causes certain failure(s) to occur.
- Error: a human action that produces an incorrect result; a missing or incorrect human action resulting in certain fault(s) being injected into the software. Error sources include conceptual models, misconceptions, misunderstandings, etc.

Fall 2010 - Software Testing and Quality Assurance 12

Error, Fault, Failure

Failures, faults, and errors are collectively referred to as defects.

[Figure: errors lead to faults, which in turn lead to failures.]

Fall 2010 - Software Testing and Quality Assurance 13

Software Quality

Fall 2010 - Software Testing and Quality Assurance 14

Software Quality

Static quality attributes: structured, maintainable, testable code as well as the availability of correct and complete documentation.

Dynamic quality attributes: software reliability, correctness, completeness, consistency, usability, and performance

Fall 2010 - Software Testing and Quality Assurance 15

Dynamic Quality Attributes (contd.)

Completeness refers to the availability of all features listed in the requirements or in the user manual. Incomplete software is software that does not fully implement all required features.

Consistency refers to adherence to a common set of conventions and assumptions. For example, all buttons in the user interface might follow a common color-coding convention. An example of inconsistency would be a database application that displays the date of birth of a person in different formats in different parts of the application.

Fall 2010 - Software Testing and Quality Assurance 16

Dynamic Quality Attributes

Usability refers to the ease with which an application can be used. This is an area in itself and there exist techniques for usability testing. Psychology plays an important role in the design of techniques for usability testing.

Performance refers to the time the application takes to perform a requested task. It is considered a non-functional requirement. It is specified in terms such as "This task must be performed at the rate of X units of activity in one second on a machine running at speed Y, having Z gigabytes of memory."

Fall 2010 - Software Testing and Quality Assurance 17

Software Reliability

Two definitions:

Software Reliability [ANSI/IEEE Std 729-1983]: is the probability of failure free operation of software over a given time interval and under given conditions.

Software Reliability is the probability of failure free operation of software in its intended environment.

Fall 2010 - Software Testing and Quality Assurance 18

Requirements, Input Domain, Behavior,

and Correctness

Fall 2010 - Software Testing and Quality Assurance 19

Requirements

Requirements leading to two different programs:

Requirement 1: It is required to write a program that inputs two integers and outputs the maximum of these.

Requirement 2: It is required to write a program that inputs a sequence of integers and outputs the sorted version of this sequence.

Fall 2010 - Software Testing and Quality Assurance 20

Requirements: Incompleteness

Suppose that program max is developed to satisfy Requirement 1. The expected output of max when the input integers are 13 and 19 can be easily determined to be 19.

Suppose now that the tester wants to know if the two integers are to be input to the program on one line followed by a carriage return, or on two separate lines with a carriage return typed in after each number. The requirement as stated above fails to provide an answer to this question.

Fall 2010 - Software Testing and Quality Assurance 21

Requirements: Ambiguity

Requirement 2 is ambiguous. It is not clear whether the input sequence is to be sorted in ascending or in descending order. The behavior of the sort program, written to satisfy this requirement, will depend on the decision taken by the programmer while writing sort.

Fall 2010 - Software Testing and Quality Assurance 22

Input domain (Input space)

The set of all possible inputs to a program P is known as the input domain or input space, of P.

Using Requirement 1 above, we find the input domain of max to be the set of all pairs of integers, where each element of a pair is an integer in the range -32,768 to 32,767.

Using Requirement 2 it is not possible to find the input domain for the sort program. Why?

Fall 2010 - Software Testing and Quality Assurance 23

Input domain

Modified Requirement 2: It is required to write a program that inputs a sequence of integers and outputs the integers in the sequence sorted in either ascending or descending order. The order of the output sequence is determined by an input request character which should be “A” when an ascending sequence is desired, and “D” otherwise. While providing input to the program, the request character is input first followed by the sequence of integers to be sorted; the sequence is terminated with a period.

Fall 2010 - Software Testing and Quality Assurance 24

Input domain (Continued)

Based on the above modified requirement, the input domain for sort is a set of pairs. The first element of the pair is a character. The second element of the pair is a sequence of zero or more integers ending with a period.

Examples: < "A" -3 15 12 55 . >, < "D" 23 78 . >, < "A" . >
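A minimal Python sketch of how the modified requirement might be parsed and executed; the function name parse_and_sort and the whitespace-separated token format are assumptions made for illustration only:

def parse_and_sort(line: str) -> list[int]:
    # Sketch: parse '<request char> <integers...> .' and return the sorted sequence.
    tokens = line.split()
    if not tokens or tokens[-1] != ".":
        raise ValueError("input sequence must be terminated with a period")
    request, numbers = tokens[0], tokens[1:-1]
    if request not in ("A", "D"):
        # The requirement does not say what to do here; rejecting is one defensible choice.
        raise ValueError(f"invalid request character: {request!r}")
    values = [int(n) for n in numbers]
    return sorted(values, reverse=(request == "D"))

print(parse_and_sort("A -3 15 12 55 ."))   # [-3, 12, 15, 55]
print(parse_and_sort("D 23 78 ."))         # [78, 23]
print(parse_and_sort("A ."))               # []

Note that the sketch rejects request characters other than "A" and "D", a decision the requirement itself leaves open (see the next slide).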

Fall 2010 - Software Testing and Quality Assurance 25

Valid And Invalid Inputs

The modified requirement for sort mentions that the request character can be "A" or "D", but fails to answer the question "What if the user types a different character?"

When using sort, it is certainly possible for the user to type a character other than "A" or "D". Any character other than "A" and "D" is considered an invalid input to sort. The requirement for sort does not specify what action to take when an invalid input is encountered.

Fall 2010 - Software Testing and Quality Assurance 26

Correctness Vs. Reliability

Fall 2010 - Software Testing and Quality Assurance 27

Correctness and Testing

Though correctness of a program is desirable, it is almost never the objective of testing.

To establish correctness via testing would imply testing a program on all elements in the input domain. In most cases that are encountered in practice, this is impossible to accomplish.

Thus correctness is established via mathematical proofs of programs.

Fall 2010 - Software Testing and Quality Assurance 28

Correctness and Testing

While correctness attempts to establish that the program is error free, testing attempts to find if there are any errors in it.

Thus completeness of testing does not necessarily demonstrate that a program is error free.

Testing, debugging, and the error removal processes together increase our confidence in the correct functioning of the program under test.

Ex. 1.8

Fall 2010 - Software Testing and Quality Assurance 29

Reliability and Testing

While measuring correctness, a program can be either correct (1) or incorrect (0).

The probability of a program failure is captured more formally in the term reliability. “the reliability of a program P is the probability of its successful execution on a randomly selected element from its input domain”.

Reliability can be anywhere between 0 and 1.
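Under this definition, reliability can be estimated by executing the program on randomly selected elements of its input domain and counting successful runs. A rough Python sketch, assuming the built-in max serves as the oracle and max_under_test is a deliberately faulty illustrative program:

import random

def max_under_test(x: int, y: int) -> int:
    # Deliberately faulty: wrong answer whenever the two inputs are equal.
    return x if x > y else (y if x != y else x - 1)

def estimate_reliability(trials: int = 100_000) -> float:
    # Fraction of randomly selected inputs on which the program behaves correctly.
    successes = 0
    for _ in range(trials):
        x = random.randint(-32768, 32767)
        y = random.randint(-32768, 32767)
        if max_under_test(x, y) == max(x, y):   # built-in max serves as the oracle
            successes += 1
    return successes / trials

print(estimate_reliability())   # close to 1.0, since equal pairs are rare

The estimate is close to 1.0 because the fault is triggered only when the two inputs are equal, which is rare under uniform random selection; the operational profile discussed next makes the selection distribution explicit.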

Example 1.9:

Fall 2010 - Software Testing and Quality Assurance 30

Operational Profile

An operational profile is a numerical description of how a program is used in the field.

Consider a sort program which, on any given execution, allows either of two types of input sequences. Sample operational profiles for sort follow.
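A hedged sketch of how an operational profile might be represented and combined with per-class reliabilities; the two input classes, their probabilities, and the reliability numbers below are illustrative assumptions, not the textbook's figures:

# Hypothetical operational profile for sort: probability of each input type in the field.
operational_profile = {
    "ascending_request": 0.8,   # user asks for ascending order
    "descending_request": 0.2,  # user asks for descending order
}

# Hypothetical per-class reliabilities measured by testing each class separately.
reliability_per_class = {
    "ascending_request": 0.999,
    "descending_request": 0.95,
}

# Overall reliability weighted by how often each input class actually occurs.
overall = sum(p * reliability_per_class[c] for c, p in operational_profile.items())
print(f"profile-weighted reliability: {overall:.4f}")   # 0.8*0.999 + 0.2*0.95 = 0.9892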

Fall 2010 - Software Testing and Quality Assurance 31

Testing and Debugging

Fall 2010 - Software Testing and Quality Assurance 32

Testing and debugging

Testing is the process of determining if a program has any errors. Most important and commonly performed QA activity.

When testing reveals an error, the process used to determine the cause of this error and to remove it, is known as debugging.

Testing and debugging are often used as two related activities in a cyclic manner.

Fall 2010 - Software Testing and Quality Assurance 33

A Test/Debug Cycle

Fall 2010 - Software Testing and Quality Assurance 34

Test plan

A test cycle is often guided by a test plan.

Example (a sample test plan): The sort program is to be tested to meet the requirements given earlier. Specifically, the following needs to be done:
- Execute sort on at least two input sequences, one with "A" and the other with "D" as request characters.
- Execute the program on an empty input sequence.
- Test the program for robustness against erroneous inputs, such as "R" typed in as the request character.
- All failures of the test program should be recorded in a suitable file using the Company Failure Report Form.

Fall 2010 - Software Testing and Quality Assurance 35

Test Case/Data

A test case is a pair consisting of test data to be input to the program and the expected output. The test data is a set of values, one for each input variable.

Test cases are derived in response to items in the test plan.

A test set (or test suite) is a collection of zero or more test cases.

Sample test case for sort:

Test data: < "A" 12 -29 32 >
Expected output: -29 12 32
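A small Python sketch of representing such test cases as (test data, expected output) pairs and executing them; sort_program is an illustrative stand-in for the program under test:

def sort_program(request: str, values: list[int]) -> list[int]:
    # Stand-in for the program under test (an assumption for illustration).
    return sorted(values, reverse=(request == "D"))

# Each test case pairs test data with the expected output.
test_set = [
    (("A", [12, -29, 32]), [-29, 12, 32]),
    (("D", [23, 78]), [78, 23]),
    (("A", []), []),
]

for (request, values), expected in test_set:
    actual = sort_program(request, values)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{verdict}: sort({request!r}, {values}) = {actual}, expected {expected}")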

Fall 2010 - Software Testing and Quality Assurance 36

Program Behavior

The basic idea of testing involves the execution of the software and the observation of its behavior.

“A sequence of program states is termed as program behavior.”

A state vector/diagram can be used to specify program behavior.

In case of failure, the execution record is analyzed to locate and fix the fault(s) (debugging); otherwise, confidence grows that the software is likely to fulfill its designated functions.

Fall 2010 - Software Testing and Quality Assurance 37

Program Behavior: Example 1

1 integer x, y, z;

2 input(x, y);

3 if(x<y)

4 {z=y;}

5 else

6 {z=x;}

7 endif

8 output(z);

9 end

Program P1.1

[2 u u u] → [3 3 15 u] → [4 3 15 u] → [7 3 15 15] → [8 3 15 15] → [9 3 15 15]

One possible sequence of state vectors (next statement to execute, x, y, z) that the max program may go through on input (3, 15); 'u' denotes an undefined value.
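A Python sketch of the same idea: Program P1.1 is re-expressed as a function instrumented by hand to record a state vector (next line to execute, x, y, z) at each step, so the returned list is one possible program behavior; 'u' stands for an undefined value, and the endif line is not recorded separately in this sketch.

def traced_max(a: int, b: int) -> list[tuple]:
    # Hand-instrumented version of Program P1.1 that records its state vectors.
    states = []
    x = y = z = "u"                      # all variables undefined before input
    states.append((2, x, y, z))          # about to execute: input(x, y)
    x, y = a, b
    states.append((3, x, y, z))          # about to execute: if (x < y)
    if x < y:
        states.append((4, x, y, z))      # then-branch: z = y
        z = y
    else:
        states.append((6, x, y, z))      # else-branch: z = x
        z = x
    states.append((8, x, y, z))          # about to execute: output(z)
    states.append((9, x, y, z))          # end
    return states

for state in traced_max(3, 15):
    print(state)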

Fall 2010 - Software Testing and Quality Assurance 38

Program Behavior: Example 2

Example: consider a menu-driven application.

Fall 2010 - Software Testing and Quality Assurance 39

Program Behavior: State Diagram

A state diagram specifies program states and how the program changes its state on an input sequence.
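A state diagram can be captured as a transition table and driven by an input sequence. A minimal Python sketch for a hypothetical menu-driven application (the state names and inputs are illustrative assumptions):

# Transition table: (current state, input) -> next state.
transitions = {
    ("MainMenu", "open_search"): "SearchScreen",
    ("MainMenu", "open_entry"): "DataEntryScreen",
    ("SearchScreen", "back"): "MainMenu",
    ("DataEntryScreen", "save"): "MainMenu",
}

def run(state: str, inputs: list[str]) -> list[str]:
    # Apply an input sequence and return the sequence of states visited.
    visited = [state]
    for symbol in inputs:
        state = transitions.get((state, symbol), state)  # unknown input: stay put (an assumption)
        visited.append(state)
    return visited

print(run("MainMenu", ["open_search", "back", "open_entry", "save"]))
# ['MainMenu', 'SearchScreen', 'MainMenu', 'DataEntryScreen', 'MainMenu']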

Fall 2010 - Software Testing and Quality Assurance 40

Assessing the Correctness

Assessing the correctness of program behavior involves two steps:

- In the first step, one observes the behavior.
- In the second step, one analyzes the observed behavior to check whether it is correct or not.

Both these steps could be quite complex for large commercial programs.

The entity that performs the task of checking the correctness of the observed behavior is known as an oracle.

Fall 2010 - Software Testing and Quality Assurance 41

Oracle

An oracle is a mechanism used by software testers and software engineers to determine whether a test has passed or failed.

A tester can assume the role of an oracle, i.e., act as a human oracle.

Fall 2010 - Software Testing and Quality Assurance 42

Oracle: Programs

Oracles can also be programs designed to check the behavior of other programs.

For example, one might use a matrix multiplication program to check whether a matrix inversion program has produced the correct output. In this case, the matrix inversion program inverts a given matrix A and generates B as the output matrix; the multiplication program then computes A times B, which should be (close to) the identity matrix if B is correct.
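A sketch of such an oracle using NumPy: numpy.linalg.inv stands in for the matrix inversion program under test, and matrix multiplication is used to check that A times B is close to the identity matrix; the tolerance value is an assumption.

import numpy as np

def inversion_oracle(A: np.ndarray, B: np.ndarray, tol: float = 1e-9) -> bool:
    # Return True if B looks like a correct inverse of A, judged by multiplication.
    return np.allclose(A @ B, np.eye(A.shape[0]), atol=tol)

A = np.array([[4.0, 7.0], [2.0, 6.0]])
B = np.linalg.inv(A)                 # stand-in for the matrix inversion program under test
print(inversion_oracle(A, B))        # True: A @ B is (numerically) the identity
print(inversion_oracle(A, B + 0.1))  # False: a perturbed "output" is rejected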

Fall 2010 - Software Testing and Quality Assurance 43

Oracle Construction

Construction of automated oracles, such as one to check a matrix multiplication program or a sort program, requires the determination of the input-output relationship.

In general, the construction of automated oracles is a complex undertaking.
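For the sort program mentioned above, the input-output relationship an oracle must check is twofold: the output must be ordered, and it must be a permutation of the input. A small Python sketch (the function name sort_oracle is illustrative):

from collections import Counter

def sort_oracle(input_values: list[int], output_values: list[int], ascending: bool = True) -> bool:
    # Check the two properties that define a correct sort: ordering and permutation.
    ordered = all(
        (a <= b if ascending else a >= b)
        for a, b in zip(output_values, output_values[1:])
    )
    same_elements = Counter(input_values) == Counter(output_values)  # permutation check
    return ordered and same_elements

print(sort_oracle([12, -29, 32], [-29, 12, 32]))   # True
print(sort_oracle([12, -29, 32], [-29, 12]))       # False: an element was dropped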

Fall 2010 - Software Testing and Quality Assurance 44

Oracle Construction: Example

Program HVideo provides data entry and search functionality for keeping track of home videos.

Fall 2010 - Software Testing and Quality Assurance 45

Testing: Definitions, Statements, Goals

Fall 2010 - Software Testing and Quality Assurance 46

Definitions

“Testing is the process of establishing confidence that a program or system does what it is supposed to.” (Hetzel,1973)

“Testing is demonstrating that a system is fit for purpose.” (Evans et al. 1996)

“Testing is the process of executing a program or system with the intent of finding errors.” (Myers, 1979)

“Testing is the process consisting of all life cycle activities concerned with checking software and software-related work products.” (Gelperin and Hetzel, 1988)

Fall 2010 - Software Testing and Quality Assurance 47

Statements

“Program testing can be used to show the presence of bugs, but never to show their absence!” (Dijkstra 1969)

"In most cases, 'what you test' in a system is much more important than 'how much you test'." (Craig 2002)

"Prioritize tests so that, whenever you stop testing, you have done the best testing in the time available." (ISEB testing foundation course material 2003)

Fall 2010 - Software Testing and Quality Assurance 48

How many testers does it take to change a light bulb?

None. Testers just noticed that the room was dark. Testers don't fix the problems; they just find them.

Fall 2010 - Software Testing and Quality Assurance 49

Goals

The goal of testing is to establish the ground for the acceptance of the software by the customer, based on the specification, through:
- High test coverage
- A low number of non-critical defects
- Statements concerning software quality

Testing goals can vary. Define your mission.

Fall 2010 - Software Testing and Quality Assurance 50

High Test Coverage

- Completeness: Ensure all requirements are implemented. The total scope must be tested at least once (and, of course, hopefully successfully).
- Critical scope: Ensure that critical requirements are implemented and work fine. All high-priority requirements must be tested successfully.

Fall 2010 - Software Testing and Quality Assurance 51

Low Number of Non-Critical Defects

At the end, the final version of the application:
- should have no critical defects any more
- should have only a small number of tolerable defects

The demand on testing is therefore to detect as many critical defects as early as possible. The idea is to fix them during the testing phase.

The acceptance criteria should determine what the customer expects. A contract should contain acceptance criteria concerning the severity level and priority level of defects.

Fall 2010 - Software Testing and Quality Assurance 52

Statements Concerning Software Quality

- Is it possible to install the software?
- Is it possible to operate the software? Is it compatible?
- Does the software fulfil the expected functionality?
- Do the interfaces work?
- Is it possible to use the software optimally (software ergonomics, usability, end-user needs)?
- Does the software run steadily, with high performance?
- Does the software fulfil special cultural requirements (multilingualism, English/metric system, weight units)?
- Is the software safe/secure?

Testing Key Questions

Key Questions?

How to test? This is refined into three sets of questions:
- basic questions
- testing technique questions
- activity/management questions

Basic Questions?

- What artifacts are tested? Design, code, database system, etc.
- What to test? The software's expected behavior, or the correct implementation of the software (e.g., internal units, structures, and relations among them)?
- When to stop testing? A coverage-based criterion to ensure detection of certain types of faults, or a usage-based criterion to achieve a certain level of software reliability?

Testing Technique Questions?

Testing technique questions (based on what-to-test and stopping criteria):
- Which specific technique is used?
- Which systematic models are used? (See the model questions below.)
- How are techniques integrated for efficiency/effectiveness?

Testing model questions:
- What is the underlying structure of the model? Basic type: partitions vs. FSMs?
- How are these models used: directly or through extensions?
- Model extensions: the boundary-value testing technique extends the partition idea; Control Flow Testing (CFT) and Data Flow Testing (DFT) extend FSMs.

Management Questions?

- Who performs which specific activities?
- When can specific activities be performed?
- Test automation? What about tools?
- General environment for testing?
- Product type/segment?

Fall 2010 - Software Testing and Quality Assurance 58

Test Generation Strategies

Fall 2010 - Software Testing and Quality Assurance 59

Aligning test mission, strategies, and tactics

Fall 2010 - Software Testing and Quality Assurance 60

Test Generation Source

Any form of test generation uses a source document. In the most informal of test methods, the source document resides in the mind of the tester, who generates tests based on a knowledge of the requirements.

In most commercial environments, the process is a bit more formal. The tests are generated using a mix of formal and informal methods, with the requirements document serving directly as the source. In more advanced test processes, requirements serve as a source for the development of formal models.

One often uses more than one notation to express all requirements and generate tests.

Fall 2010 - Software Testing and Quality Assurance 61

Test Generation Source

Model based: require that a subset of the requirements be modeled using a formal notation (usually graphical). Models: Finite State Machines, Timed automata, Petri net, etc.

Specification based: require that a subset of the requirements be modeled using a formal mathematical notation. Examples: B, Z, and Larch.

Code based: generate tests directly from the code.

Fall 2010 - Software Testing and Quality Assurance 62

Test Generation Strategies (Summary)

Requirements, models, and test generation algorithms.

Generic Testing Process

Activities & Generic Process

Major testing activities:
- test planning and preparation
- execution (testing)
- analysis and follow-up

Linking the above activities gives the generic process: planning, execution, analysis, feedback.
- Exit criteria: internal and external.
- Some (small) process variations exist.

Generic Testing Process

[Figure: the generic testing process. From Entry, Planning & Preparation (goal setting, information gathering, model construction, test cases, test procedures) produces selected measurements & models and test cases & procedures for Execution; Execution yields measurements and defect handling, which feed Analysis & Follow-up; if the set reliability/coverage goals are satisfied the process exits, otherwise feedback & adjustments (analysis/modeling results) flow back to planning.]

Planning and Preparation

Test planning:
- goal setting based on customers' quality perspectives and expectations.
- overall strategy building based on the above and on product/environmental characteristics.

Test preparation:
- preparing test cases/suites.
- preparing test procedures.

Execution

General steps in test execution:
- test time (and resources) allocation
- invoking the tests
- identifying system failures (and gathering information for follow-up actions)

Keys to execution:
- handling both normal and abnormal cases.
- ensuring that failed runs do not block the execution of other test cases.

Analysis and Follow-up

Analysis of testing results:
- result checking (as part of execution)
- further result analyses: defect/reliability/etc. analyses.

Follow-up activities:
- feedback based on analysis results
- immediate: defect removal (and re-test)
- other follow-up (longer term): decision making (exit testing, etc.), test process improvement, etc.

Fall 2010 - Software Testing and Quality Assurance 69

Types of Software Testing

Functional vs. Structural Testing

Functional testing: tests external functions,
- as described by external specifications;
- black-box in nature;
- functional mapping: input => output, without involving internal knowledge.

Structural testing: tests internal implementations,
- components and structures;
- white-box in nature; "white" here means seeing through: internal elements are visible;
- also called clear-box or glass-box testing.

White-box Testing (WBT)

Program component/structure knowledge (or implementation details):
- statement/component checklist
- path (control flow) testing
- data (flow) dependency testing

Applicability:
- test in the small/early
- developer testing: dual role of the developer (programmer as well as tester)

Criterion for stopping:
- mostly coverage goals; occasionally quality/reliability goals.
- the simplest form is statement coverage (see the sketch below).
- testing using debuggers to check expected behavior.
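A rough Python sketch of measuring statement coverage for a small function with sys.settrace; max_of_two, the hand-listed set of executable lines, and the two test inputs are illustrative assumptions:

import sys

def max_of_two(x, y):          # program under test (illustrative)
    if x < y:
        z = y
    else:
        z = x
    return z

# Offsets, relative to the "def" line, of the executable statements in
# max_of_two, listed by hand for this sketch: the if, both assignments, the return.
EXECUTABLE_LINES = {1, 2, 4, 5}

def run_with_coverage(func, *args):
    # Execute func and return (result, set of statement offsets executed inside it).
    first = func.__code__.co_firstlineno
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno - first)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

_, run1 = run_with_coverage(max_of_two, 3, 15)    # exercises the then-branch only
_, run2 = run_with_coverage(max_of_two, 15, 3)    # exercises the else-branch only
print(len(run1 & EXECUTABLE_LINES) / len(EXECUTABLE_LINES))            # 0.75
print(len((run1 | run2) & EXECUTABLE_LINES) / len(EXECUTABLE_LINES))   # 1.0

The first test exercises only the then-branch (75% statement coverage); adding the second test covers the else-branch as well, satisfying the simplest coverage-based stopping criterion.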

Note: There are two main approaches, tester testing (testing done by a separate group of testers) and developer testing (testing done by the programmers themselves). Developer testing is very important for most agile processes.

Black-box Testing

Input/output behavior:
- specification checklists: testing expected/specified behavior
- finite-state machines (FSMs): a "white-box" technique applied to the specification
- functional execution path testing

Applicability:
- late in testing, e.g., system testing
- compatible with the OO/reuse paradigm

Criteria (when to stop):
- traditional: functional coverage
- usage-based: reliability target

Black-box Testing (Applying Generic Testing Process)

Planning and preparation activities:
- identify external functions to test (associated with user expectations)
- derive input conditions and expected output to form test cases
- define testing goals: quality levels (quality-based goals), or completion of planned test cases (activity-based goals)

Execution activities: observe the external behavior to ensure orderly execution of all the test cases, and record execution information for analysis and follow-up activities.

Analysis and follow-up activities: compare the observed behavior and output with the expected ones (the testing oracle problem); follow-up activities include recreating failure scenarios, diagnosing problems, locating failure causes and identifying specific faults in software design and code, and fixing them.

BBT vs. WBT

Aspect: WBT vs. BBT

- Perspective: WBT: internal implementation details. BBT: external functional behavior.
- Objects of testing: WBT: small software systems; small units of large software systems. BBT: large software systems; substantial parts of large systems.
- Timeline: WBT: early sub-phases (unit or component testing). BBT: late sub-phases (e.g., system or acceptance testing).
- Defect focus: WBT: reducing internal faults. BBT: reducing functional problems seen by target customers.
- Defect detection and fixing: WBT: effective for localized problems (e.g., design problems). BBT: effective for interface and interaction problems.
- Testers: WBT: developers themselves. BBT: typically a dedicated team of testers; third parties.

Black-box and white-box testing are the two most fundamental test techniques that form the foundation of software testing, i.e., all test generation and assessment techniques (TGAT) fall into one of these two main categories.

Fall 2010 - Software Testing and Quality Assurance 75

Classification of Testing Techniques

Classification of dynamic testing techniques can be based on the following five classifiers:

- C1: Source of test generation
- C2: Lifecycle phase in which testing takes place
- C3: Goal of a specific testing activity
- C4: Characteristics of the artifact under test
- C5: Test process

Each classifier defines a mapping from a set of features to a set of testing techniques.

Fall 2010 - Software Testing and Quality Assurance 76

C1: Source of Test Generation

- Each mapping is not necessarily one-to-one.
- The techniques in C1 are generic ones.
- It is almost impossible to use WBT in isolation.

Fall 2010 - Software Testing and Quality Assurance 77

C2: Lifecycle phase in which testing takes place

Each artifact/work product produced throughout SDLC is subject to testing at different levels of rigor and using different testing techniques.

Tests designed during one test phase are not likely to be used during other test phases.

All B-B and W-B testing techniques are applicable during each life cycle phase.

Fall 2010 - Software Testing and Quality Assurance 78

C3: Goal of specific testing activity

Goal-oriented testing looks for specific types of failures.

Robustness testing checks an application for robustness against invalid inputs.

Stress testing checks for the behavior of application under stress.

Performance testing refers to that phase of testing where application is tested against performance requirements.

Load testing determines if the application continues to perform as required under various load conditions.

Fall 2010 - Software Testing and Quality Assurance 79

C3: Goal of specific testing activity

Fall 2010 - Software Testing and Quality Assurance 80

C4: Artifact under test

Specialized B-B and W-B test techniques are available for specialized software, as well as for software constructed in a particular programming paradigm.

Fall 2010 - Software Testing and Quality Assurance 81

C5: Test Process Models

- Test process models integrate software testing into the software development life cycle.
- A focus on the test process is an important aspect of the management of any software development process.

Test generation and adequacy techniques presented in the course are applicable to all test process models.

Fall 2010 - Software Testing and Quality Assurance 82

C5: Testing in Waterfall Model

Defects introduced in the early phases and discovered in the later phases could be costly to correct.

Fall 2010 - Software Testing and Quality Assurance 83

C5: Testing in V-Model

V-model explicitly specifies testing activities associated with each phase of development cycle.

Testing activities are carried out in parallel with the development activities.

Fall 2010 - Software Testing and Quality Assurance 84

C5: Spiral Testing

In spiral testing, testing activities are represented as a spiral of activities (like in spiral process model).

Spiral testing refers to a test strategy that can be applied to any incremental software development process.

Sophistication of test activities increases with the stages of an evolving prototype.

Fall 2010 - Software Testing and Quality Assurance 85

C5: Agile Testing

Agile testing promotes the following ideas:
(a) include testing-related activities throughout a development project, starting from the requirements phase;
(b) work collaboratively with the customer, who specifies requirements in terms of tests;
(c) testers and developers must collaborate with each other rather than serve as adversaries; and
(d) test often and in small chunks.

It is a test process that is rarely well defined.

Fall 2010 - Software Testing and Quality Assurance 86

Example: Testing a Web Service

A Web service that converts a given value of temperature from the Fahrenheit scale to the Celsius scale (a black-box test sketch for this service follows below).

- Web-service testing: C4 classifier
- Ad hoc method in black-box testing: C1 classifier
- Unit testing on the Web service: C2 classifier
- GUI testing: C3 classifier

It should be obvious that simply using one classifier to describe a test technique might not provide sufficient information regarding details of the testing performed.
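A black-box sketch of testing such a service in Python: the conversion formula C = (F - 32) * 5/9 comes from the specification of the Fahrenheit and Celsius scales, while fahrenheit_to_celsius is a local stand-in for the actual web service call (an assumption; a real test would invoke the deployed service):

def fahrenheit_to_celsius(f: float) -> float:
    # Local stand-in for the web service under test.
    return (f - 32.0) * 5.0 / 9.0

# Black-box test cases: (input, expected output) derived purely from the specification.
test_cases = [
    (32.0, 0.0),      # freezing point of water
    (212.0, 100.0),   # boiling point of water
    (-40.0, -40.0),   # the two scales coincide at -40
]

for f, expected in test_cases:
    actual = fahrenheit_to_celsius(f)
    verdict = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
    print(f"{verdict}: {f}F -> {actual}C (expected {expected}C)")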

Fall 2010 - Software Testing and Quality Assurance 87

Software Testing Documentation

(IEEE 829 Standard)

Fall 2010 - Software Testing and Quality Assurance 88

IEEE 829 Standard

- A standard for a "high-level test plan" or "master test plan".
- IEEE 829 is not about how to test, but about how to document testing activities/tasks.
- The standard describes eight documents as part of the testing effort.
- Different documents are generated or used in different phases of the test process.
- Some documents provide guidelines to better understand and perform testing tasks.
- It is generic, so as to cover all types of testing.
- It allows the documents to be tailored to the testing mission.

Fall 2010 - Software Testing and Quality Assurance 89

Software Test Documentation (IEEE 829)

Reference: Lee Copeland, A Practitioner's Guide to Software Test Design.

[Figure: the IEEE 829 documents and their relationships. Requirements and test generation techniques feed the Test Plan and Test Design Spec.; these lead to the Test Case Spec. and Test Procedure; the Test Item Transmittal Report accompanies items delivered for testing; execution is recorded in the Test Log and Test Incident Report, which feed the Test Summary Report.]

Fall 2010 - Software Testing and Quality Assurance 90

IEEE 829: Documentation for Planning and Preparation

Test Plan: A high-level view of how testing will proceed: WHAT is to be tested, by WHOM, HOW, in what TIME frame, and to what QUALITY level.
- All software testing projects revolve around this document.
- It can be used at the system test level or at lower levels.

Test Design Specification: Records which features of a test item are to be tested, and how a successful test of these features would be recognized.
- Features (testing requirements) are derived from the requirements and design documents.
- The design specification does not record the values to be entered for a test, but describes only the requirements for defining those values.

Fall 2010 - Software Testing and Quality Assurance 91

IEEE 829: Documentation for Planning and Preparation

Test Case Specification: Lists inputs, expected outputs, features to be tested by this test case, and any other special requirements, e.g., the setting of environment variables and test procedures.
- A feature from the test design spec may be tested in more than one test case, and a test case may test more than one feature.
- The aim is for a set of test cases to test each feature from the test design at least once.
- Dependencies on other test cases are specified here.
- Each test case has a unique ID for reference in other documents.

Fall 2010 - Software Testing and Quality Assurance 92

IEEE 829: Documentation for Planning and Preparation

Test Procedure Specification: Describes how the tester will physically run the test, including set-up procedures.

Test Item Transmittal Report: Describes the items being delivered for testing, where to find them, and what is new about them, and gives approval for their release.
- A handover document from the previous stage of development that provides testers a warranty that the items are 'fit for test', e.g., a completion-of-system-testing report for user acceptance testing.

Fall 2010 - Software Testing and Quality Assurance 93

IEEE 829: Documentation for Execution

Test Log: Details of what tests were run, by whom, and whether individual tests passed or failed.
- The progress of the testing can be checked.
- Helps in finding the cause of a failed test, e.g., by analyzing the sequence of tests.

Test Incident Report: Details of incidents where a test 'failed' for a specific reason.
- Details include the actual and expected results, when the test failed, any supporting evidence that will help in resolution, etc.
- Useful for further investigation.

Fall 2010 - Software Testing and Quality Assurance 94

IEEE 829: Documentation for Analysis and Follow-up

Test Summary: Brings together all pertinent information about the testing, including:
- an assessment of how well the testing has been done, and the number of incidents raised and outstanding;
- an assessment, for deciding whether the quality of the system is good enough to proceed to another stage, based upon detailed information that was documented in the test plan;
- details of what was done and how long it took, recorded for use in future project planning.

Fall 2010 - Software Testing and Quality Assurance 95

Summary

We have dealt with some of the most basic concepts in software testing. The following exercises at the end of this chapter will help you sharpen your understanding:

1.1; 1.2; 1.3; 1.5; 1.6; 1.7; 1.8; 1.9; 1.19.

Fall 2010 - Software Testing and Quality Assurance 96

Suggested Reading/Assignment

Ariane 5: Flight 501 Failure

Test Automation Tools: 1.1.2