Unit 7: Testing the Programs and Testing the System (Chapters 8 and 9)

Post on 20-Jan-2016


Unit 7

Chapter 8: Testing the Programs

Unit 7 Requirements

• Read Chapters 8 and 9
• Respond to the Unit 7 Discussion Board (25 points)
• Attend seminar / take the seminar quiz (20 points)
• Complete your assignment (50 points)
• Complete your learning journal (12 points)

Chapter 8 Objectives

• Types of faults and how to classify them
• The purpose of testing
• Unit testing
• Integration testing strategies
• Test planning
• When to stop testing

8.1 Software Faults and Failures: Why Does Software Fail?

• Wrong requirement: not what the customer wants
• Missing requirement
• Requirement impossible to implement
• Faulty design
• Faulty code
• Improperly implemented design

8.1 Software Faults and Failures: Objective of Testing

• Objective of testing: discover faults
• A test is successful only when a fault is discovered
◦ Fault identification is the process of determining what fault caused the failure
◦ Fault correction is the process of making changes to the system so that the faults are removed

8.1 Software Faults and Failures: Types of Faults

• Algorithmic faults
• Computation and precision faults
◦ a formula’s implementation is wrong
• Documentation faults
◦ documentation doesn’t match what the program does
• Capacity or boundary faults
◦ system’s performance not acceptable when certain limits are reached
• Timing or coordination faults
• Performance faults
◦ system does not perform at the speed prescribed
• Standard and procedure faults

8.1 Software Faults and Failures: Typical Algorithmic Faults

An algorithmic fault occurs when a component’s algorithm or logic does not produce the proper output:
◦ Branching too soon
◦ Branching too late
◦ Testing for the wrong condition
◦ Forgetting to initialize variables or set loop invariants
◦ Forgetting to test for a particular condition
◦ Comparing variables of inappropriate data types
◦ Syntax faults
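As a minimal sketch of one fault from the list above, consider "testing for the wrong condition" at a boundary. The grading rule and function names here are invented purely for illustration:

```python
def classify_grade_buggy(score):
    # Algorithmic fault: tests for the wrong condition (> instead of >=),
    # so the boundary score of 90 is misclassified.
    if score > 90:
        return "A"
    return "B"

def classify_grade_fixed(score):
    # Corrected version, assuming the (hypothetical) spec says
    # a score of 90 or above earns an "A".
    if score >= 90:
        return "A"
    return "B"
```

A test case at the boundary value (90) exposes the fault, while any case away from the boundary lets it slip through, which is why boundary values are favored when selecting test cases.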

8.2 Testing Issues: Testing Organization

• Module testing, component testing, or unit testing
• Integration testing
• Function testing
• Performance testing
• Acceptance testing
• Installation testing

8.2 Testing Issues: Attitude Toward Testing

Egoless programming: programs are viewed as components of a larger system, not as the property of those who wrote them

8.2 Testing Issues: Who Performs the Test?

Independent test team
◦ avoids conflict
◦ improves objectivity
◦ allows testing and coding to proceed concurrently

8.2 Testing Issues: Views of the Test Objects

• Closed box or black box: functionality of the test objects
• Clear box or white box: structure of the test objects

8.2 Testing Issues: Black Box

Advantage
◦ free of the internal structure’s constraints

Disadvantage
◦ not possible to run a complete test

8.2 Testing Issues: Sidebar 8.2 Box Structures

• Black box: external behavior description
• State box: black box with state information
• White box: state box with a procedure

8.2 Testing Issues: Factors Affecting the Choice of Test Philosophy

• The number of possible logical paths
• The nature of the input data
• The amount of computation involved
• The complexity of algorithms

8.3 Unit Testing: Code Review

• Code walkthrough
• Code inspection

8.3 Unit Testing: Sidebar 8.3 The Best Team Size for Inspections

The preparation rate, not the team size, determines inspection effectiveness

The team’s effectiveness and efficiency depend on their familiarity with their product

8.3 Unit Testing: Testing versus Proving

• Proving: hypothetical environment
• Testing: actual operating environment

8.3 Unit Testing: Steps in Choosing Test Cases

• Determining test objectives
• Selecting test cases
• Defining a test

8.3 Unit Testing: Test Thoroughness

• Statement testing
• Branch testing
• Path testing
• Definition-use testing
• All-uses testing
• All-predicate-uses/some-computational-uses testing
• All-computational-uses/some-predicate-uses testing
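The first two criteria above can be contrasted with a small sketch. The function below is invented for illustration; the point is that one test case can execute every statement yet still leave a branch outcome untried:

```python
def safe_divide(a, b):
    # Returns a/b, or 0 when b is zero.
    result = 0
    if b != 0:
        result = a / b
    return result

# Statement testing: this single case executes every statement,
# because the true branch of the condition is taken.
case_statement = safe_divide(6, 3)

# Branch testing is stricter: it also requires a case where the
# condition is false, exercising the path that skips the assignment.
case_branch = safe_divide(6, 0)
```

Path testing is stricter still: it asks for every distinct path through the code, which grows combinatorially with each added decision.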

8.4 Integration Testing

• Bottom-up
• Top-down
• Big-bang
• Sandwich testing
• Modified top-down
• Modified sandwich

8.4 Integration Testing: Terminology

Component Driver: a routine that calls a particular component and passes a test case to it

Stub: a special-purpose program to simulate the activity of the missing component
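A minimal sketch of both terms, using an invented pricing example (the function names and values are hypothetical, not from the chapter):

```python
# Stub: simulates the activity of a missing lower-level component,
# here a price lookup that would normally query a real data store.
def lookup_price_stub(item_id):
    return 10.0  # canned response instead of a real query

# Component under test, wired to use the stub in place of the
# missing component.
def order_total(item_id, quantity, lookup_price=lookup_price_stub):
    return lookup_price(item_id) * quantity

# Driver: calls the component under test and passes a test case to it.
def driver():
    expected = 30.0
    actual = order_total("A13", 3)
    return "PASS" if actual == expected else "FAIL"
```

In top-down integration the missing lower components are replaced by stubs like this one; in bottom-up integration the completed lower components are exercised by drivers.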

8.5 Testing Object-Oriented Systems: Questions at the Beginning of Testing an OO System

• Is there a path that generates a unique result?
• Is there a way to select a unique result?
• Are there useful cases that are not handled?

8.5 Testing Object-Oriented Systems: Easier and Harder Parts of Testing OO Systems

OO unit testing is less difficult, but integration testing is more extensive

8.6 Test Planning

• Establish test objectives
• Design test cases
• Write test cases
• Test test cases
• Execute tests
• Evaluate test results

8.6 Test Planning: Purpose of the Plan

The test plan explains
◦ who does the testing
◦ why the tests are performed
◦ how tests are conducted
◦ when the tests are scheduled

8.6 Test Planning: Contents of the Plan

• What the test objectives are
• How the tests will be run
• What criteria will be used to determine when the testing is complete

8.7 Automated Testing Tools

• Code analysis
◦ Static analysis: code analyzer, structure checker, data analyzer, sequence checker
◦ Output from static analysis

8.7 Automated Testing Tools (continued)

• Dynamic analysis
◦ program monitors: watch and report a program’s behavior
• Test execution
◦ Capture and replay
◦ Stubs and drivers
◦ Automated testing environments
• Test case generators

8.8 When to Stop Testing: Identifying Fault-Prone Code

• Track the number of faults found in each component during development
• Collect measurements (e.g., size, number of decisions) about each component

• Classification trees: a statistical technique that sorts through large arrays of measurement information and creates a decision tree to show the best predictors
◦ A tree helps in deciding which components are likely to have a large number of errors
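A single root-to-leaf path of such a tree amounts to a rule over the collected measurements. The sketch below is hypothetical: the predictors (size in lines of code, number of decisions) match the slide, but the thresholds are invented for illustration, not learned from real fault data:

```python
def likely_fault_prone(size_loc, num_decisions):
    # One root-to-leaf path of a (hypothetical) classification tree:
    # only large components with many decision points are predicted
    # to be fault-prone.
    if size_loc > 100:
        return num_decisions > 15
    return False
```

A real classification tree would be fitted to the project's own fault history, and its branch thresholds would come from that data rather than from guesses.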

8.11 What This Chapter Means for You

It is important to understand the difference between faults and failures

The goal of testing is to find faults, not to prove correctness

Chapter 9: Testing the System

Chapter 9 Objectives

• Function testing
• Performance testing
• Acceptance testing
• Software reliability, availability, and maintainability
• Installation testing
• Test documentation
• Testing safety-critical systems

9.1 Principles of System Testing: System Testing Process

Function testing: does the integrated system perform as promised by the requirements specification?

Performance testing: are the non-functional requirements met?

Acceptance testing: is the system what the customer expects?

Installation testing: does the system run at the customer site(s)?

9.1 Principles of System Testing: Techniques Used in System Testing

• Build or integration plan
• Regression testing
• Configuration management
◦ versions and releases
◦ production system vs. development system
◦ deltas, separate files, and conditional compilation
◦ change control

9.1 Principles of System Testing: Build or Integration Plan

• Define the subsystems (spins) to be tested
• Describe how, where, when, and by whom the tests will be conducted

9.1 Principles of System Testing: Regression Testing

Identifies new faults that may have been introduced as current ones are being corrected

Verifies a new version or release still performs the same functions in the same manner as an older version or release
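The idea can be sketched as a small suite of recorded cases that is re-run against every new version. The discount routine and its expected values here are invented for illustration:

```python
# Component under maintenance: behavior that must not change across releases.
def discounted_price(price, rate):
    return round(price * (1 - rate), 2)

# Regression suite: input/expected-output pairs recorded from the old
# version, re-run after every fix or release.
REGRESSION_CASES = [
    ((60.0, 0.25), 45.0),
    ((100.0, 0.10), 90.0),
    ((10.0, 0.0), 10.0),
]

def run_regression():
    # True only if the new version still produces the old answers.
    return all(discounted_price(*args) == expected
               for args, expected in REGRESSION_CASES)
```

If a fix elsewhere accidentally alters any recorded answer, `run_regression()` turns false, flagging a fault introduced by the change.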

9.1 Principles of System Testing: Configuration Management

• Versions and releases
• Production system vs. development system
• Deltas, separate files, and conditional compilation
• Change control

9.1 Principles of System Testing: Sidebar 9.3 Microsoft’s Build Control

• The developer checks out a private copy
• The developer modifies the private copy
• A private build with the new or changed features is tested
• The code for the new or changed features is placed in the master version
• A regression test is performed

9.1 Principles of System Testing: Test Team

• Professional testers: organize and run the tests
• Analysts: created the requirements
• System designers: understand the proposed solution
• Configuration management specialists: help control fixes
• Users: evaluate issues that arise

9.2 Function Testing: Purpose and Roles

Compares the system’s actual performance with its requirements

Develops test cases based on the requirements document

9.3 Performance Tests: Types of Performance Tests

• Stress tests
• Volume tests
• Configuration tests
• Compatibility tests
• Regression tests
• Security tests
• Timing tests
• Environmental tests
• Quality tests
• Recovery tests
• Maintenance tests
• Documentation tests
• Human factors (usability) tests

9.4 Reliability, Availability, and Maintainability: Definitions

• Software reliability: operating without failure under given conditions for a given time interval

• Software availability: operating successfully according to specification at a given point in time

• Software maintainability: for a given condition of use, a maintenance activity can be carried out within stated time intervals, procedures, and resources
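These qualitative definitions are commonly quantified with mean time to failure (MTTF) and mean time to repair (MTTR); that quantification is a standard convention assumed here, not something this chapter's slides spell out:

```python
def availability(mttf_hours, mttr_hours):
    # Fraction of time the system is operational: the mean working
    # interval divided by the mean length of a full fail-and-repair cycle.
    return mttf_hours / (mttf_hours + mttr_hours)

# Example: a system that fails on average every 950 hours and takes
# 50 hours to repair is available 95% of the time.
example = availability(950, 50)
```

On this view, reliability grows with MTTF alone (how long the system runs between failures), while availability also rewards fast repair (small MTTR), and maintainability concerns bounding that repair time.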

9.4 Reliability, Availability, and Maintainability: Different Levels of Failure Severity

• Catastrophic: causes death or system loss
• Critical: causes severe injury or major system damage
• Marginal: causes minor injury or minor system damage
• Minor: causes no injury or system damage

9.4 Reliability, Availability, and Maintainability: Sidebar 9.4 Difference Between Hardware and Software Reliability

Complex hardware fails when a component breaks and no longer functions as specified

Software faults can exist in a product for a long time, activated only when certain conditions exist that transform the fault into a failure

9.5 Acceptance Tests: Purpose and Roles

Enable the customers and users to determine if the built system meets their needs and expectations

Written, conducted and evaluated by the customers

9.5 Acceptance Tests: Types of Acceptance Tests

• Pilot test: install on an experimental basis
• Alpha test: in-house test
• Beta test: customer pilot
• Parallel testing: new system operates in parallel with the old system

9.6 Installation Testing

• Before the testing
◦ Configure the system
◦ Attach the proper number and kind of devices
◦ Establish communication with other systems

• The testing
◦ Regression tests: verify that the system has been installed properly and works

9.7 Automated System Testing: Simulator

• Presents to a system all the characteristics of a device or system without actually having the device or system available

• Looks like the other systems with which the test system must interface

• Provides the necessary information for testing without duplicating the entire other system

9.8 Test Documentation: Test Plan

The plan begins by stating its objectives, which should
◦ guide the management of testing
◦ guide the technical effort required during testing
◦ establish test planning and scheduling
◦ explain the nature and extent of each test
◦ explain how the tests will completely evaluate system function and performance
◦ document test input, specific test procedures, and expected outcomes

9.8 Test Documentation: Sidebar 9.8 Measuring Test Effectiveness and Efficiency

Test effectiveness can be measured by dividing the number of faults found in a given test by the total number of faults found

Test efficiency is computed by dividing the number of faults found in testing by the effort needed to perform testing

9.8 Test Documentation: Test Description

Includes
◦ the means of control
◦ the data
◦ the procedures

9.8 Test Documentation: Test Analysis Report

• Documents the results of the tests

• Provides information needed to duplicate the failure and to locate and fix the source of the problem

• Provides information necessary to determine if the project is complete

• Establishes confidence in the system’s performance

9.8 Test Documentation: Problem Report Forms

• Location: Where did the problem occur?
• Timing: When did it occur?
• Symptom: What was observed?
• End result: What were the consequences?
• Mechanism: How did it occur?
• Cause: Why did it occur?
• Severity: How much was the user or business affected?
• Cost: How much did it cost?
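The fields above map naturally onto a structured record, so reports can be collected and queried consistently. This `ProblemReport` type and the sample values are hypothetical, a sketch of how such a form might be represented:

```python
from dataclasses import dataclass

@dataclass
class ProblemReport:
    location: str    # Where did the problem occur?
    timing: str      # When did it occur?
    symptom: str     # What was observed?
    end_result: str  # What were the consequences?
    mechanism: str   # How did it occur?
    cause: str       # Why did it occur?
    severity: str    # How much was the user or business affected?
    cost: str        # How much did it cost?

# A filled-in (invented) example report.
report = ProblemReport(
    location="billing module",
    timing="during nightly batch run",
    symptom="monthly statement truncated",
    end_result="three invoices not sent",
    mechanism="output buffer reused before flush",
    cause="missing flush call after write",
    severity="major",
    cost="unknown",
)
```

Keeping the fields uniform across reports is what lets the measurements in Section 8.8 (fault counts per component, severity distributions) be extracted later.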

9.9 Testing Safety-Critical Systems

• Design diversity: use different kinds of designs and designers

• Software safety cases: make explicit the ways the software addresses possible problems
◦ failure modes and effects analysis
◦ hazard and operability studies (HAZOPS)

• Cleanroom: certifying software with respect to the specification

9.9 Testing Safety-Critical Systems: Sidebar 9.9 Software Quality Practices at Baltimore Gas and Electric

To ensure high reliability
◦ checking the requirements definition thoroughly
◦ performing quality reviews
◦ testing carefully
◦ documenting completely
◦ performing thorough configuration control

9.9 Testing Safety-Critical Systems: Sidebar 9.10 Suggestions for Building Safety-Critical Software

• Recognize that testing cannot remove all faults or risks
• Do not confuse safety, reliability, and security
• Tightly link the organization’s software and safety organizations
• Build and use a safety information system
• Instill a management culture of safety
• Assume that every mistake users can make will be made
• Do not assume that low-probability, high-impact events will not happen
• Emphasize requirements definition, testing, code and specification reviews, and configuration control
• Do not let short-term considerations overshadow long-term risks and costs

9.10 Information Systems Example: The Piccadilly System

Many variables, so many different test cases to consider
◦ An automated testing tool may be useful

9.10 Information Systems Example: Things to Consider in Selecting a Test Tool

• Capability
• Reliability
• Capacity
• Learnability
• Operability
• Performance
• Compatibility
• Nonintrusiveness

9.10 Information Systems Example: Sidebar 9.13 Why Six-Sigma Efforts Do Not Apply to Software

A six-sigma quality constraint says that in a million parts, we can expect only 3.4 to be outside the acceptable range

It does not apply to software because
◦ People are variable, so the software process inherently contains a large degree of uncontrollable variation
◦ Software either conforms or it does not; there are no degrees of conformance
◦ Software is not the result of a mass-production process

9.12 What This Chapter Means for You

• Anticipate testing from the very beginning of the system life cycle

• Think about system functions during requirements analysis

• Use fault-tree analysis and failure modes and effects analysis during design

• Build the safety case during design and code reviews

• Consider all possible test cases during testing
