
CSE 7314 Software Testing and Reliability

Robert Oshana

Trip 3 class notes

[email protected]

Agenda

• Administrative
• Famous software failures
• Overview of testing techniques
• Black box testing techniques
  – Black box science
  – Black box art
• White box testing techniques
  – Static white box techniques
  – Dynamic white box techniques

Infamous software failures

• Disney’s “Lion King”
• Disney’s 1st multimedia CD-ROM
• Just in time for Christmas – sales expectations were very high
• Customer service phones began to ring
• Failure to properly test the SW on many different PC models
  – Worked well on the models that Disney programmers used to create the game !!

“Software Testing” by Ron Patton

Intel bug

• Try this on your PC:

• (4195835 / 3145727) * 3145727 – 4195835

• If answer is 0, your computer is fine

• Anything else => you have the Pentium bug !!

“Software Testing” by Ron Patton
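The check above can be run directly; a minimal Python sketch (the function name is illustrative) that evaluates the slide’s expression:

```python
def fdiv_check() -> float:
    """Evaluate the slide's expression: a correct FPU yields 0.0;
    the flawed Pentium FDIV unit famously did not."""
    return (4195835 / 3145727) * 3145727 - 4195835

print(fdiv_check())  # 0.0 on a correct FPU
```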

NASA Mars Polar Lander 1999

• Lander disappeared Dec 3rd 1999
• Failure to set a single data bit
• NASA tried to save $ by replacing radar with a contact switch
• During testing, vibration caused the switch to trip (as if it had landed)
• Lander tested by multiple teams
  – Leg fold-down procedure (never looked to see if the touchdown bit was set)
  – Landing process (always reset the computer, clearing the bit, before starting testing)

“Software Testing” by Ron Patton

What goes into a SW product?

“Software Testing” by Ron Patton

What parts make up a SW product?

“Software Testing” by Ron Patton

Optimal test effort

“Software Testing” by Ron Patton

Precision vs accuracy

• Testing a simulation game like a flight simulator…
  – Should you test precision or accuracy?

• How about a calculator?

• Depends on what the product is and what the development team is aiming at !!

Neither accurate nor precise

“Software Testing” by Ron Patton

Precise, but not accurate

“Software Testing” by Ron Patton

Accurate, but not precise

“Software Testing” by Ron Patton

Accurate and precise

“Software Testing” by Ron Patton

Try this

• Start the Windows Calculator program

• Type 5,000-5 (comma is important)

• Look at result

• Is it a bug or not??

Quiz

• Q: Given that it’s impossible to test a program completely, what information should be considered when deciding whether it’s time to stop testing?

• A: There is no single correct answer for when to stop testing; each project is different. Things to consider: Are lots of bugs still being found? Is the team satisfied with the number and types of tests that have been run? Has the product been validated against the user’s requirements?

Quiz

• Q: If you were testing a flight simulator, what would be more important, accuracy or precision?

• A: The simulator should look and feel like flying a real plane. What matters most is how accurately the simulator reflects reality; precision can follow. This is exactly what has happened with simulation games over the years.

Quiz

• Q: Is it possible to have a high-quality but low-reliability product? What might an example be?

• A: Yes, but it depends on the customer’s expectations for quality. An example is a high-performance sports car that has high quality (fast acceleration, style, etc) but is notoriously unreliable (often breaking down and expensive to repair).

Quiz

• Q: Why is it impossible to test a program completely?

• A: There are too many inputs, too many outputs, and too many path combinations to fully test. Also, software specs can be subjective and be interpreted in different ways (bug is in the eye of the beholder!!)

Quiz

• Q: If you were testing a feature of your software on Monday and finding a new bug every hour, at what rate would you expect to find bugs on Tuesday?

• A: Two axioms apply. First, the number of bugs remaining is proportional to the number of bugs already found (you won’t come in Tuesday and find the SW perfect!). Second, the pesticide paradox: continuing to run the same tests over and over means you won’t find more bugs until you start adding new tests!! Result: you’ll continue to find bugs at the same rate or slightly less.

Part 2

Black box vs White box

BB and WB testing

• Either white box or black box testing alone improves quality by roughly 40%; together they improve quality by roughly 60%

Black box science techniques

• Equivalence partitioning

• Data testing

• Orthogonal arrays

• Decision tables

• Other techniques

• Quiz

• Black box art

Test to pass and test to fail

• Test-to-pass: assume the SW minimally works
  – Don’t push capabilities
  – Don’t try to break it
  – Simple and straightforward test cases

• Would you drive a brand new car model at top speed right off the line?

• Run these first

Test to pass vs Test to Fail

“Software Testing” by Ron Patton

Equivalence partitioning

• A group of tests forms an equivalence class if you believe that:
  – They all test the same thing
  – If one catches a bug, the others probably will too
  – If one doesn’t catch a bug, the others probably won’t either

Example

• How would you test the “Save as” function?

• Valid characters except \ / : * ? “ < > and |

• 1 to 255 characters

Example

• Equivalence partitions
  – Valid characters
  – Invalid characters
  – Valid length names
  – Names that are too short
  – Names that are too long
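As a sketch of how those partitions translate into tests, the validator below is hypothetical, written only from the two rules on the previous slide (1 to 255 characters, excluding the listed special characters), with one representative test per partition:

```python
# Hypothetical "Save as" filename rules from the slide:
# 1 to 255 characters, excluding \ / : * ? " < > |
INVALID_CHARS = set('\\/:*?"<>|')

def is_valid_filename(name: str) -> bool:
    if not 1 <= len(name) <= 255:
        return False
    return not any(c in INVALID_CHARS for c in name)

# One representative test case per equivalence partition
assert is_valid_filename("report.txt")    # valid chars, valid length
assert not is_valid_filename("a:b")       # invalid character
assert not is_valid_filename("")          # name too short
assert not is_valid_filename("x" * 256)   # name too long
```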

Data testing

Data testing

• Simple view of software is to divide it into two domains
  – Data (or its domain)
  – Program
• Examples of data
  – Keyboard input
  – Mouse clicks
  – Disk files
  – Printouts

Divide testing up along the same lines!!

Data testing

• The amount of data can be overwhelming even for a simple program

• Must intelligently reduce test cases by equivalence partitioning
  – Boundary conditions
  – Sub-boundary conditions
  – Nulls
  – Bad data

Boundary conditions

Boundary value analysis

• Boundaries are often prone to failure
• Does it make sense to also test in the middle?
• Procedure
  – Test exact boundaries
  – Value immediately above upper boundary
  – Value immediately below lower boundary

Simple program with bug

Rem Create a 10 element integer array
Rem Initialize each element to -1
Dim data(10) As Integer
Dim i As Integer
For i = 1 To 10
  data(i) = -1
Next i
End

“Software Testing” by Ron Patton

Simple program with bug

data(0) = 0
data(1) = -1
data(2) = -1
data(3) = -1
data(4) = -1
data(5) = -1
data(6) = -1
data(7) = -1
data(8) = -1
data(9) = -1
data(10) = -1

“Software Testing” by Ron Patton

Types of boundary conditions

• Numeric

• Character

• Position

• Quantity

• Speed

• Location

• Size

Characteristics of those types

• First/last
• Start/finish
• Empty/full
• Slowest/fastest
• Largest/smallest
• Next to/farthest from
• Min/max
• Over/under
• Shortest/longest
• Soonest/latest
• Highest/lowest

Testing the boundary conditions

• First-1/Last+1
• Start-1/Finish+1
• Less than empty/More than full
• Even slower/Even faster
• Largest+1/Smallest-1
• Min-1/Max+1
• Just over/Just under
• Even shorter/Longer
• Even sooner/later
• Highest+1/Lowest-1

Example

• Test an entry field that allows 1 to 255 characters

• Try entering 1 character

• Enter 255 characters

• Try 254

• Enter 0

• Enter 256
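The six probes above can be scripted; `accepts` below is a hypothetical stand-in for the real entry field, checking the boundaries, the values just inside them, and the values just outside them:

```python
# Hypothetical field that accepts 1 to 255 characters
def accepts(text: str) -> bool:
    return 1 <= len(text) <= 255

# Boundary values plus one step outside each boundary
for length, expected in [(0, False), (1, True), (2, True),
                         (254, True), (255, True), (256, False)]:
    assert accepts("x" * length) == expected
```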

Program that reads and writes a floppy

• Try saving a file with one entry

• Try saving a file at the limits of what a floppy disk holds

• Try saving an empty file

• Try saving a file that is too big for the floppy

Program to print multiple pages onto a single page

• Try printing one page (the standard case)

• Try printing the most pages allowed

• Try printing zero pages

• Try printing more than it allows

Flight simulator

• Try flying at ground level

• Try flying at maximum height

• Try flying below ground level

• Below sea level

• Outer space

Sub-boundary conditions

• Some boundary conditions are internal to the software and not user apparent
• Powers of two
  – Bit (0-1), nibble (0-15), byte (0-255), word, kilo, mega, giga, etc
• ASCII table
  – Not nice and contiguous
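A quick check of that non-contiguity: the digit, uppercase, and lowercase ranges sit apart in the ASCII table, so the characters just past each range are sub-boundary values worth testing.

```python
# The three contiguous runs in the ASCII table...
assert (ord("0"), ord("9")) == (48, 57)
assert (ord("A"), ord("Z")) == (65, 90)
assert (ord("a"), ord("z")) == (97, 122)

# ...and the gaps right after them: '9'+1 is ':', 'Z'+1 is '['
assert chr(ord("9") + 1) == ":"
assert chr(ord("Z") + 1) == "["
```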

Default, empty, blank, null, zero, and none

• Rather than typing wrong information, for example, no data is entered at all (just press Return)
• Forgotten in the spec and overlooked by the programmer
• Happens in real life!
• SW should default to the lowest value or some reasonable value

Default, empty, blank, null, zero, and none

• Always have an equivalence partition for these values

Invalid, wrong, incorrect, and garbage data

• Test-to-fail
• See if the SW can handle whatever a user can do
• Reasonable to think that some percentage of people will handle the SW incorrectly
• If any data is lost, for whatever reason, the user will blame the SW (bug!)

Invalid, wrong, incorrect, and garbage data

• If SW wants numbers, give it letters

• If SW expects positive numbers, give it negative numbers

• Fat fingers

• How many windows can you have open

• No real rules, be creative, have fun!

State testing

• A software state is a condition or mode that the SW is in

• The other side of testing looks at the program’s logic flow
  – States
  – Transitions between them
• The same type of complexity requires partitioning this space as well

Create a state transition map

“Software Testing” by Ron Patton

State diagram parts

• Each unique state that the SW can be in

• Input or condition that takes it from one state to the next

• Conditions set and output produced when a state is entered or exited
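Those parts can be captured in a tiny transition map a tester can walk; the states and inputs below are hypothetical, chosen to echo the later “dirty document flag” example:

```python
# Hypothetical state-transition map: (state, input) -> next state
TRANSITIONS = {
    ("clean", "edit"): "dirty",
    ("dirty", "save"): "clean",
    ("dirty", "edit"): "dirty",
}

def step(state: str, event: str) -> str:
    # Undefined (state, input) pairs are test-to-fail opportunities
    return TRANSITIONS.get((state, event), "error")

assert step("clean", "edit") == "dirty"   # common transition
assert step("dirty", "save") == "clean"   # common transition
assert step("clean", "save") == "error"   # unmapped pair -> error state
```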

Reducing the number of states

• Covering all paths is a traveling salesman problem. Instead:
• Visit each state once
• Test common state-to-state transitions
• Test least common state transitions
• Test error states and returning from error states

What to specifically test

• Check all state variables
  – Static conditions, values, and functionality associated with being in that state or moving to and from the state
• May be something visible like a window or dialog box
• May be invisible, such as part of a communication program or financial package

Example – “dirty document flag”

Testing state to fail

• Race conditions and bad timing
  – Saving and loading the same document at the same time with two different programs
  – Sharing the same printer, com port, or peripheral
  – Using different programs to simultaneously access a common DB

Testing state to fail

• Repetition, stress, and load (not handling the worst case scenario)
  – Repetition: same operation over and over (looking for memory leaks)
  – Stress: running SW under less than ideal situations (low memory, low disk space, slow CPUs, etc)
• Analyze SW for external resource needs
• Test by limiting these to the bare minimum
• Goal is to starve the SW (one type of boundary)

Testing state to fail

– Load testing: opposite of stress testing
– Instead of starving, you force-feed (more than it can handle)
– Largest possible data files
– Connect as many printers/peripherals as possible
– Time (running over long periods) is another form of load testing

Issues

• Others may not be receptive to efforts to break the SW this way
  – “no customer will use it this way…”

• Test automation is required to make some of this practical

CSE 7314 Software Testing and Reliability

Robert Oshana

Orthogonal Arrays

[email protected]

Topic for this lecture

• Orthogonal array testing

• Reference: “Orthogonal Array Test Strategy (OATS) Technique” by Jeremy M. Harrell

Orthogonal array testing

• A systematic and statistical approach for testing pair-wise interactions

• Provides representative coverage of variable combinations

• Good for integration testing of OO components

• Testing combinations of objects

Orthogonal array testing

• Taguchi methods use these for experimental design

• Two-dimensional arrays of numbers
  – Choosing any two columns provides an even distribution of all pairwise combinations of values
• Used in many applications for planning experiments
  – Rows represent the experiments to be run

Orthogonal array testing

• Runs: the number of rows, which translates to test cases
  – Since the rows represent an experiment (test) to be run, the goal is to minimize the number of rows as much as possible
• Factors: the number of columns, which is the number of variables
• Levels: the maximum number of values for a factor (0 to levels-1)
• Named as LRuns(Levels^Factors)

Fault model

• Interactions are a major source of defects

• Most defects arise from pairwise interactions (not more complex)

• Easy to miss one of these with so many combinations

• Random selection is not effective

What does OATS do ?

• Guarantees testing pair wise combinations of all selected variables

• Creates an efficient test suite with fewer tests than all combos of all variables

• Even distribution of all variables• Simple to generate, less error prone

The L9(3^4) array – 9 runs (rows) by 4 factors (columns):

Run 1: 0 0 0 0
Run 2: 0 1 1 2
Run 3: 0 2 2 1
Run 4: 1 0 1 1
Run 5: 1 1 2 0
Run 6: 1 2 0 2
Run 7: 2 0 2 2
Run 8: 2 1 0 1
Run 9: 2 2 1 0

Example

• Consider a system that has 4 options and each option has 3 possible values

• Exhaustive approach; 3 x 3 x 3 x 3 = 81 test cases

• OATS approach uses 9 test cases where each test tests a pair wise interaction

• 9/81 = 11%
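The 9-case claim can be checked mechanically: in the L9(3^4) array shown earlier, any two columns taken together contain every one of the nine level pairs exactly once. A short verification sketch:

```python
from itertools import combinations, product

# The L9(3^4) array from the slides: 9 runs, 4 factors, 3 levels
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 2), (0, 2, 2, 1),
    (1, 0, 1, 1), (1, 1, 2, 0), (1, 2, 0, 2),
    (2, 0, 2, 2), (2, 1, 0, 1), (2, 2, 1, 0),
]

# Every pair of columns covers all 3x3 level pairs exactly once
for c1, c2 in combinations(range(4), 2):
    pairs = [(row[c1], row[c2]) for row in L9]
    assert sorted(pairs) == sorted(product(range(3), repeat=2))
```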

Technique

• First, determine how many independent variables you need to test for interaction (factors)

• Second, determine how many values (levels) each of these variables will have

• Next, find an orthogonal array that maps to this requirement
  – At least as many factors as in step 1
  – At least as many levels as in step 2

Technique

• Next, map the factors and levels on to the array

• Choose values for the “left over” levels

• Create test cases for each run

Example

• Web page
  – Three sections
  – Hidden or visible
• Three independent variables (the sections of the page)
• Two levels (hidden, visible)
• L4(2^3) array will work here
  – Two levels
  – Three factors
  – The number of runs is not a concern

Example

• Map values to the array
  – 0 = hidden
  – 1 = visible

• No left-over values (a value is mapped to every level in the array)

• Create test cases

Example

Orthogonal array before mapping factors

Factor 1 Factor 2 Factor 3

Run 1 0 0 0

Run 2 0 1 1

Run 3 1 0 1

Run 4 1 1 0

Example

Orthogonal array after mapping factors

Top Middle Bottom

Test 1 Hidden Hidden Hidden

Test 2 Hidden Visible Visible

Test 3 Visible Hidden Visible

Test 4 Visible Visible Hidden

Example

• Test cases
  – Display home page and hide all sections
  – Display home page and show all but the Top section
  – Display home page and show all but the Middle section
  – Display home page and show all but the Bottom section

Another example

[Diagram: three clients (C1, C2, C3) each send three message types (M1, M2, M3) to three servers (S1, S2, S3) via a call such as foo(M1)]

Another example

• To test all combinations
  – 3 x 3 x 3 = 27 (three clients sending three messages to three servers)
• Assumes foo( ) can be tested with a single test case (not realistic)

Another example

• Three independent variables– Client– Server– Message class

• Each variable has three values– Message 1– Message 2– Message 3

Another example

• The ideal array is L?(3^3)

• No published array exists

• Look for the smallest array that will do the job

• L9(3^4) orthogonal array works in this case
  – Three levels for the values
  – Four factors for the three variables

Another example

• Mapping
  – Client: C1=0, C2=1, C3=2
  – Server: S1=0, S2=1, S3=2
  – Message: M1=0, M2=1, M3=2

Example

Orthogonal array before mapping factors

Factor 1 Factor 2 Factor 3 Factor 4

Run 1 0 0 0 0

Run 2 0 1 1 2

Run 3 0 2 2 1

Run 4 1 0 1 1

Run 5 1 1 2 0

Run 6 1 2 0 2

Run 7 2 0 2 2

Run 8 2 1 0 1

Run 9 2 2 1 0

Example

Orthogonal array after mapping factors

Client Server Message

Test 1 C1 S1 M1

Test 2 C1 S2 M2

Test 3 C1 S3 M3

Test 4 C2 S1 M2

Test 5 C2 S2 M3

Test 6 C2 S3 M1

Test 7 C3 S1 M3

Test 8 C3 S2 M1

Test 9 C3 S3 M2

Example

• No left over levels

• One extra factor in original array

• This can be ignored
  – Still get an even distribution

• This created 9 test cases

Complex multi-level cases

• Consider the following system
  – 5 independent variables (A, B, C, D, E)
  – A and B have two different values (1,2)
  – C and D have three different values (1,2,3)
  – E has 6 different values
• Total number of test cases = 2 x 2 x 3 x 3 x 6 = 216

Complex multi-level cases

• To find a suitable OA, look in a catalog of arrays for an OA with
  – At least 6 levels
  – At least 5 factors
• Smallest is L49(7^8)
  – 49 test cases instead of 216
• Possible to find another array with fewer test cases
  – L18(3^6 6^1) => 18 runs

Complex multi-level cases

• Mapping
  – A: A1=0, A2=1
  – B: B1=0, B2=1
  – C: C1=0, C2=1, C3=2
  – D: D1=0, D2=1, D3=2
  – E: E1=0, E2=1, E3=2, E4=3, E5=4, E6=5

Complex multi-level cases

OA before mapping (first three of the 18 runs shown)

        F1 F2 F3 F4 F5 F6 F7
Run 1:  0  0  0  0  0  0  0
Run 2:  0  1  2  2  0  1  1
Run 3:  0  2  1  2  1  0  2

Complex multi-level cases

OA after mapping (first three runs shown)

        A   B   C   D   E
Test 1  A1  B1  C1  D1  E1
Test 2  A1  B2  C3  D3  E2
Test 3  A1  2   C2  D3  E3   (level 2 has no B value mapped yet)

Complex multi-level cases

• Extra factors can be ignored

• Left over levels are opportunities to add extra test cases

• A and B only have two levels, but three specified in array

• Must provide a value to have a useful test case

Complex multi-level cases

• Choice is arbitrary, but should choose wisely to add the most variety

Complex multi-level cases

OA after mapping (left-over level assigned)

        A   B   C   D   E
Test 1  A1  B1  C1  D1  E1
Test 2  A1  B2  C3  D3  E2
Test 3  A1  B1  C2  D3  E3

Web site for OA selection and information

• http://www.research.att.com/~njas/oadir/

• Good book on OAs: “Orthogonal Arrays: Theory and Applications” by A. S. Hedayat, N. J. A. Sloane, and John Stufken

Lessons learned from industry application

• Manual application
• Focusing on the wrong area
• Using OATS for minimal efforts
• OATS for high risk applications
• Picking the wrong parameters to combine

• “Orthogonally speaking” by Elfriede Dustin

Decision tables

• List all possible conditions (inputs) and all possible actions (outputs)

• Useful for describing critical components of a system that can be defined by a set of rules
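A rule set like this can be coded directly; the payroll-style table below is hypothetical (the slide’s actual payroll table is not reproduced here), but it shows the shape: every combination of conditions is enumerated, and each rule yields one test case.

```python
# Hypothetical decision table: (salaried?, worked overtime?) -> action
RULES = {
    (True,  True):  "flat salary",
    (True,  False): "flat salary",
    (False, True):  "hourly + overtime",
    (False, False): "hourly",
}

def pay_action(salaried: bool, overtime: bool) -> str:
    return RULES[(salaried, overtime)]

# One test case per rule (column) in the table
assert pay_action(True, True) == "flat salary"
assert pay_action(False, True) == "hourly + overtime"
assert pay_action(False, False) == "hourly"
```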

Decision table

Test cases for payroll example

Other BB techniques

• Behave like a dumb user (Ron Manning test)

• Look for bugs where you’ve already found them
  – If bugs are being found in upper boundaries, keep looking there!
  – Programmers tend to fix only the bug reported – nothing more!

• Follow experience, intuition, hunches

State transition diagrams

• Old but effective method for describing a system design and guiding our testing

• Functionality dependent on current input and also its past input (state and transitions)

• Transitions mapped to requirements
• States are expected output opportunities

Quiz

• Q: T/F, You can perform dynamic black box testing w/o a product specification or requirements document

• A: True. The technique is called exploratory testing, and you essentially use the SW as though it were the product spec. The risk is that you will not know if a feature is missing.

Quiz

• Q: If you are testing a program’s ability to print to a printer, what generic test-to-fail test cases might be appropriate?

• A: Attempt to print with no paper and with jammed paper. Take the printer offline, unplug the power, disconnect the printer cable, print with low ink or toner or with a missing cartridge. Look in the manual to see what errors it’s supposed to provide and attempt to produce them.

Quiz

• Q: What boundary conditions exist for the Print Range?

Quiz

• A: Slides option (0 – large number)
• Maybe try internal boundaries (254, 255, 256, 1023, 1024, 1025)
• Try printing pages 1-8 of a 6 page document (the SW must stop printing at page 6 because it’s out of data, not because it was told to stop – this is a bit different)

Quiz

• Q: What equivalence partitions would you create for a 10 character zip code in the form of 00000-0000?

Quiz

• A: Valid 5 digit ZIP codes (numeric, not actually in use)
• Valid 9 digit ZIP codes (with a dash)
• Short 5 digit (only four digits)
• Short 9 digit
• Long 5 digit (8 w/o a dash)
• Long 9 digit
• 10 digits with no dash
• Dash in wrong place
• More than one dash

Final comments

• Not a lot of industry experience reports (yet)

• OAs with strength greater than 2 require many more runs

• The number of pair-wise combinations is much smaller than the full Cartesian product

• Catalogs of OAs can be found in statistics books and Taguchi books

Black box art

Ad hoc testing

• Based on experience

• Pareto analysis approach

• Risk analysis (importance to the user)

• Problematic situations (boundaries, etc)

• Make sure problem can be replicated

Random testing

• Creating tests where the data is in the format of real data but all of the fields are generated randomly, often using a tool

• Minimally defined parameters
  – “Monkeys”
  – “Intelligent monkeys”

Random testing weaknesses

• Test often not realistic

• No gauge of actual coverage

• No measure of risk

• Many become redundant

• Lots of time to develop expected results

• Hard to recreate

Semi-random testing

• Refined random testing

• Equivalence partitioning

• Little added confidence to systematic techniques

• May explode if not careful

• “intelligent” monkey

Exploratory testing

• Test design and execution are conducted concurrently

• Results prompt tester to delve deeper

• Not the same as ad-hoc testing

• Good alternative to structured testing techniques

White box testing techniques

• Reviews and inspections

• Checklists

• Dynamic techniques

White box testing

• Look inside a component and create tests based on implementation

Cyclomatic complexity

• From mathematical graph theory

• C = e – n + 2p
  – e = number of edges in the graph (number of arrows)
  – n = number of nodes (basic blocks)
  – p = number of independent procedures

Example

C = 7 – 6 + 2 ( 1 ) = 3
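The computation can be sketched in code; the edge list below is a hypothetical 6-node, 7-edge flow graph matching the slide’s arithmetic (the actual graph figure is not reproduced here):

```python
# Cyclomatic complexity: C = e - n + 2p
def cyclomatic(edges, nodes, procedures=1):
    return len(edges) - len(nodes) + 2 * procedures

# Hypothetical control-flow graph with 7 edges and 6 nodes
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {1, 2, 3, 4, 5, 6}

# Matches the slide: C = 7 - 6 + 2(1) = 3
assert cyclomatic(edges, nodes) == 3
```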

Code coverage

• Design test cases using techniques discussed

• Measure code coverage

• Examine unexecuted code

• Create test cases to exercise uncovered code (if time permits)

Structure of a test procedure specification

Specification for a typical system-level test

Examining the code: White Box techniques

Why white box techniques?

• Find bugs early

• Find bugs that are hard to find with BB techniques

• Provides good ideas to BB testers

Formal reviews

• Identify problems (wrong and missing)

• Follow rules

• Prepare

• Write a report

Indirect results

• Communications

• Quality

• Team camaraderie

• Solutions

Peer reviews

• Least formal method

• Programmer and one or two others

• Looks for problems and oversights

• Follow basic rules (scaled back)

Walkthroughs

• Next step up in formality

• Formal presentation to small group

• One senior programmer is key

• Question things that are suspicious

• Report written summarizing meeting

Inspections

• Most formal and structured

• Presenter is not original programmer

• Participants (inspectors) look at code from different perspectives

• Other roles (moderator, recorder)

• Re-inspect if necessary

• Good for code and design docs

Coding standards

• Code may operate properly but not adhere to a standard or guideline

• Standards: established, fixed, have-to-follow rules
• Guidelines: suggested best practices
• Required for:
  – Reliability
  – Readability/maintainability
  – Portability

Where to find

• ANSI www.ansi.org

• International Engineering Consortium www.iec.org

• ISO www.iso.ch

• National Committee for Information Technology Standards www.ncits.org

Code checklists

• Data reference errors

• Data declaration errors

• Computation errors

• Comparison errors

• Control flow errors

• Subroutine parameter errors

• I/O errors

Data reference errors

• Is an uninitialized variable referenced?
• Is a variable used where a constant would work better? (checking the boundary of an array)
• Is a variable ever assigned a value of a different type than the variable’s declared type?
• Is memory allocated for referenced pointers?

Data declaration errors

• Are all variables assigned a correct length, type, storage class? (should a variable be declared as a string instead of an array of characters?)

• Are there any variables with similar names? (not necessarily a bug, but a possible source of confusion)

• Is each variable declared within its own module, or is it understood to be shared with a higher module?

Computation errors

• Do any calculations that use variables have different data types, such as adding an integer to a floating point number?

• Do any calculations that use variables have the same data types but are different lengths such as adding a byte to a word?

• Order of evaluation and operator precedence

Comparison errors

• Are comparisons correct? These are very susceptible to boundary condition problems

• Comparisons between fractional and floating point values?

• Does each Boolean expression state what it should?

Control flow errors

• If the language contains statement groups (begin…end, do…while), are the ends explicit and do they match the appropriate groups

• Is there a possibility of a premature loop exit?

• Is it possible that a loop never executes? Is it acceptable if it doesn’t?

Subroutine parameter errors

• Does a subroutine alter a parameter that is intended only as an input value?

• Do the units of each parameter match the units of each corresponding argument – English vs metric for example

• Proper use of global variables

I/O errors

• Does the SW strictly adhere to the specified format of the data being read or written by the external device?

• If the file or peripheral is not ready or present, is that error condition handled correctly?

Quiz

• Q: T/F Static white box testing can find missing items as well as problems

• A: True, missing items are probably more important than “normal” problems

Quiz

• Q: Besides being more formal, what’s the big difference between inspections and other types of reviews?

• A: With inspections, a person other than the original author is the presenter – the presenter must fully understand the software, which makes the review much more effective

Dynamic white box testing

Area covered

• Directly testing low level functions (APIs)
• Testing SW at the top level and adjusting test cases based on knowledge of the SW’s operation
• Gaining access to read variables and state information to determine if the SW is doing what you think
• Measuring how much of the code is covered and making adjustments

Dynamic white box testing vs debugging

“Software Testing” by Ron Patton

Against testing the whole program

• Difficult and sometimes impossible to find out what caused the problem

• Some bugs hide others

• Your experiences ..?

Unit and integration testing

“Software Testing” by Ron Patton

Drivers used to more effectively test lower level modules

“Software Testing” by Ron Patton

Test stubs

“Software Testing” by Ron Patton

How would you unit test this?

Sample unit test cases

Input string    Output integer value
“1”             1
“-1”            -1
“+1”            1
“0”             0
“-0”            0
“+0”            0
“1,2”           1
“abc”           0
“a123”          0
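One way to exercise a table like this is to drive a conversion routine with every row; the implementation below is hypothetical (an atoi-style conversion: optional sign, then leading digits, else 0), checked against the unambiguous rows above:

```python
import re

# Hypothetical atoi-style string-to-integer conversion
def to_int(s: str) -> int:
    m = re.match(r"[+-]?\d+", s)
    return int(m.group()) if m else 0

CASES = [("1", 1), ("-1", -1), ("+1", 1), ("0", 0),
         ("-0", 0), ("+0", 0), ("1,2", 1), ("a123", 0)]
for text, expected in CASES:
    assert to_int(text) == expected
```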

Data coverage

• Involves looking at variables, constants, arrays, data structures, keyboard and mouse input, files, screen input and output, and I/O to other devices like modems
  – Data flow
  – Sub-boundaries
  – Formulas and equations
  – Error forcing
  – Code coverage

Data flow

• Tracking a piece of data completely through the SW

• Use a debugger and watch variables to view data as the program runs
  – As opposed to BB, where you only know the variable at the beginning and at the end

Sub-boundaries – data examples

• A module that computes taxes might switch from using a data table to using a formula at a certain financial cut-off point

• An OS running low on RAM may start moving data to temporary storage on the hard drive
  – The boundary may change depending on how much space remains on disk

Formulas and equations

• Financial program to compute compound interest

• A = P(1 + r/n)^(nt)
• P = principal amount
• r = annual interest rate
• n = # times interest compounded yearly
• t = number of years
• A = amount after time t

• The white box tester must know to check for n=0 (is n the result of another computation? Is there any way to make n=0?)
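A sketch of the formula with the n=0 case guarded explicitly (the function and parameter names are illustrative):

```python
# Compound interest: A = P(1 + r/n)^(n*t)
def compound_amount(p: float, r: float, n: int, t: float) -> float:
    if n <= 0:
        # The division by n the white box tester worries about
        raise ValueError("compounding periods per year must be positive")
    return p * (1 + r / n) ** (n * t)

# $1000 at 5% compounded monthly for 1 year
assert round(compound_amount(1000, 0.05, 12, 1), 2) == 1051.16

# Forcing n = 0 should fail cleanly, not divide by zero silently
try:
    compound_amount(1000, 0.05, 0, 1)
except ValueError:
    pass
```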

Error forcing

• Using a debugger/watch window gives the ability to “force” variables to certain values
• Make sure what you are doing is realistic
  – Don’t set n=0 if it’s already checked at the top of the loop!!
• Try to force all error messages
  – Some errors are difficult to reproduce, but error forcing can allow you to check for the error message

Code coverage

• Program statement and line coverage

• Branch coverage

• Condition coverage

Quiz

• Q: How does knowing how the software works influence how and what you should test?

• A: If you test only with a black-box view of the software, you won’t know whether your test cases adequately cover all parts of the software, or whether some of the test cases are redundant

Quiz

• Q: T/F Always design your black box tests first

• A: True. Design your test cases based on what you believe the software is supposed to do. Then use white-box techniques to check them and make them most efficient

Test implementation

Chapter 6

Test implementation process

• Acquiring test data

• Developing test procedures

• Preparing the test environment

• Selecting and implementing the tools used to facilitate process

Test environment

• Collection of various pieces
  – Data
  – Hardware configurations
  – People (testers)
  – Interfaces
  – Operating systems
  – Manuals
  – Facilities

People

• Not just execution of tests

• Design and creation

• Should be done by people who understand the environment at a certain level
  – Unit testing by developers
  – Integration testing by systems people


Hardware configuration

• Each customer could have different configurations

• Develop “profiles” of customers

• Valuable when customer calls with a problem

• If cost limited, create a “typical” environment

Co-habitating software

• Applications that are installed on a PC will have other apps running alongside them as well

• Do they share common files?

• Is there competition for resources between the applications?

• Inventory and profile

Interfaces

• Difficult to do and a common source of problems once the system is delivered

• Systems may not have been built to work together
  – Different standards and technology

• Many tests have to be simulated, which adds to the difficulty

Source of test data

• Goal should be to create the most realistic data possible

• Real data is desirable
• Challenges
  – Different data formats
  – Sensitive
  – Classified (military)
• Adds to the overall cost

Data source characteristics

Volume of test data

• In many cases a limited volume of data is sufficient

• Volume, however, can have a significant impact on performance

• Mix is also important

Repetitive and tedious tasks

Test tooling traps

• No clear strategy
• Great expectations
• Lack of buy-in
• Poor training
• Automating the wrong thing
• Choosing the wrong tool
• Ease of use
• Choosing the wrong vendor

Test tooling traps

• Unstable software

• Doing too much, too soon

• Underestimating time/resources

• Inadequate or unique testing environment

• Poor timing

• Cost of tools

Evaluating testware

• QA group

• Reviews

• Dry runs

• Traceability

Defect seeding

• Developed to estimate the number of bugs resident in a piece of software

• The software is seeded with bugs, and tests are run to determine how many of them are found

• Can predict the number of bugs remaining

Defect Seeding
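The prediction step is usually the classic seeding estimate (often attributed to Mills): if testing finds s of S seeded bugs alongside n real bugs, the estimated total number of real bugs is n·S/s. A minimal sketch with illustrative numbers:

```python
# Defect seeding estimate: real_found * seeded / seeded_found
def estimated_total_bugs(seeded: int, seeded_found: int, real_found: int) -> float:
    if seeded_found == 0:
        raise ValueError("no seeded bugs found; cannot estimate")
    return real_found * seeded / seeded_found

# 20 bugs seeded, 10 of them found, plus 25 real bugs found
# -> estimated 50 real bugs total, so roughly 25 remaining
assert estimated_total_bugs(20, 10, 25) == 50.0
```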

Mutation analysis

• Used as a method for auditing the quality of unit testing

• Insert a mutant statement (bug) into code

• Run unit tests

• Result determines if unit testing was comprehensive or not
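A toy illustration of that audit (the functions and suites below are hypothetical): flip one comparison in a max-style routine, then see whether the unit tests notice the mutant.

```python
def original(a, b):
    return a if a > b else b

def mutant(a, b):
    return a if a < b else b   # seeded mutation: > flipped to <

# Each suite is a list of ((args), expected-result) pairs
weak_suite = [((3, 3), 3)]                       # a == b hides the flip
strong_suite = [((1, 2), 2), ((5, 4), 5)]

def kills(suite, fn):
    """True if any test in the suite detects (kills) the mutant."""
    return any(fn(*args) != expected for args, expected in suite)

assert not kills(weak_suite, mutant)   # weak tests: mutant survives
assert kills(strong_suite, mutant)     # stronger tests kill the mutant
```

A surviving mutant suggests the unit tests were not comprehensive.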

Steps in mutation analysis

Configuration testing