
Page 1:

Software Testing and Reliability: Reliability and Risk Assessment

Aditya P. Mathur, Purdue University

August 12-16, @ Guidant Corporation, Minneapolis/St Paul, MN

Graduate Assistants: Ramkumar Natarajan, Baskar Sridharan

Last update: August 16, 2002

Page 2:

Reliability and risk assessment

Learning objectives:

1. What is software reliability?

2. How to estimate software reliability?

3. What is risk assessment?

4. How to estimate risk using application architecture?

Page 3:

References

1. Nozer D. Singpurwalla and Simon P. Wilson, Statistical Methods in Software Engineering: Reliability and Risk, Springer, 1999.

2. John D. Musa, Anthony Iannino, and Kazuhira Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, 1987.

3. S. M. Yacoub and H. H. Ammar, "A Methodology for Architecture-Level Reliability Risk Analysis," IEEE Transactions on Software Engineering, Vol. 28, No. 6, June 2002, pp. 529-547.

4. Bruce Powell Douglass, Real-Time UML: Developing Efficient Objects for Embedded Systems, Addison-Wesley, 1998.

Page 4:

Software Reliability

Software reliability is the probability of failure-free operation of an application in a specified operating environment over a specified period of time.

Reliability is one quality metric; others include performance, maintainability, portability, and interoperability.

Page 5:

Operating Environment

Hardware: Machine and configuration

Software: OS, libraries, etc.

Usage (Operational profile)

Page 6:

Uncertainty

Uncertainty is a common phenomenon in our daily lives.

In software engineering, uncertainty occurs in all phases of the software life cycle.

Examples:

• Will the schedule be met?

• How many faults remain in the application?

• How many testers should be deployed?

• How many months will it take to complete the design?

Page 7:

Probability and statistics

Uncertainty can be quantified and managed using probability theory and statistical inference.

Probability theory assists with quantification and combination of uncertainties.

Statistical inference assists with revision of uncertainties in light of the available data.

Page 8:

Probability Theory

In any software process there are known and unknown quantities.

The known quantities constitute the history, denoted by H.

The unknown quantities are referred to as random quantities.

Each unknown quantity is denoted by a capital letter such as T or X.

Page 9:

Random Variables

When a random quantity can assume numerical values it is known as a random variable.

Specific values of T and X are denoted by lower case letters t and x and are known as realizations of the corresponding random quantities.

Example: If X denotes the outcome of a coin toss, then X can assume the value 0 (for heads) or 1 (for tails). X is a random variable under the assumption that the outcome of a toss is not known with certainty.

Page 10:

Probability

The probability of an event E, computed at time τ in light of history H, is given by P(E | H). For brevity we will suppress H and τ and denote the probability of E simply as P(E).

Page 11:

Random Events

A random quantity that may assume one of two values, say e1 and e2, is a random event, often denoted by E.

Examples:

• Program P will fail on the next run.

• The design for application A will be completed in less than 3 months.

• The time to next failure of application A will be greater than t.

• Application A contains no errors.

Page 12:

Binary Random Variables

When e1 and e2 are numerical values, such as 0 and 1, E is known as a binary random variable.

A discrete random variable is one whose realizations are countable.

Example: the number of failures encountered over four hours of application use.

A continuous random variable is one whose realizations are not countable.

Example: Time to next failure.

Page 13:

Probability distribution function

For a random variable X, let E be the event that X ≤ x. Then P(X ≤ x), viewed as a function of x, is known as the distribution function of X and is denoted by F_X(x).

If E is the event that X = x and P(X = x) > 0, then X is said to have a point mass at x.

Note that F_X(x) is nondecreasing in x and ranges from 0 to 1.

Page 14:

Probability density function

If X is continuous, takes all values in some interval I, and F_X(x) is differentiable with respect to x for all x in I, then F_X is absolutely continuous.

The derivative of F_X(x) at x is denoted by f_X(x) and is known as the probability density function of X.

f_X(x) dx is the approximate probability that the random variable X takes on a value in the interval (x, x + dx).

Page 15:

Exponential Density function: Continuous random variable

f(x | λ) = λ e^{-λx}, for x > 0 and λ > 0.

P(X > x | λ) = e^{-λx}.

[Figure: plot of the density f(x | λ) against x.]

Page 16:

Binomial Distribution

Suppose that an application is executed N times each with a distinct input. We want to know the number of inputs, X, on which the application will fail.

Note that the proportion of the correct outputs is a measure of the reliability of the application.

X can assume the values x = 0, 1, 2, …, N. We are interested in the probability that X = x.

Each input to the application can be treated as a Bernoulli trial. This gives Bernoulli random variables Xi, i = 1, 2, …, N, where Xi is 1 if the application fails on the ith input and 0 otherwise. Note that X = X1 + X2 + … + XN.

Page 17:

Binomial Distribution [contd.]

Under certain assumptions, the following probability model, known as the Binomial distribution, is used:

P(X = x | p) = C(N, x) p^x (1 - p)^(N-x), x = 0, …, N,

where C(N, x) = N! / (x! (N - x)!).

Here p is the probability that Xi = 1 for i = 1, …, N. In other words, p is the probability of failure on any single run.
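A minimal Python sketch of this model (not part of the original deck; the values of N, p, and x are hypothetical):

    from math import comb

    def binomial_pmf(x: int, n: int, p: float) -> float:
        """P(X = x | p): probability of exactly x failures in n independent runs."""
        return comb(n, x) * p**x * (1 - p) ** (n - x)

    N, p = 100, 0.02  # hypothetical: 100 runs, 2% per-run failure probability
    print(binomial_pmf(0, N, p))                         # P(no failures) over N runs
    print(sum(binomial_pmf(x, N, p) for x in range(3)))  # P(X <= 2)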

Page 18:

Poisson Distribution

When the application under test is almost error free and is subjected to a large number of inputs, N is large, p is small, and Np is moderate.

This assumption leads to a simplification of the Binomial distribution into the Poisson distribution, given by

P(X = x | λ) = e^{-λ} λ^x / x!, x = 0, 1, 2, …,

with λ = Np.
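A small Python check of the approximation (illustrative only; N and p are hypothetical):

    from math import comb, exp, factorial

    def binomial_pmf(x, n, p):
        return comb(n, x) * p**x * (1 - p) ** (n - x)

    def poisson_pmf(x, lam):
        return exp(-lam) * lam**x / factorial(x)

    N, p = 10_000, 0.0003  # many runs, tiny per-run failure probability
    lam = N * p
    for x in range(4):     # the two columns agree closely
        print(x, binomial_pmf(x, N, p), poisson_pmf(x, lam))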

Page 19:

Software Reliability: Types

Reliability on a single execution: P(X = 1 | H), modeled by the Bernoulli distribution.

Reliability over N executions: P(X = x | H), for x = 0, 1, 2, …, N, given by the Binomial distribution, or by the Poisson distribution for large N and small p.

Reliability over an infinite sequence of executions: P(X = x | H), for x = 1, 2, …. Here we are interested in the number of inputs after which the first failure occurs; this is given by the geometric distribution.

Page 20:

Software Reliability: Types [contd.]

When the inputs to the software arrive continuously over time, we are interested in P(X ≥ x | H), i.e., the probability that the first failure occurs after x time units. This is given by the exponential distribution.

The time of occurrence of the kth failure is given by the Gamma distribution.

There are several other models of reliability, over one hundred!

Page 21:

Software failures: Sources of uncertainty

Uncertainty about the presence and location of defects.

Uncertainty about the use of run types. Will a run for a given input state cause a failure?

Page 22:

Failure Process

Inputs arrive at an application at random times.

Some inputs cause failures and others do not.

T1, T2, … denote the (CPU) times between successive application failures.

Most reliability models are centered around the interfailure times.

Page 23:

Failure Intensity and Reliability

Failure intensity is the number of failures experienced within a unit of time. For example, the failure intensity of an application might be 0.3 failures/hr.

Failure intensity is an alternate way of expressing reliability R(τ), the probability of no failures over a time duration τ.

For a constant failure intensity λ we have R(τ) = e^{-λτ}.

It is safe to assume that during testing and debugging the failure intensity decreases with time, and thus the reliability increases.
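A one-line Python check of this relationship (the numbers are hypothetical):

    from math import exp

    def reliability(lam: float, tau: float) -> float:
        """R(tau) = e^{-lam * tau} for a constant failure intensity lam."""
        return exp(-lam * tau)

    print(reliability(0.3, 8.0))  # e.g., 0.3 failures/hr over an 8-hour duty cycle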

Page 24:

Jelinski and Moranda Model [1972]

The application contains an unknown number N of defects.

Each time the application fails, the defect that caused the failure is removed.

The failure rate during the ith interfailure interval Ti is proportional to (N - i + 1).

Debugging is perfect.

The failure rate is proportional to the number of remaining defects, with a constant of proportionality.

Page 25:

Jelinski and Moranda Model [contd.]

Thus, given the failure times 0 = S0 ≤ S1 ≤ … ≤ Si, i = 1, 2, …, and some constant c, the failure rate r_Ti during the interval following the (i-1)th failure is

r_Ti(t) = c (N - i + 1), for t > S_{i-1}.

Note that the failure rate drops by a constant amount at each failure.

[Figure: step plot of the failure rate r(t) against time t, dropping at the failure times S0 = 0, S1, S2, S3.]
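A minimal Python sketch of this step behavior (N, c, and the failure times below are hypothetical):

    def jm_failure_rate(t, failure_times, N, c):
        """Jelinski-Moranda: after i observed failures the rate is c * (N - i)."""
        i = sum(1 for s in failure_times if s <= t)  # failures observed up to time t
        return c * (N - i)

    times = [3.0, 7.0, 12.0]          # observed failure times
    for t in (0.0, 5.0, 10.0, 15.0):  # the rate drops by c at each repair
        print(t, jm_failure_rate(t, times, N=10, c=0.05))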

Page 26:

Musa-Okumoto Model: Terminology

Execution time: τ

Initial failure intensity: λ0 = f K ω0

Average number of failures experienced at a given time: μ

Total number of failures in infinite time: ν0 = ω0 / B

Fault reduction factor: B

Per-fault hazard rate: φ; λ0 / ν0 = B φ

Execution time measured from the current time: τ'

Page 27:

Musa-Okumoto Model: Terminology [contd.]

Number of inherent faults: ω0 = δ Is

Number of source instructions: Is

Instruction execution rate: r

Number of executable object instructions: I

Linear execution frequency: f = r / I

Fault exposure ratio: K

Number of inherent faults per source instruction: δ

Page 28:

Musa-Okumoto: Basic Model

Failure intensity for the basic execution time model:

λ(μ) = λ0 [1 - μ / ν0]

λ(τ) = λ0 e^{-λ0 τ / ν0}

R(τ' | τ) = exp{ -[ν0 e^{-λ0 τ / ν0}] [1 - e^{-λ0 τ' / ν0}] }
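A short Python transcription of these formulas (the λ0 and ν0 values are hypothetical):

    from math import exp

    def intensity_basic(tau, lam0, nu0):
        """Basic execution-time model: failure intensity after execution time tau."""
        return lam0 * exp(-lam0 * tau / nu0)

    def reliability_basic(tau_p, tau, lam0, nu0):
        """R(tau' | tau): probability of no failures in the next tau' time units."""
        expected = nu0 * exp(-lam0 * tau / nu0) * (1 - exp(-lam0 * tau_p / nu0))
        return exp(-expected)

    print(intensity_basic(20.0, lam0=10.0, nu0=100.0))
    print(reliability_basic(1.0, 20.0, lam0=10.0, nu0=100.0))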

Page 29:

Musa-Okumoto: Logarithmic Poisson Model

Failure intensity decay parameter: θ

Failure intensity for the logarithmic Poisson model:

λ(μ) = λ0 e^{-θ μ}

λ(τ) = λ0 / (λ0 θ τ + 1)

R(τ' | τ) = [ (λ0 θ τ + 1) / (λ0 θ (τ + τ') + 1) ]^{1/θ}
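And the corresponding sketch for the logarithmic Poisson model (hypothetical λ0 and θ):

    def intensity_log_poisson(tau, lam0, theta):
        """Logarithmic Poisson model: failure intensity after execution time tau."""
        return lam0 / (lam0 * theta * tau + 1.0)

    def reliability_log_poisson(tau_p, tau, lam0, theta):
        """R(tau' | tau) for the logarithmic Poisson execution-time model."""
        return ((lam0 * theta * tau + 1.0) /
                (lam0 * theta * (tau + tau_p) + 1.0)) ** (1.0 / theta)

    print(intensity_log_poisson(20.0, lam0=10.0, theta=0.02))
    print(reliability_log_poisson(1.0, 20.0, lam0=10.0, theta=0.02))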

Page 30:

Failure intensity comparison as a function of average failures experienced

[Figure: failure intensity λ against the average number of failures experienced μ, starting at λ0 for both models; the basic model decreases linearly to zero at ν0, while the logarithmic Poisson model decays exponentially.]

Page 31:

Failure intensity comparison as function of execution time

[Figure: failure intensity λ against execution time τ for the basic and logarithmic Poisson models, both starting at λ0.]

Page 32:

Which Model to use?

Uniform operational profile: Use the basic model

Non-uniform operational profile: Use the logarithmic Poisson model

Page 33:

Other issues

Counting failures

When is a defect repaired?

Impact of imperfect repair

Page 34:

Independent check against code coverage

[Figure: reliability estimate plotted against code coverage, with low and high coverage thresholds CL and CH, and low and high reliability estimates RL and RH.]

CL, RL: Unreliable estimate
CL, RH: Unreliable estimate
CH, RL: Reliable estimate
CH, RH: Reliable estimate

A reliability estimate obtained at low code coverage is suspect regardless of its value; high code coverage lends credibility to the estimate, whether it is low or high.

Page 35:

Operational Profile

A quantitative characterization of how an application will be used. This characterization requires knowledge of the input variables.

An input state is a vector of the values of all input variables.

Input variables: an interrupt is an input variable, and so are all environment variables and variables whose values are input by the user via the keyboard or from a file in response to a prompt.

Internal variables, computed from one or more input variables, are not input variables.

Intermediate results, and interrupts generated as a result of the execution, should not be considered input variables.

Page 36:

Operational Profile [contd.]

Runs of an application that begin with identical input states belong to the same run type.

Example 1: Two withdrawals by the same person, from the same account, and of the same dollar amount belong to the same run type.

Example 2: Reservations made for two different people on the same flight belong to different run types.

Function: a grouping of different run types, conceived at the time of requirements analysis.

Page 37:

Operational Profile [contd.]

Function: A set of different run types. A function is conceived at the time of requirements analysis. A function is analogous to a use-case.

Operation: A set of run types for the application that is built.

Page 38:

Input Space: Graphical View

[Figure: the input space depicted as a collection of input states, grouped into Function 1, Function 2, Function 3, Function 4, …, Function k.]

Page 39:

Functional Profile

Function    Probability of occurrence
F1          0.60
F2          0.35
F3          0.05

Page 40:

Operational Profile

Function    Operation    Probability of occurrence
F1          O11          0.40
F1          O12          0.10
F1          O13          0.10
F2          O21          0.05
F2          O22          0.15
F3          O31          0.15
F3          O33          0.05
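An operational profile maps directly onto weighted test selection. A minimal Python sketch (illustrative only; the dictionary simply transcribes the table above):

    import random

    profile = {                                  # operation -> probability of occurrence
        "O11": 0.40, "O12": 0.10, "O13": 0.10,   # function F1
        "O21": 0.05, "O22": 0.15,                # function F2
        "O31": 0.15, "O33": 0.05,                # function F3
    }
    assert abs(sum(profile.values()) - 1.0) < 1e-9

    # draw test cases so operations are exercised in proportion to expected field use
    ops, weights = zip(*profile.items())
    print(random.choices(ops, weights=weights, k=10))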

Page 41:

Modes and Operational Profile

Mode      Function    Operation    Probability of occurrence
Normal    F1          O11          0.40
Normal    F1          O12          0.10
Normal    F1          O13          0.10
Normal    F2          O21          0.05
Normal    F2          O22          0.15
Normal    F3          O31          0.15
Normal    F3          O33          0.05

Page 42:

Modes and Operational Profile [contd.]

Mode            Function    Operation    Probability of occurrence
Administrative  AF1         AO11         0.40
Administrative  AF1         AO12         0.10
Administrative  AF2         AO21         0.50

Page 43:

Reliability Estimation Process

[Flowchart: Develop the operational profile → perform system test → collect failure data → compute reliability → objective met? No: remove defects and continue testing. Yes: the application is ready for release.]

Page 44:

Risk Assessment

Risk is a combination of two factors:

• Probability of malfunction
• Consequence of malfunction

Dynamic complexity and coupling metrics can be used to account for the probability of a fault manifesting itself as a failure.

Risk assessment is useful for:

• Identifying complex modules that need more attention
• Identifying potential trouble spots
• Estimating test effort

Page 45:

Question of interest

Given the architecture of an application, how does one quantify the risk associated with the given architecture?

Note that risk analysis, as described here, can be performed prior to the development of any code and soon after the system architecture, in terms of its components and connections, is available.

Page 46:

Risk Assessment Procedure

1. Develop the system architecture.
2. Determine component and connector complexity.
3. Develop operational scenarios and their likelihoods.
4. Perform severity analysis.
5. Develop risk factors.
6. Develop the CDG.
7. Perform risk analysis.

Page 47:

Cardiac Pacemaker: Behavior Modes

A behavior mode is indicated by a 3-letter acronym L1L2L3.

L1: What is paced?
• A: Atrium
• V: Ventricle
• D: Dual (both)

L2: Which chamber is being monitored?
• A: Atrium
• V: Ventricle
• D: Dual (both)

L3: What is the mode type?
• I: Inhibited
• T: Triggered
• D: Dual pacing

Example (VVI): the Ventricle is paced when a Ventricular sense does not occur; pacing is Inhibited if a sense does occur.

Page 48:

Pacemaker: Components and Communication

[Figure: component diagram showing the Reed Switch, Coil Driver, Communication Gnome, Atrial Model, and Ventricular Model, with "enables" connections among them and external interfaces to the magnet, the programming channel, and the heart.]

Page 49:

Component Description

Reed Switch (RS): Magnetically activated switch; must be closed before programming can begin.

Coil Driver (CD): Pulsed by the programmer to send 0's and 1's.

Atrial Model (AR): Controls heart pacing.

Communications Gnome (CG): Receives commands as bytes from the CD and sends them to AR and VT.

Ventricular Model (VT): Controls sensing and the refractory period.

Page 50:

Scenarios

Programming: The programmer sets the operation mode of the device.

AVI: VT monitors the heart. When a heartbeat is not sensed, AR paces the heart and a refractory period is in effect.

AAI: The AR component paces the heart when it does not sense any pulse.

VVI: The VT component paces the heart when it does not sense any pulse.

VVT: The VT component continuously paces the heart.

AAT: The AR component continuously paces the heart.

Page 51:

Static Complexity for OO Designs

Coupling Between Classes (CBC): Total number of other classes to which a class is coupled.

Coupling: Two classes are considered coupled if methods of one class use methods or instance variables of the other class.

Page 52:

Operational Complexity for Statecharts

The dynamic complexity factor for each component is based on the cyclomatic complexity of the statechart specification of that component.

Given a program graph G with e edges and n nodes, the cyclomatic complexity is V(G) = e - n + 2.

For each execution scenario Sk, a subset of the statechart specification of the component is executed, thereby exercising state entries, state exits, and fired transitions.

The cyclomatic complexity of the executed path for each component Ci is called the operational complexity, denoted cpx_k(Ci).
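A tiny Python illustration of V(G) = e - n + 2 (the graph below is a hypothetical if-then-else):

    def cyclomatic_complexity(edges, nodes):
        """V(G) = e - n + 2 for a single-entry, single-exit program graph."""
        return len(edges) - len(nodes) + 2

    nodes = ["entry", "then", "else", "exit"]
    edges = [("entry", "then"), ("entry", "else"),
             ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(edges, nodes))  # 2: two independent paths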

Page 53:

Dealing with Composite States

[Figure: statechart with a composite state s1 containing s11 (entered via init and t11) and a composite state s2 containing s21 and s22 (entered via init), with transitions t12 and t13 between them.]

Cyclomatic complexity for the s11-to-s22 transition:

VGx(s11) + VGa(t11) + VGx(s1) + VGa(t12) + VGe(s1) + VGa(t13) + VGe(s22)

where VGp, for p in {x, a, e}, is the complexity of the exit (x), action (a), and entry (e) code segments.

Page 54:

Dynamic Complexity for Statecharts

Each component of the model is assigned a complexity variable.

For each execution scenario, these variables are updated with the complexity measure of the thread that is triggered for that particular scenario.

At the end of the simulation, the tool reports the dynamic complexity value for each component.

The average operational complexity of component Ci is then computed as

cpx(Ci) = Σ_{k=1}^{|S|} PS_k · cpx_k(Ci)

where PS_k is the probability of scenario S_k and |S| is the total number of scenarios.
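A minimal Python sketch of this weighted average (the scenario names, probabilities, and complexities are hypothetical):

    scenario_prob = {"S1": 0.6, "S2": 0.4}  # scenario -> probability PS_k
    cpx_k = {                                # scenario -> component -> cpx_k(Ci)
        "S1": {"C1": 8.0, "C2": 3.0},
        "S2": {"C1": 2.0, "C2": 12.0},
    }

    components = {c for m in cpx_k.values() for c in m}
    avg_cpx = {c: sum(scenario_prob[k] * cpx_k[k].get(c, 0.0) for k in scenario_prob)
               for c in components}
    print(avg_cpx)  # {'C1': 5.6, 'C2': 6.6}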

Page 55:

Component Complexity

Sequence diagrams are developed for each scenario. Each sequence diagram is used to simulate the corresponding scenario.

Domain experts determine the relative probability of occurrence of each scenario. This is akin to the operational profile of an application.

Simulation is used to compute the dynamic complexity of each component.

The average operational complexity is then computed as the sum of the scenario component complexities weighted by the scenario probabilities.

The component complexities are then normalized against the highest component complexity.

Page 56:

Connector Complexity

Export coupling, EC_k(Ci, Cj), measures the coupling of component Ci with respect to component Cj. It is the percentage of messages sent from Ci to Cj relative to the total number of messages exchanged during the execution of scenario Sk.

The export coupling metric for a pair of components for a given scenario is extended to an operational profile by averaging over all scenarios, weighted by the probabilities of occurrence of the scenarios considered:

EC(Ci, Cj) = Σ_{k=1}^{|S|} PS_k · EC_k(Ci, Cj)
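A small Python sketch of both steps (the message counts and scenario probabilities are hypothetical):

    def export_coupling(msg_counts, total_msgs):
        """EC_k(Ci,Cj): fraction of all messages in scenario k sent from Ci to Cj."""
        return {pair: n / total_msgs for pair, n in msg_counts.items()}

    ec_k = {  # per-scenario coupling from simulated message traces
        "S1": export_coupling({("C1", "C2"): 6, ("C2", "C1"): 4}, 10),
        "S2": export_coupling({("C1", "C2"): 1, ("C2", "C1"): 9}, 10),
    }
    prob = {"S1": 0.7, "S2": 0.3}

    pairs = {p for m in ec_k.values() for p in m}
    ec = {p: sum(prob[k] * ec_k[k].get(p, 0.0) for k in prob) for p in pairs}
    print(ec)  # {('C1', 'C2'): 0.45, ('C2', 'C1'): 0.55}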

Page 57:

Connector Complexity [contd.]

Simulation is used to determine the dynamic coupling measure for each connector.

Coupling amongst components is represented in the form of a matrix.

Coupling values are normalized to the highest coupling value.

Page 58:

Component Complexity Values

Scenario (probability)         RS      CD      CG      AR       VT
Programming (0.01)             8.3     67.4    24.3
AVI (0.29)                                             53.2     46.8
AAT (0.15)                                             100
AAI (0.20)                                             100
VVI (0.15)                                                      100
VVT (0.20)                                                      100
% of architecture complexity   .083    .674    .243    50.248   48.572
Normalized                     0.002   0.013   0.005   1        0.963

Page 59:

Coupling Matrix

(Each row lists that component's nonzero coupling values; the columns are RS, CD, CG, AR, VT, Prog., and Heart.)

RS: .0014, .0014
CD: .003, .011
CG: .002, .0014, .0014
AR: .25, 1
VT: .27, .873
Programmer: .0014, .006
Heart: .123, .307

Page 60:

Severity Analysis

Apart from complexity, risk also depends on the severity of failures of components and connectors.

Risk factors are associated with each component and connector by performing severity analysis.

The basic failure mode(s) of each component and connector, and their effect on the overall system, are studied using failure mode and effects analysis (FMEA).

A simulation tool is used to inject faults, one by one, into each component and each connector.

The effect of each fault, and of the resulting failure, is studied. Domain experts can rank the severity of failures, thus ranking the effect of a component or connector failure.

Page 61:

Severity Ranking

Catastrophic (0.95): Failure may cause death or total system loss.

Critical (0.75): Failure may cause severe injury, property damage, system damage, or loss of production.

Marginal (0.5): Failure may cause minor injury, property damage, system damage, delay or loss of production.

Minor (0.25): Failure not serious enough to cause injury, property damage, or system damage but will result in unscheduled maintenance or repair.

Domain experts assign severity indices (svrtyi) to the severity classes.

Page 62:

Heuristic Risk Factor

The highest severity index (svrty_i) corresponding to a severity level of failure of a given component i is assigned as its severity value.

By comparing the result of the simulation with the expected operation, the severity level of each faulty component for a given scenario is determined.

A heuristic risk factor (hrf_i) is then computed for each component from its complexity and severity value:

hrf_i = cpx_i × svrty_i
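A direct Python transcription, using the dynamic complexity and severity values from the pacemaker slides; it reproduces the component risk factor table that appears later:

    cpx   = {"RS": 0.002, "CD": 0.013, "CG": 0.005, "AR": 1.0,  "VT": 0.963}
    svrty = {"RS": 0.25,  "CD": 0.25,  "CG": 0.5,   "AR": 0.95, "VT": 0.95}

    hrf = {c: cpx[c] * svrty[c] for c in cpx}  # hrf_i = cpx_i * svrty_i
    print(hrf)  # {'RS': 0.0005, 'CD': 0.00325, 'CG': 0.0025, 'AR': 0.95, 'VT': 0.91485}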

Page 63:

FMEA for components (sample)

Component | Failure | Cause | Effect | Criticality
RS | Communication not enabled | Error in translating the magnet command | Unable to program the pacemaker; schedule a maintenance task | Minor
VT | No heart pulses are sensed though the heart is working fine | Heart sensor is malfunctioning | Heart is paced incorrectly; the patient could be harmed | Critical

Page 64:

FMEA for connectors (sample)

Connector | Failure | Cause | Effect | Criticality
AR-Heart | Failed to pace the heart in AVI mode | Pacing h/w device malfunction | Heart operation is irregular | Catastrophic
CG-VT | Incorrect command sent (e.g., ToOff instead of ToIdle) | Incorrect interpretation of program bytes | Incorrect operation mode and pacing of the heart; the device is still monitored by the physician; immediate maintenance required | Marginal

Page 65:

Component Risk Factors: Using Dynamic Complexity

                      RS       CD       CG       AR      VT
Dynamic complexity    .002     .013     .005     1       .963
Severity              .25      .25      .5       .95     .95
Risk factor           .0005    .00325   .0025    .95     .91485

Page 66:

Connector Risk Factors: Using Dynamic Complexity

(Each row lists that component's nonzero connector risk factors, i.e., its coupling values weighted by severity; the columns are RS, CD, CG, AR, VT, Prog., and Heart.)

RS: .00035, .00035
CD: .00075, .00275
CG: .0005, .0007, .0007
AR: .2375, .95
VT: .2565, .82935
Prog.: .00035, .0015
Heart: .11685, .2916

Page 67:

Component Risk Factors: Using Static Complexity

                             RS      CD     CG    AR     VT
CBC                          0.47    0.8    1     0.6    0.6
Severity                     0.25    0.25   0.5   0.95   0.95
Risk factor (based on CBC)   0.119   0.2    0.5   0.57   0.57

Page 68:

Component Risk factors: Comparison

Dynamic metrics better distinguish AR and VT components as high risk when compared with RS, CD, and CG.

Using static metrics, CG is considered at the same risk level as AR and VT.

In the pacemaker, AR and VT control the heart and hence are the highest-risk components; this is confirmed when the risk factors are computed using dynamic metrics.

Page 69:

Component Dependency Graphs (CDGs)

A CDG is described by sets N and E, where N is a set of nodes and E is a set of edges. Two nodes s and t in N are designated as the start and termination nodes.

Each node n in N is a triple <Ci, RCi, ECi>, where Ci is the component corresponding to n, RCi is the reliability of Ci, and ECi is the average execution time of Ci.

Each edge e in E is a triple <Tij, RTij, PTij>, where Tij is the transition from node Ci to Cj, RTij is the reliability of this transition, and PTij is the transition probability.

In the methodology described here, risk factors replace the reliabilities of components and transitions.

Page 70:

Generation of CDGs

Estimate the execution probability of each scenario.

For each scenario estimate the execution time of each component and then, using the probability of each scenario, compute the average execution time of each component.

Estimate the transition probability of each transition.

Estimate the complexity factor of each component.

Estimate the complexity factor of each connector.

Page 71:

CDG for the Pacemaker (not all transition labels shown)

[Figure: CDG with start node s and termination node t. Nodes: <RS, 0.0005, 5>, <CD, 0.003, 5>, <CG, 0.0025, 5>, <AR, 0.95, 40>, <VT, 0.95, 40>, <Prog, 0, 5>, <Heart, 0, 5>. Sample edge labels: <·, 3.5x10^-4, 0.002>, <·, 0, 0.35>, <·, 0, 0.34>, <·, 0.29, 0.64>.]

Page 72:

Reliability Risk Analysis

Architecture risk factor is obtained by aggregating the risk factors of individual components and connectors.

Example: Let L be the length of an execution sequence, i.e., the number of components executed along the sequence. Then the risk factor is given by

HRF = 1 - ∏_{i=1}^{L} (1 - hrf_i)

where hrf_i is the risk factor associated with the ith component, or connector, in the sequence.
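A one-function Python sketch of this aggregation (the sequence below is hypothetical, built from the pacemaker risk factors):

    from math import prod  # Python 3.8+

    def sequence_hrf(hrfs):
        """HRF = 1 - prod(1 - hrf_i) over the components/connectors in a sequence."""
        return 1.0 - prod(1.0 - h for h in hrfs)

    print(sequence_hrf([0.0025, 0.95, 0.95]))  # e.g., CG -> AR -> AR-Heart connector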

Page 73:

Risk Analysis Algorithm: OR Paths

Traverse the CDG starting at node s; stop when either t is reached or the average application execution time is consumed.

Breadth expansions correspond to "OR" paths. The risk factors associated with the nodes along a breadth expansion are summed, weighted by the transition probabilities.

Example: from s, edge e1 leads to node n1 and edge e2 leads to node n2, with attributes:

e1: <(s, n1), 0, 0.3>
e2: <(s, n2), 0, 0.7>
n1: <C1, 0.5, 5>
n2: <C2, 0.6, 12>

HRF = 1 - [(1 - 0.5)(0.3) + (1 - 0.6)(0.7)] = 0.57

Page 74:

Risk Analysis Algorithm-AND paths

Risk Analysis Algorithm: AND Paths

The depth of a path implies sequential execution; "AND" paths also take the connector risk factors (hrf_ij) into consideration.

Example: suppose that node n1 is reached from node s via edge e1 and that node n2 is reached from n1 via edge e2. Attributes of the edges and components are as follows:

e1: <(s, n1), 0, 0.3>
e2: <(n1, n2), 0, 0.7>
n1: <C1, 0.5, 5>
n2: <C2, 0.6, 12>

HRF = 1 - [(1 - 0.5)(0.3) × (1 - 0.6)(0.7)] = 0.958; Time = Time + 5 + 12
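A minimal Python sketch of the two aggregation rules, following the slide formulas literally (the node and edge values come from the examples above):

    def or_hrf(branches):
        """Breadth ("OR"): weighted sum of (1 - hrf) over alternative branches."""
        return 1.0 - sum(p * (1.0 - h) for p, h in branches)

    def and_hrf(path):
        """Depth ("AND"): product of weighted (1 - hrf) terms along a sequence."""
        acc = 1.0
        for p, h in path:  # (transition probability, node hrf)
            acc *= (1.0 - h) * p
        return 1.0 - acc

    print(or_hrf([(0.3, 0.5), (0.7, 0.6)]))   # 0.57, as on the OR-paths slide
    print(and_hrf([(0.3, 0.5), (0.7, 0.6)]))  # 0.958, as on the AND-paths slide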

Page 75:

Pacemaker Risk

Given the architecture and the risk factors associated with components and connectors, the risk factor associated with the pacemaker is computed to be approximately 0.9.

This value of risk is considered high. It implies that the pacemaker architecture is critical and failures are likely to be catastrophic.

Risk analysis tells us that the VT and AR components are the highest-risk components.

Risk analysis also tells us that the connectors between the VT, AR, and Heart components are the highest-risk connectors.

Page 76:

Advantages of Risk Analysis

The CDG is useful for the risk analysis of hierarchical systems: risks for subsystems can be computed and then aggregated to obtain the risk of the entire system.

The CDG is useful for performing sensitivity analysis. One could study the impact of changing the risk factor of a component on the risk associated with the entire system.

Since the analysis can be done prior to coding, one might revise the architecture, or retain it and allocate coding and testing resources based on the individual risk factors.

Page 77:

Summary

Reliability, modeling uncertainty, failure intensity, operational profile, reliability growth models, parameter estimation.

Risk assessment, architecture, severity analysis, risk factors, CDGs.