Risk Mitigation Trees: Review test handovers with stakeholders (2004)


Page 1: Risk Mitigation Trees - Review test handovers with stakeholders (2004)

© Thompson information Systems Consulting Limited
www.qualtechconferences.com

Risk Mitigation Trees: Review test handovers with stakeholders

12th European Conference on Software Testing, Analysis & Review
29 November - 03 December 2004 – Köln, Germany

Management Track session T6

Neil Thompson
Thompson information Systems Consulting Limited
www.TiSCL.com

Page 2: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


1. Introduction

• Primary objective of presentation – to share two simple techniques (used successfully on recent projects):
  – Testing Review Boards (TRBs) and
  – Risk Mitigation Trees (RMTs)

• Used for making collaborative decisions at development-test handovers, then at go-live

• Go well together, though can be used separately
• TRBs are meetings of testing’s stakeholders at handovers
• RMTs are a diagramming technique to split residual risk
• There is also a secondary objective of this presentation…

Page 3: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Introduction (cont’d)

• Secondary objective is to generalise TRBs & RMTs into a broad framework of decision-making:
  – so we can better understand the nature of the decisions
  – so we can improve the process
  – in future we may be able to benefit from better reliability theories and more user-friendly statistics
  – there are relevant considerations in decision theory & game theory
  – the process overall may be integrated by cybernetics & systems science (self-organising systems)
• Most of the esoteric stuff is in the accompanying paper
• This presentation is based on my experiences in large projects & programmes, but the principles should be valid in other handover contexts, eg software product development

Page 4: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Agenda

• 2. Traditional methods of controlling handovers
• 3. Judging the best date for handover
• 4. Co-operation: Testing Review Boards
• 5. Pragmatism: Risk Mitigation Trees
• 6. Decision theory
• 7. Game theory
• 8. Systems theory and cybernetics
• 9. Conclusions and way forward

Page 5: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


2. Traditional methods of controlling handovers

• “Over the wall”:
  – bugs may be passed over undetected, amplifying problems later
  – even if the software works well, there could be insufficient collaboration
  – not only waterfall: could also affect iterative methods (but not truly Agile methods, which emphasise interpersonal communications)
• Handover certificates:
  – can languish in in-trays; very often signed with caveats
• Correspondence of exit & entry criteria:
  – only the simplest waterfall method has no overlap
  – entry includes own responsibilities in addition to the donor’s
  – with overlap, entry criteria are less demanding than exit criteria

Page 6: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


3. Judging the best date for handover

• Risk-benefit balance:
  – “Good Enough Quality” is a popular principle…
  – but different stakeholders in testing are likely to have different views of the benefits, and especially the risks
  – we don’t (yet?) have an objective view of the risks, in terms of the likely bug rate after go-live, despite the long history of…
• Reliability theories:
  – dominated the early literature on testing, but have faded
  – the mathematics drowns realism
  – Bayesian techniques seem most promising, but
  – the whole subject still has “holy grail” status for practical use

Page 7: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Judging the best date for handover (cont’d)

• Metrics – use quantitative for a while:
  – S-curves have empirical evidence and a quasi-theoretical basis
  – can be applied to both progress through tests and bug-fixing…
  – but subject to caveats: uniform test strategy, “tyranny of numbers”
  – approach target as a “glide path”…

[Charts: (1) # tests vs date, showing target tests run and actual tests run, with the actual split into pass and fail; (2) cumulative # bugs vs date, split into awaiting fix, resolved, deferred and closed]

Page 8: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Judging the best date for handover (cont’d)

• Metrics:
  – that “glide path” takes a long time to reach target, and is never totally smooth (also, the target is short of perfection to begin with)
  – to time the landing (handover), what will be important is the specific nature of the shortfalls, and their impacts
  – so are qualitative metrics better than quantitative?
  – I say use both: coarse-tune with quantitative, then fine-tune with qualitative (see the sketch after this list)
• So, we need mechanisms to interpret metrics and agree what to do:
  – (co-operation) Testing Review Boards to review progress and agree confidence
  – (pragmatism) Risk Mitigation Trees to analyse residual risk and agree viable compromises and mitigations
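To make the quantitative coarse-tuning concrete, here is a minimal sketch of projecting the “glide path” from recent test-passing rates. The daily counts, totals and dates are my illustrative assumptions, not figures from the presentation.

```python
from datetime import date, timedelta

# Hypothetical figures (not from the presentation): cumulative test passes and a target.
daily_passed = [12, 18, 25, 22, 15, 9, 6]   # new tests passed on each of the last 7 days
passed_before = 250                          # cumulative passes before those 7 days
target_total = 400                           # tests that must pass before handover
today = date(2004, 11, 19)
handover = date(2004, 11, 26)

# Coarse-tune with the quantitative data: project the "glide path" at the recent run-rate.
recent_rate = sum(daily_passed[-3:]) / 3     # average passes per day over the last 3 days
shortfall = target_total - (passed_before + sum(daily_passed))

if shortfall <= 0:
    print("Quantitative target already reached")
elif recent_rate == 0:
    print("No recent progress: the glide path has stalled")
else:
    projected = today + timedelta(days=round(shortfall / recent_rate))
    print(f"Shortfall {shortfall} tests at {recent_rate:.1f}/day")
    print(f"Projected landing {projected} vs target handover {handover}")
# Fine-tune qualitatively afterwards: the TRB asks *which* tests and bugs make up the shortfall.
```

The projection only times the landing; the TRB and RMT still have to judge the specific nature of the shortfall.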

Page 9: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


4. Co-operation: Testing Review Boards

• Advantages of TRBs over sign-off forms:
  – forum for discussing & resolving differences of view among stakeholders
  – force a decision on a given date
  – robust shared decisions

TRB attenders             | TRB 1   | TRB 2   | TRB 3      | TRB 4   | TRB 5      | TRB 6   | TRB 7
BUSINESS SPONSOR          | ------- | ------- | -------    | ------- | -------    | attend  | attend
IT OPERATIONS MANAGER     | ------- | ------- | -------    | ------- | attend     | attend  | attend
PROJECT MANAGER           | attend  | attend  | chair      | attend  | chair      | attend  | chair
TESTING / QUALITY MANAGER | chair   | chair   | facilitate | chair   | facilitate | chair   | facilitate
BUSINESS ARCHITECTS       | ------- | attend  | attend     | attend  | -------    | attend  | attend
TECHNICAL ARCHITECTS      | attend  | attend  | -------    | attend  | attend     | ------- | attend
DEVELOPMENT MANAGER       | attend  | attend  | -------    | attend  | -------    | ------- | attend

[Diagram: TRBs 1-7 positioned at the handovers along the lifecycle: Development & Unit Testing, Integration Testing, System Testing, UAT, OAT, Live]

Page 10: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Testing Review Boards (cont’d)

• Imagine the variation in caveats from these different viewpoints!

• Some may not sign in time, or at all.

[Diagram: stakeholder viewpoints plotted between competing goals (“quality” or low risk, low cost, scope of testing done, and speed of testing / timeliness of implementation): BUSINESS SPONSOR, USER, TESTERS, TEST MANAGER, DEVELOPMENT MANAGER, PROJECT MANAGER, BUSINESS ARCHITECT, IT OPERATIONS MANAGER, IT DIRECTOR]

Page 11: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Testing Review Boards (cont’d)

• Not anarchy: it should be pre-agreed who is the prime decision-maker at each meeting: the “casting vote”
• That should be the recipient (varies between different TRBs)
• So it incorporates the traditional acceptance hierarchy… (examples)

[Diagram: two example acceptance hierarchies. For entry to Operational (IT) Acceptance Testing, the IT OPERATIONS MANAGER agrees to receive; for go-live, the BUSINESS SPONSOR accepts. In each case the PROJECT MANAGER recommends, while BUSINESS ARCHITECTS, TECHNICAL ARCHITECTS, USERS / OPERATORS, the TEST MANAGER etc give information & opinions]

Page 12: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Testing Review Boards (cont’d)

• Some obstacles & pitfalls:
  – too many meetings already!
  – may be multiple sites, perhaps multiple companies
  – difficult to get aligned diary slots
  – input information volatile, right up to the meeting
  – timing of meeting is finely balanced; late disappointments threaten postponement
• Ways to cope:
  – are any existing meetings less important?
  – audio- & video-conferencing
  – electronic scheduling
  – deputisation allowed
  – clerical & tool assistance
  – take simultaneous snapshots, even if analysis lags
  – go-ahead with extra mitigation is better than postponement

• I’ve seen TRBs work. But they may become so popular that managers also request mini-TRBs, and even micro-TRBs!

Page 13: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


5. Pragmatism: Risk Mitigation Trees

[Diagram: the Tests and Bugs S-curves (planned coverage, passed / failed tests; cumulative bugs, fix & retest) against their handover targets feed a risk mitigation tree. Planned test coverage splits into coverage OK now vs any coverage shortfall (tests run vs not run); unresolved bugs are judged against the bug acceptance criteria (short of “zero-bug nirvana”) and split into hi-pri vs not hi-pri (lo-pri); the branches lead down to significant specific problems]

Page 14: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Risk Mitigation Trees (cont’d)

[Diagram: each specific problem is analysed for impacts (numbers affected: transactions, table entries, customers, internal users; frequency of use; business value threat; internal costs; benefits in having that function, even if faulty) and potential mitigations (manual workarounds, fix on fail, limit exposure, delay exposure; confidence in a fix coming in time), leading to the decisions needed: Go / no go]

Page 15: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Risk Mitigation Trees (cont’d)

• Advantages of RMTs:
  – they can make a previously daunting and amorphous set of test shortfalls, and bugs, seem manageable
  – not only seem manageable, but are manageable (“divide & conquer”)
  – the specific impacts of shortfalls, and ways to mitigate them, can be (perhaps surprisingly) easier to make decisions about than arbitrary numbers such as “<3 highs, <20 medium bugs tolerable”
  – the numbers of un-run tests and un-fixed bugs are viewed in the light of high (usually) numbers of successful tests and fixed bugs
• Risk & impact mitigation:
  – options include pre-empt/react, avoid/mitigate, transfer
  – examples (a minimal data-structure sketch follows this list):
    • some functions need not be used for several months
    • some failures would be in such small numbers and so easily seen that fix-on-fail is less risky than a software upgrade
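Although the presentation describes RMTs purely as a diagramming technique, the same split can be held as a simple data structure. The following is a minimal sketch under that assumption; the branch names, problems, impacts and mitigation options are all hypothetical.

```python
# Hypothetical residual-risk tree for one handover; branch names, problems, impacts
# and mitigation options are illustrative, not taken from the presentation.
rmt = {
    "unresolved bugs": {
        "hi-pri": [
            {"problem": "VAT rounding wrong on refunds",
             "impact": "about 40 customer transactions per month",
             "mitigations": ["manual workaround", "fix on fail"]},
        ],
        "not hi-pri": [],
    },
    "coverage shortfall": {
        "not run": [
            {"problem": "year-end archive job untested",
             "impact": "not needed until January",
             "mitigations": ["delay exposure", "test before first use"]},
        ],
    },
}

def decisions_needed(tree, path=()):
    """Walk the tree, yielding (branch path, problem, impact, mitigation options) per leaf."""
    if isinstance(tree, dict):
        for branch, subtree in tree.items():
            yield from decisions_needed(subtree, path + (branch,))
    else:  # a list of specific problems at a leaf
        for leaf in tree:
            yield path, leaf["problem"], leaf["impact"], leaf["mitigations"]

for path, problem, impact, options in decisions_needed(rmt):
    print(" / ".join(path), "->", problem, "| impact:", impact, "| options:", ", ".join(options))
```

Walking the tree produces exactly the list of specific decisions and mitigation options the TRB needs to discuss, rather than an arbitrary bug-count threshold.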

Page 16: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


6. Decision theory

• That’s the end of the straightforward practical advice: now for some interesting background (ideas for future thought and process improvement)

• Decision theory: “a body of knowledge and related analytical techniques of different degrees of formality designed to help a decision-maker choose among a set of alternatives in light of their possible consequences”

• Decisions typically made at TRBs: do we go live, or delay by a week, or delay by a month, or call in the lawyers?

• So can decision theory help us here?

Page 17: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Risk or uncertainty?

Different conditions under which decisions are made…

[Diagram: three decision conditions. Certainty: alternatives A, B, C each lead to a single known consequence (a, b, c). Risk: the alternatives lead to several possible consequences (a1, a2; b1…b4; c1…c3) whose probabilities p(a1), p(a2), …, p(c3) are known. Uncertainty: the same consequences, but their probabilities are unknown (?, ?, …)]

Page 18: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Uncertainty-Based Testing!

• So all this we’ve been saying about Risk-Based Testing: it should properly be called Uncertainty-Based Testing

• We do not know the probability of each possible consequence

• We don’t even have a fixed set of alternatives: could be various combinations of delay, overtime, descoping etc

• But suppose we could simplify (example as earlier)…

[Diagram: the TRB decision recast with alternatives A, B, C and their possible consequences, probabilities unknown]

Simplifying framework… (a minimal numerical sketch of this framework follows)

Alternatives:
A - go live now
B - delay 1 week
C - delay 1 month

Consequences (additive):
1 - cost of future live bugs
2 - benefit of going live
3 - cost of longer project
4 - penalty payment costs

Probabilities:
p(c4) 100%; p(b3) & p(c3) 100%;
p(a2), p(b2) & p(c2) uncertain;
p(a1), p(b1) & p(c1) unknown
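A minimal numerical sketch of this framework (all figures are my illustrative assumptions, not from the presentation): the project-cost and penalty components are effectively known, while the cost of future live bugs can only be bounded, so each alternative yields a range rather than a single number.

```python
# All figures below are illustrative assumptions, not from the presentation.
# Consequences are additive: net = benefit of going live - cost of longer project
#                                  - penalty payments - cost of future live bugs.
alternatives = {
    "A: go live now":   dict(benefit=500_000, project_cost=0,      penalty=0,      bug_cost=(50_000, 300_000)),
    "B: delay 1 week":  dict(benefit=490_000, project_cost=20_000, penalty=0,      bug_cost=(40_000, 200_000)),
    "C: delay 1 month": dict(benefit=450_000, project_cost=80_000, penalty=25_000, bug_cost=(20_000, 100_000)),
}

for name, c in alternatives.items():
    low_bugs, high_bugs = c["bug_cost"]          # the unknown component, only bounded
    best = c["benefit"] - c["project_cost"] - c["penalty"] - low_bugs
    worst = c["benefit"] - c["project_cost"] - c["penalty"] - high_bugs
    print(f"{name}: net outcome between {worst:,} and {best:,}")
# The spread comes from the two core uncertainties the next slide names: the benefit
# of a given go-live date and the cost of the bugs which will affect live operations.
```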

Page 19: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Uncertainty-Based Testing (cont’d)

• The “simplifying” framework has left two core uncertainties:
  – what will be the benefits of going live at a particular date?
  – what will be the costs of the bugs which will affect live operations?
• Decision theory offers two main approaches to uncertainty (the second is sketched after this list):
  – reduce the uncertainty to mere risk by deeming the unknown probabilities “known”, using subjective estimates from experts and/or previous experience
  – look at the choice criteria using Game Theory (about which, more later)
• Types of uncertainty:
  – philosophical analysis (see paper)
  – related to testing…
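As a taster of the second approach, here is a sketch of probability-free choice criteria (maximin and minimax regret, the classical criteria for decisions against “nature”); the worst/best figures are carried over from the illustrative sketch above and remain assumptions.

```python
# Probability-free choice criteria ("games against nature"), applied to the
# worst/best net outcomes from the previous sketch (still illustrative figures).
outcomes = {                       # alternative: (worst-case, best-case) net value
    "A: go live now":   (200_000, 450_000),
    "B: delay 1 week":  (270_000, 430_000),
    "C: delay 1 month": (245_000, 325_000),
}

# Maximin: choose the alternative whose worst case is least bad (the pessimist's rule).
maximin_choice = max(outcomes, key=lambda a: outcomes[a][0])

# Minimax regret: choose the alternative that minimises the regret of missing the best
# achievable result in either the worst-case or best-case scenario.
best_of_worst = max(worst for worst, _ in outcomes.values())
best_of_best = max(best for _, best in outcomes.values())
regret = {a: max(best_of_worst - worst, best_of_best - best)
          for a, (worst, best) in outcomes.items()}
minimax_regret_choice = min(regret, key=regret.get)

print("Maximin choice:", maximin_choice)
print("Minimax-regret choice:", minimax_regret_choice, regret)
```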

Page 20: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Types of uncertainty in testing

• “Knowledge incompleteness due to inherent deficiencies with acquired knowledge”:
  – we know what we’ve tested, but:
    • right things? Any low-payback waste?
  – we know bugs found, but:
    • how many faults remain? Which will trigger failures? Impact?
  – we know bugs fixed, but:
    • how good was our regression testing? Impact of knock-ons?
• “Ambiguity, approximations, randomness & sampling”:
  – what is signal, what is noise; what trends are statistically significant, eg if high-impact bugs out of testing over the last 3 weeks numbered 1, 0 then 2, what does that tell us? (see the small sketch after this list)

Page 21: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Types of uncertainty in testing (cont’d)

Ideally we’d understand defect-fault-failure chains for all of our existing failures, and prevent/mitigate future mistakes…

[Diagram: the chain Mistake → Defect → Fault → Failure, with RISK at each link and UNCERTAINTY around failures in live use:
– Mistake: a human action that produces an incorrect result (eg in spec-writing, program-coding); a direct programming mistake can introduce a fault without a specification defect
– Defect: incorrect results in specifications (note: this fits its usage in inspections)
– Fault: an incorrect step, process or data definition in a computer program (ie executable software)
– Failure: an incorrect result; Error: the amount by which the result is incorrect
– Anomaly: an unexpected result during testing; could be a genuine failure, a false alarm, a Change Request, or a testware mistake, with the risk of missing it and the risk of mis-interpreting it]

Page 22: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Can more trees help?

• Event trees & fault trees can be used to investigate defect-fault-failure relationships, and in theory the risk of future live bugs is the sum of a set of micro-risks (but this kind of analysis is easier after the event than before!)
• Attempts have been made to use decision trees (not the same as RMTs) to assess risk (ref. Shari Lawrence Pfleeger). But:
  – risks have a probability distribution, not a single definite probability
  – quantitative risk assessment can have misleading “precision” and can greatly differ from people’s perceptions
  – is lo-probab hi-impact really the same as hi-probab lo-impact? (illustrated below)
  – if not, how to tweak the factors?
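A small illustration of that point, using my own made-up figures: two risks with the same expected cost behave very differently once the distribution, rather than the single number, is examined.

```python
import random

random.seed(1)

# Two made-up risks with the same expected cost: low-probability/high-impact versus
# high-probability/low-impact. The single "expected cost" number hides the difference.
def simulate(prob, impact, trials=100_000):
    costs = sorted(impact if random.random() < prob else 0 for _ in range(trials))
    mean = sum(costs) / trials
    p995 = costs[int(0.995 * trials)]            # 99.5th-percentile cost
    return mean, p995

for label, prob, impact in [("lo-prob / hi-impact", 0.01, 1_000_000),
                            ("hi-prob / lo-impact", 0.50, 20_000)]:
    mean, p995 = simulate(prob, impact)
    print(f"{label}: expected cost ~{mean:,.0f}, 99.5th-percentile cost {p995:,.0f}")
# Both expected costs are ~10,000, but one risk can lose a million in a single event,
# which is why quantitative "precision" can mislead and differ from people's perceptions.
```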

Page 23: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


What about mathematics?

• Decision theory advises us to use utility functions
• Utility may be defined as “the real or fancied ability of a good or service to satisfy a human want”
• This sounds like acceptance testing: working backwards, each handover decision is part of a sequence of decisions
• Each decision affects the next: eg hand over too soon from development, and the resultant disruption may exceed the time saved
• Simple decision theory is inadequate (deterministic, Markov)
• Bayesian networks look promising, but are difficult for non-mathematicians and need much computing power (a toy Bayesian update is sketched below)
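Not a Bayesian network, but a toy single-parameter Bayesian update (all numbers are my illustration) shows the flavour of revising a belief about post-go-live failure probability from test evidence:

```python
from math import isclose

# Not a Bayesian network: a toy single-parameter Bayesian update, with illustrative
# numbers, showing how evidence revises a belief about the live failure probability.
a, b = 2, 98                         # Beta(2, 98) prior: roughly 1 failure in 50 transactions

transactions_run, failures_seen = 400, 3    # evidence from the final test cycle
                                            # (assumed representative of live usage)

# Beta-Binomial conjugate update: posterior is Beta(a + failures, b + successes).
a_post = a + failures_seen
b_post = b + (transactions_run - failures_seen)

prior_mean = a / (a + b)
posterior_mean = a_post / (a_post + b_post)
print(f"Prior failure probability     ~ {prior_mean:.3f}")
print(f"Posterior failure probability ~ {posterior_mean:.3f}")
assert isclose(posterior_mean, 5 / 500)     # with these numbers: (2+3)/(2+98+400)
# A real Bayesian network chains many such beliefs (coverage, fix quality, usage
# profiles), which is where the mathematics and the computing demands grow.
```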

Page 24: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


7. Game Theory

• If we can’t (yet?) use reliability theory / utility functions / Bayesian networks to assess the risk-benefit balance at handovers, let’s fall back on the decision theory alternative…
• Game theory:
  – big in social sciences, and well-established in business strategy
  – has already been invoked for software development as a whole: a resource-limited, co-operative, finite, goal-seeking game of invention & communication (Alistair Cockburn)
  – there is a research effort to use it in automatic test case generation (Microsoft)
• The relevance to Testing Review Boards is that most or all of the participants are playing a game called Career: the project, and the testing phases, are just subgames

Page 25: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Snakes & Ladders?

USER:
↑ Will the system make our work more fulfilling & value-adding?
↓ Will live failures give us extra frustrating work?

DEVELOPER:
↑ Will this project help me get a promotion / an interesting new project?
↓ Will someone find faults in my modules after go-live? Will I get recalled?

PROJECT MANAGER:
↑ Will I get a reputation as a skilled risk-balancer?
↓ Will I get a reputation as a deadline-misser / budget-buster?

BUSINESS SPONSOR:
↑ Will this project enhance my career? Will I be seen as a brave decision-taker?
↓ Will I lose respect? Will this project damage the bottom line?

Page 26: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


n-person Game Theory

• Game Theory: a maths-based framework for competing individuals or groups to each try to maximise their utility through a series of moves (sequential or simultaneous)
• Competitors may or may not know the moves and utilities of other competitors
• In a TRB, there are more than two “competitors”, who may be arguing for different outcomes and have an imperfect view of utilities
• Suggests we need the most complex and abstract part of game theory… (a sketch of the two-competitor normal form in code follows the diagram below)

[Diagram: three representations of a game. (i) Extensive form (moves): COMPETITOR ALPHA chooses αA or αB, then COMPETITOR BETA chooses βA or βB. (ii) Normal form (strategies): a 2×2 matrix of payoffs in units of utility to each competitor, eg (10, 4), (1, 5), (9, 9), (0, 3). (iii) Characteristic function form (payoffs): for >2 competitors, v(x)=…, a multi-dimensional space which may involve coalitions]

Material combined from Game theory – a critical introduction, Hargreaves Heap & Varoufakis (Routledge 1995); and N-person game theory, Rapoport (Dover 1970-2001)
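A sketch of the normal form in code. The utility pairs (10,4), (1,5), (9,9) and (0,3) come from the slide’s illustration, but their placement in the matrix below is my assumption, so treat the layout and the resulting equilibrium as illustrative only.

```python
from itertools import product

# The utility pairs (10,4), (1,5), (9,9) and (0,3) are the slide's illustration, but
# their placement in this matrix is my assumption; treat the layout as illustrative.
payoffs = {                      # (alpha's strategy, beta's strategy): (u_alpha, u_beta)
    ("alphaA", "betaA"): (10, 4),
    ("alphaA", "betaB"): (1, 5),
    ("alphaB", "betaA"): (9, 9),
    ("alphaB", "betaB"): (0, 3),
}
alpha_moves, beta_moves = ["alphaA", "alphaB"], ["betaA", "betaB"]

def is_equilibrium(a, b):
    """Neither competitor can gain by unilaterally switching their own strategy."""
    u_a, u_b = payoffs[(a, b)]
    alpha_content = all(payoffs[(a2, b)][0] <= u_a for a2 in alpha_moves)
    beta_content = all(payoffs[(a, b2)][1] <= u_b for b2 in beta_moves)
    return alpha_content and beta_content

equilibria = [cell for cell in product(alpha_moves, beta_moves) if is_equilibrium(*cell)]
print("Pure-strategy equilibria:", equilibria)
# With this layout the only stable cell is (alphaA, betaB) -> (1, 5), even though
# (alphaB, betaA) -> (9, 9) is better for both: unilateral moves pull the players away
# from it, which is why the next slide turns to coalitions and negotiation.
```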

Page 27: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


But for now: a game of three halves

In effect, we can consider the TRB as a two-competitor game, having also a fore-game and an after-game…

[Diagram: the TRB as a two-coalition game. Fore-game: participants form their preferences (A: go, B: no-go) based on their viewpoints (“quality” or low risk, low cost, scope of testing done, speed of testing / timeliness of implementation, as on the earlier stakeholder diagram), then real or virtual “hawk” and “dove” coalitions form. Main game: a 2×2 matrix of COMPETITOR COALITION ALPHA (“hawks”) vs COMPETITOR COALITION BETA (“doves”) each choosing go or no-go, with utility U = u(a1)+u(a2) or u(b1)+u(b2), from the ALPHA or BETA view; both choosing go is the rubber-stamp case, both choosing no-go is the shared-fear case, and the usual case is a split, which cannot persist and forces negotiation (one combination being illogical & impossible). After-game: if Go, mitigating actions (as from the RMT); if No-go, delay how long? add any scope?]

Page 28: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


8. Systems theory & cybernetics

[Diagram: a map placing TRBs & RMTs within decision-making and the systems sciences. Philosophy (“thoughts”: logic, metaphysics, epistemology) and the natural and social sciences (“things”) feed, via mathematics, languages and general systems thinking, into the systems sciences: systems theory (structure), cybernetics (function; communication and control), complexity theory, self-organising systems etc. Within this, Decision Theory frames the decision-maker’s objectives, decision constraints, alternatives, consequences and probabilities; the conditions of certainty / risk / uncertainty are handled by value functions / utility functions / game theory; TRBs & RMTs provide the practical decision support]

Page 29: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


9. Conclusions & way forward

• Until reliability models get cleverer / simpler, and more metrics are collected, we can’t predict bug rates after go-live
• So the main alternatives for managing handovers still seem to be:
  – rigid criteria
  – these Testing Review Board & Risk Mitigation Tree techniques
  – something even more agile
• To progress to something more scientific, we may need to confront the following complexity…

Page 30: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Potential way forward (cont’d): benefits

Extended from material by DeMarco & Lister, Waltzing with bears (Dorset House 2003)

[Charts: (1) relative probability of the actual go-live date vs date (1 Jan … 31 Dec), with earliest, most likely and latest dates and the 50% point as the area under the curve; (2) estimated gross benefit per day of go-live on each date; (3) the resulting estimated total gross benefit of go-live on each date, with the relative probability of that benefit being achieved (min, most likely, max)]

Page 31: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Potential way forward (cont’d): risk-costs

Extended from material by DeMarco & Lister, Waltzing with bears (Dorset House 2003)

[Charts: as for benefits: (1) relative probability of the actual go-live date; (2) estimated risk-cost per day of go-live on each date; (3) the resulting estimated total risk-cost of go-live on each date, with the relative probability of that risk-cost being incurred (min, most likely, max)]

• even more difficult to estimate than benefits?
• much more “spiky”!
• (note: excludes fixed costs)

(A minimal Monte Carlo sketch combining the benefit and risk-cost views follows.)
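A minimal Monte Carlo sketch combining the two views. The triangular go-live distribution, benefit per day and risk-cost model are all my illustrative assumptions in the spirit of the DeMarco & Lister material, not figures from the presentation.

```python
import random

random.seed(7)

# All distributions and figures are illustrative assumptions in the spirit of the
# DeMarco & Lister material; nothing here comes from the presentation itself.
def sample_go_live_day():
    """Go-live date in days after 1 Jan: earliest 90, most likely 120, latest 365."""
    return random.triangular(90, 365, 120)

def net_outcome(day):
    benefit_per_day = 3_000                            # gross benefit per day once live
    benefit = benefit_per_day * (365 - day)            # benefit accrued this year
    # Risk-cost model: an earlier go-live means more residual bugs reach live operation.
    risk_cost = 200_000 * max(0.0, (200 - day) / 200)
    return benefit - risk_cost

outcomes = sorted(net_outcome(sample_go_live_day()) for _ in range(20_000))
median = outcomes[len(outcomes) // 2]
p10, p90 = outcomes[len(outcomes) // 10], outcomes[-(len(outcomes) // 10)]
print(f"Median net outcome {median:,.0f}; 10th-90th percentile {p10:,.0f} to {p90:,.0f}")
# The spread, not a single expected value, is what the TRB should weigh at go-live.
```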

Page 32: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Summary

[Diagram: within the systems sciences, Decision Theory (value functions / utility functions / game theory) frames the Testing Review Boards. S-curves and Risk Mitigation Trees give decision support on the risks & benefits visible now: 1 - bugs visible now, 2 - stated target benefits, 3 - cost of longer project, 4 - penalty payment costs. Forecasting the remaining risks & benefits (1 - cost of future live bugs, 2 - benefit of going live) would need usable reliability theory and Bayesian networks]

Page 33: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


References & acknowledgements

• Main references:
  – Beizer, Boris: Software System Testing and Quality Assurance
  – Pirsig, Robert M: Lila: An Inquiry into Morals
  – Bach, James: Good Enough Quality etc
  – DeMarco, Tom & Lister, Timothy: Waltzing with Bears
  – Principia Cybernetica Web
  – Rapoport, Anatol: N-Person Game Theory
  – Kaner, Cem: Software Testing as a Social Science
  – (for others, see the associated paper in the conference proceedings)

• Acknowledgements:
  – to Pat, Rob & Rupert for their very considerable input to the Risk Mitigation Trees method
  – to all the team on that programme and at that client

Page 34: Risk Mitigation Trees - Review test handovers with stakeholders (2004)


Contact details

Neil Thompson

[email protected]

www.TiSCL.com

Questions?

23 Oast House Crescent
Farnham, Surrey, England
GU9 0NP, United Kingdom

phone +44 (0)7000 NeilTh (634584) or +44 (0)7710 305907