IWSM 2014: Consequences of Mispredictions of Software Reliability
Rakesh Rana, University of Gothenburg, Sweden
[email protected]


Page 1

Consequences of Mispredictions of Software Reliability

Rakesh Rana
University of Gothenburg, Sweden
[email protected]

Page 2

Predicting Software Reliability

Tracking and predicting software quality is a challenge.

Software defects are an observable and useful indicator for tracking and forecasting software reliability.

Software reliability measures are primarily used for [1]:
• Planning and controlling the allocation of testing resources, and
• Evaluating maturity or release readiness.

[1] C.-Y. Huang, M. R. Lyu, and S.-Y. Kuo, “A unified scheme of some nonhomogeneous Poisson process models for software reliability estimation,” IEEE Trans. Softw. Eng., vol. 29, no. 3, pp. 261–269, 2003.

Page 3

Predicting Software Reliability

Software Reliability (SR) is (A) the probability that software will not cause the failure of a system for a specified time under specified conditions, or (B) the ability of a program to perform a required function under stated conditions for a stated period of time.

Software Reliability Model (SRM) is a mathematical expression that specifies the general form of the software failure process as a function of factors such as fault introduction, fault removal, and the operational environment.

IEEE 1633: Recommended Practice on Software Reliability
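IEEE 1633 surveys several such models. As a concrete illustration (my example, not one taken from the presentation), the widely used Goel-Okumoto NHPP model has the mean value function m(t) = a(1 - exp(-b*t)), the expected cumulative number of defects found by time t:

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative number of defects found by time t.

    a -- total expected number of defects (the asymptote)
    b -- per-defect detection rate
    """
    return a * (1.0 - math.exp(-b * t))

# Hypothetical parameters: 100 total defects, detection rate 0.1 per week
for week in (0, 10, 30):
    print(week, round(goel_okumoto(week, a=100, b=0.1), 1))
```

With these (assumed) parameters the curve starts at zero, reaches about 63% of the defects after 10 weeks, and flattens toward the asymptote a = 100.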

Page 4

SRGMs: Software Reliability Growth Models

Image: http://flylib.com/books/1/428/1/html/2/files/10fig07.gif
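Growth curves like the one in the figure are obtained by fitting an SRGM to cumulative defect-inflow data. A minimal sketch, assuming the Goel-Okumoto form and hypothetical weekly counts, and using a crude grid search in place of a proper nonlinear least-squares routine:

```python
import math

def go_model(t, a, b):
    """Goel-Okumoto mean value function."""
    return a * (1.0 - math.exp(-b * t))

def fit_go(weeks, cum_defects):
    """Grid-search least-squares fit; returns (asymptote, detection rate)."""
    best = (float("inf"), None, None)
    for a in range(max(cum_defects), 3 * max(cum_defects)):
        for i in range(1, 500):
            b = i / 1000.0
            sse = sum((go_model(t, a, b) - y) ** 2
                      for t, y in zip(weeks, cum_defects))
            if sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical weekly cumulative defect counts
weeks = [1, 2, 3, 4, 5, 6]
cum = [18, 33, 45, 55, 63, 69]
a_hat, b_hat = fit_go(weeks, cum)
print("estimated total defects (asymptote):", a_hat)
```

The estimated asymptote is exactly the quantity whose misprediction the following slides discuss.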

Page 5

Research Question

Given the software quality growth prediction curve, what are the consequences of mispredicting the total number of defects and release readiness?

We explicitly recognize two common axes of prediction accuracy: (i) the prediction of the asymptote (the total number of defects), and (ii) the prediction of when the total number of defects is discovered.
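Assuming a concave model such as Goel-Okumoto, both axes have simple closed forms: the asymptote is the parameter a itself, and the time to discover a fraction x of all defects is t_x = -ln(1 - x)/b. A small sketch with a hypothetical detection rate:

```python
import math

def time_to_fraction(x, b):
    """Time at which fraction x of all defects is expected to be found,
    assuming m(t) = a * (1 - exp(-b*t)); independent of the asymptote a."""
    return -math.log(1.0 - x) / b

# With an assumed detection rate b = 0.1 per week, the time to find
# 95% of all defects:
print(round(time_to_fraction(0.95, 0.1), 1), "weeks")
```

An error in the estimate of b thus translates directly into a misprediction along the second axis (release readiness), even when the asymptote is estimated correctly.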

Page 6

Mispredicting the asymptote

Over-predictions:
• Too high expectations: more pressure on the testing team.
• Assumption that testing is ineffective.
• Additional (unnecessary) cost of test analyses in search of new test areas.
• Risk of postponing the release.
• Risk of lost time to market.
• Risk of wasted testing costs.
• Risk of unnecessary RCAs to find areas that are not tested enough.

Page 7

Mispredicting the asymptote

Under-predictions:
• Releasing the product with defects.
• Additional costs for post-release defect removal activities and patches.
• Defects that manifest as integration problems requiring quick fixes.
• De-prioritizing testing effort at early stages, and thus finding a large number of late (and thus costly) defects during system/acceptance testing.

Page 8

Mispredicting release readiness

Early-predictions:
• Releasing the software with defects.
• Higher cost of corrective maintenance of the product.
• Postponing the release (if the mispredictions are discovered before the release).

Page 9

Mispredicting release readiness

Late-predictions:
• Unnecessary additional testing resources to get back on track.
• Postponing the release in expectation of more defects to come.
• Additional costs of test analysis to increase the speed and effectiveness of testing.

Page 10

Mispredicting the shape of the curve

Actual shape vs. expected shape:

Actual shape: Concave
• Expected S-shaped: over-prediction of the total number of defects.
• Expected Convex: over-prediction of the total number of defects.

Actual shape: S-shaped
• Expected Concave: release readiness is predicted too early; X% of found defects is predicted earlier than expected.
• Expected Convex: over-prediction of the total number of defects.

Actual shape: Convex
• Expected Concave: release readiness is predicted too early; X% of found defects is predicted earlier than expected.
• Expected S-shaped: too many resources allocated for late testing.
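The three shapes of the defect-inflow curve can be illustrated with three mean value functions. The concave and S-shaped forms below are standard SRGMs (Goel-Okumoto and the delayed S-shaped model); the convex curve is a simple quadratic stand-in of my own choosing, not a model from the presentation:

```python
import math

def concave(t, a, b):
    """Goel-Okumoto: most defects found early, curve flattens."""
    return a * (1 - math.exp(-b * t))

def s_shaped(t, a, b):
    """Delayed S-shaped model: slow start, then a ramp-up."""
    return a * (1 - (1 + b * t) * math.exp(-b * t))

def convex(t, a, T):
    """Quadratic stand-in for a convex inflow: most defects found late."""
    return a * (t / T) ** 2

# Hypothetical parameters: 100 total defects over a 20-week project
a, b, T = 100, 0.2, 20
for t in (5, 10, 20):
    print(t, round(concave(t, a, b)), round(s_shaped(t, a, b)),
          round(convex(t, a, T)))
```

At week 5 the three curves already diverge sharply (roughly 63, 26, and 6 defects found, respectively), which is why expecting the wrong shape distorts both the asymptote estimate and the release-readiness estimate.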

Page 11

Industrial Validation

Application domain: Automotive
• Software development process: V-shaped software development, mostly using sub-suppliers for implementation.
• Current methods for software defect prediction: focus on status visualization and analogy-based prediction.

Application domain: Telecom
• Software development process: Agile development, mostly in-house.
• Current methods for software defect prediction: various modes of presenting current status and prediction methods.

Page 12

VCG, Volvo Cars Group

• A number of different metrics are collected and monitored continuously.

• Forecasts are used to track release readiness.

• Root cause analysis focuses on what can be done now to get back on track.

• Meeting the release dates is highly important; more resources get mobilized and allocated where needed.

• Under-predictions: a task force is set up (resource mobilization). Not seen as a major problem if limited to a few ECUs; a potential problem if widespread across the platform (large project).

• Over-predictions: not seen as a critical problem.

Page 13

VCG, Volvo Cars Group

• Early-predictions: no impact if the project is small, as the risk can easily be managed at any stage of the project.

• For larger platform projects, the forecasts are re-checked consecutively over a period of time and cross-validated by different expert opinions before resources are planned according to the forecasts.

• Late-predictions: a strategy is used to find the areas affected by the late-predictions.

• Test resources would be balanced in light of the new information, with the aim of meeting the quality requirements by the release date.

Page 14

Ericsson

The impact of mispredictions has two dimensions: (i) the metrics team which delivers the predictions, and (ii) the project where the predictions are used.

For the metrics team:
• All mispredictions make the team lose trust from the organization.
• Once the organization acts upon wrong predictions, the team loses the ability to influence: the next time, the organization will need a second opinion before acting.
• This increases the cost of predictions in the long run.

Page 15

Ericsson

For the projects:

Over-predictions:
• Strengthening and reallocation of resources; if this continues over a long period of time, it impacts the release date negatively.

Under-predictions:
• Negative impact on the release date.
• Ordered overtime/extra resources, once the organization finds that the reliability was under-predicted.
• Reallocation of resources, once the organization finds that the reliability was under-predicted.

Page 16

Conclusions

Research objectives

Given the software quality growth prediction curve, what are the consequences of mispredicting the total number of defects and release readiness?

Strategies to avoid mispredictions:

• Predict often.
• Experiment with three types of curves.
• Predict the shape of the defect inflow using available data.
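The strategy of experimenting with three types of curves can be sketched as fitting one candidate model per shape and keeping the best fit. The grid search, the quadratic convex stand-in, and the data below are illustrative assumptions, not the method used in the study:

```python
import math

# One candidate mean value function per shape
MODELS = {
    "concave":  lambda t, a, b: a * (1 - math.exp(-b * t)),
    "s_shaped": lambda t, a, b: a * (1 - (1 + b * t) * math.exp(-b * t)),
    # crude convex stand-in: quadratic growth, capped at the asymptote
    "convex":   lambda t, a, b: min(a, a * (b * t) ** 2),
}

def best_shape(weeks, cum_defects):
    """Grid-fit each candidate shape; return the name of the best fit."""
    scores = {}
    for name, m in MODELS.items():
        scores[name] = min(
            sum((m(t, a, b / 100.0) - y) ** 2
                for t, y in zip(weeks, cum_defects))
            for a in range(max(cum_defects), 2 * max(cum_defects) + 1)
            for b in range(1, 100))
    return min(scores, key=scores.get)

# Hypothetical cumulative defect inflow with a slow start
weeks = list(range(1, 9))
cum = [4, 12, 23, 34, 44, 54, 62, 69]
print(best_shape(weeks, cum))
```

Re-running such a selection as new weekly data arrives ("predict often") reveals early whether the assumed shape, and hence the asymptote and release-readiness estimates, still holds.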
