
A Systematic Approach for Test Effort Estimation Model Selection
Ulrike Dowie, Lars Karg
SAP AG

Software & Systems Quality Conferences, 25 April 2007

Agenda

- Motivation and Aims of Our Approach
- Criteria to Examine Existing Models
- Selection Approach
- A Partly Fictitious Case Study
- Evaluation


Motivation and Aims

- Deadlines and budgets are missed
- "Guesstimation" is apparently inadequate for test planning
- Models and methods for test effort estimation exist, but ...

Which one to choose?

Aims: develop a systematic selection approach
- tailored to domain, organization, and project
- to facilitate comparison between existing models or methods
- to reduce selection effort after first use of the approach
- to select systematically and objectively


Conceptual Framework

[Diagram relating Test Goals, Test Parameters, Test Restrictions, and the Efficiency and Effects of Test Techniques via the relations "affect", "result in", "influence", "support assessment and selection", and "support and help to reach": goals, parameters, and restrictions shape the efficiency and effects of test techniques, which in turn support the assessment and selection of techniques and help to reach the test goals.]


Criteria to Examine and Compare Existing Models (1/2)

1. Requirements on the model (encoded as a record type in the sketch below)
- Falsifiability: assumptions and hypotheses can be refuted by experience
- Objectivity: the model is based on a formal process; different persons arrive at the same results
- Model maturity: number of practical applications, diversity of application (e.g., number of different organizations, different domains)
- Usage experience: user satisfaction, model use continued
- Usability: comprehensibility, adaptability, applicability
- Project control: feedback loop, alternative actions suggested?
- Portability: programming language independence
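These requirement criteria can be recorded per candidate model, for example as a simple record type. The following is a minimal sketch assuming a Python encoding; the field names and value scales are our own illustration, not part of the original approach.

```python
from dataclasses import dataclass

# Minimal sketch of the requirement criteria as a record type.
# Field names and value scales are illustrative assumptions.
@dataclass
class ModelRequirements:
    name: str
    falsifiable: str            # "yes" / "partly" / "no"
    objective: bool             # formal process, reproducible results
    maturity: int               # extent of documented practical application
    usage_experience: str       # "none" / "low" / "medium" / "high"
    understandable: bool        # estimation results are traceable
    adaptable: str              # "yes" / "partly" / "no"
    tool_support: bool
    language_independent: bool
    feedback_loop: bool         # project control: feedback loop available
    alternative_measures: bool  # alternative actions suggested
```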


Criteria to Examine and Compare Existing Models (2/2)

2. Match between model and organizational context/project
- Goals: Is a goal-oriented process modeled? Can the model user choose among different goals?
- Restrictions and parameters: process model, programming language, available tool support, historical data quality and quantity (metrics, number of projects, etc.), human resources (statistical knowledge, experience, etc.), test characteristics (comparability of test cases, structural versus functional testing, etc.)
- Assumptions concerning effects: Are the assumptions valid in the organizational/project context? Are results traceable back to causes?
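Checking this match amounts to a conjunction of checks: a model fits only if every assumption it makes holds in the context. A minimal sketch follows; all attribute names (test_cases_comparable, function_points_available) are hypothetical examples, not prescribed by the approach.

```python
# Sketch: a model matches a context only if every assumption it makes
# holds there. The attribute names are hypothetical examples.
def matches_context(model_assumptions: dict, context: dict) -> bool:
    return all(context.get(key) == required
               for key, required in model_assumptions.items())

# Example: a model assuming comparable test cases and available function
# point counts, checked against a context that offers neither.
assumptions = {"test_cases_comparable": True, "function_points_available": True}
context = {"test_cases_comparable": False, "function_points_available": False}
print(matches_context(assumptions, context))  # False -> model does not match
```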


Selection Approach

[Flow diagram] Preparation yields the set of candidate Test Effort Estimation Models (TEEMs). Applying domain restrictions narrows them to the TEEMs applicable to the domain; organizational goals and restrictions narrow them to the TEEMs applicable to the organization; project goals, restrictions, and parameters finally yield the TEEMs applicable to the project, together with a ranking.
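In code, this flow is three successive filters followed by a ranking step. The sketch below is illustrative only: the predicates and the scoring function are placeholders to be supplied per domain, organization, and project.

```python
# Sketch of the staged selection: candidates are narrowed by domain, then
# organization, then project, and the survivors are ranked (best first).
# Predicates and score are placeholders to be supplied per context.
def select_teems(candidates, domain_ok, org_ok, project_ok, score):
    in_domain = [m for m in candidates if domain_ok(m)]
    in_org = [m for m in in_domain if org_ok(m)]
    in_project = [m for m in in_org if project_ok(m)]
    return sorted(in_project, key=score, reverse=True)

# Usage with trivial placeholder predicates:
models = ["model A", "model B"]
ranked = select_teems(models, lambda m: True, lambda m: True,
                      lambda m: True, score=len)
```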


Case Study (1/4)

Preparation:
- Search for Test Effort Estimation Models (TEEMs)
- Analysis of TEEMs according to criteria (see next slide)
- Determination of critical criteria (see the sketch after this list):
  - Falsifiable (assumptions can be checked against real conditions)
  - Usage experience: must be positive (model is still being used)
  - Model maturity: practically applied
  - Understandable: estimation results must be traceable
  - Adaptability: parameter determination must be clear
  - Language independent
- Feedback loop to control test efforts: nice to have
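The critical criteria act as hard eliminators, while the nice-to-have feedback loop only orders the survivors. A minimal sketch, assuming dictionary keys and thresholds of our own choosing that mirror the bullets above:

```python
# Sketch: critical criteria eliminate models outright; the "nice to have"
# feedback loop only ranks the survivors. Keys and thresholds are
# illustrative assumptions mirroring the bullets above.
def passes_critical(m: dict) -> bool:
    return (m["falsifiable"] in ("partly", "yes")
            and m["usage_experience"] in ("medium", "high")  # still in use
            and m["maturity"] >= 1                           # practically applied
            and m["understandable"]                          # results traceable
            and m["adaptable"] != "no"                       # parameters clear
            and m["language_independent"])

def rank_key(m: dict) -> int:
    return 1 if m.get("feedback_loop") else 0  # nice to have ranks higher
```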


Case Study (2/4)

| Model               | Falsifiable | Objective | Usage Experience | Model Maturity | Understandable | Adaptability | Tool Support | Language Independent | Feedback Loop | Alternative Measures Given |
|---------------------|-------------|-----------|------------------|----------------|----------------|--------------|--------------|----------------------|---------------|----------------------------|
| Calzolari (1998)    | no          | yes       | low              | 3              | yes            | no           | yes          | yes                  | no            | no                         |
| Cangussu (2002)     | no          | no        | low              | 4              | no             | partly       | no           | yes                  | yes           | yes                        |
| Nageswaran (2001)   | no          | no        | medium           | 3              | yes            | partly       | yes          | yes                  | no            | no                         |
| Pensyl (2002)       | partly      | no        | low              | 2              | yes            | partly       | no           | yes                  | no            | no                         |
| Singpurwalla (1991) | partly      | yes       | none             | 1              | yes            | no           | no           | yes                  | no            | no                         |
| Sneed (2006)        | no          | no        | medium           | 2              | yes            | no           | no           | no                   | no            | no                         |

(Understandable, Adaptability, and Tool Support together form the Usability criterion; Feedback Loop and Alternative Measures Given form the Project Control criterion.)
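For illustration, this comparison can be mechanized: the table rows become records and the critical criteria become a filter. In the sketch below the values are transcribed from the table above, while the relaxation of falsifiability (mirroring slide 4/4, where no model is entirely falsifiable) and the "positive usage experience" threshold are our assumptions.

```python
# The comparison table encoded for filtering. Values transcribed from the
# table; the elimination rule below is an illustrative assumption.
teems = [
    {"name": "Calzolari (1998)",    "falsifiable": "no",     "usage": "low",
     "maturity": 3, "understandable": True,  "adaptable": "no",     "lang_indep": True},
    {"name": "Cangussu (2002)",     "falsifiable": "no",     "usage": "low",
     "maturity": 4, "understandable": False, "adaptable": "partly", "lang_indep": True},
    {"name": "Nageswaran (2001)",   "falsifiable": "no",     "usage": "medium",
     "maturity": 3, "understandable": True,  "adaptable": "partly", "lang_indep": True},
    {"name": "Pensyl (2002)",       "falsifiable": "partly", "usage": "low",
     "maturity": 2, "understandable": True,  "adaptable": "partly", "lang_indep": True},
    {"name": "Singpurwalla (1991)", "falsifiable": "partly", "usage": "none",
     "maturity": 1, "understandable": True,  "adaptable": "no",     "lang_indep": True},
    {"name": "Sneed (2006)",        "falsifiable": "no",     "usage": "medium",
     "maturity": 2, "understandable": True,  "adaptable": "no",     "lang_indep": False},
]

def passes(t: dict) -> bool:
    # Falsifiability is relaxed (no model is entirely falsifiable);
    # the remaining critical criteria are applied as hard eliminators.
    return (t["usage"] in ("medium", "high") and t["maturity"] >= 1
            and t["understandable"] and t["adaptable"] != "no"
            and t["lang_indep"])

print([t["name"] for t in teems if passes(t)])  # under these assumed
# thresholds, a single model survives for further analysis
```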


Case Study (3/4)

Domain restrictions:
- Competitive situation: supplier oligopoly → budget and schedule constraints must be kept
- Bug corrections are possible after market release
- Highly variable product usage → not all process chains can be tested → prioritization of test cases is important

Organizational goals (especially regarding TEEMs):
- Model results serve as additional input for resource planning → the model must be applicable early in the product life cycle

Organizational restrictions (especially regarding TEEMs):
- Development process: variant of the V-model
- Languages: object-oriented
- Test cases: project-specific extent/coverage, not comparable
- Use case/function point counts: not available


Case Study (4/4)

Fulfillment of critical criteria:
- No model is entirely falsifiable → use partly falsifiable models
- One model with positive usage experience → select it for further analysis

Matching models with domain restrictions:
- Competitive situation → budget and schedule constraints are considered by the remaining model

Matching models with organizational goals:
- The model can be used early in the life cycle (requirements must be defined)

Matching models with organizational restrictions:
- The V-model and object-oriented languages are supported
- Test cases should require similar effort → assumption violated

⇒ No TEEM fits the domain and organizational restrictions


Evaluation: Benefits and Use of the Approach

- Organizational and project goals need to be identified → missing or unclear goals become obvious → valuable input for the project team's work
- Time is dedicated to test effort estimation → reasonable, detailed effort estimation is facilitated
- Effort estimation is not a single person's task but a group task → objectivity instead of subjectivity; activities won't be forgotten as easily (due to cross-checks)
- Project, product, and team characteristics are required as input → the project is analyzed more thoroughly and carefully

⇒ More reliable test effort estimation (even when no model is appropriate)


Evaluation: Lessons Learned

- Identify successfully applied models only (practical application by itself is insufficient)
- Use elimination criteria instead of preselection at the domain level
- Contact peers (other software-developing organizations) and share experience regarding test effort estimation models
- Evaluate local influencing factors of the test effort:
  - Collect data
  - Interview long-time, experienced test coordinators/managers
  - Analyze project data and interviews (graphically, statistically)


Evaluation: Further Research Directions

- The framework needs to be applied in several projects:
  - to reduce the time required
  - to find objective measurements to replace subjective ones
  - to extend or reduce the requirements
- More models need to be identified and analyzed:
  - falsifiable models that do not pretend general applicability and allow all parameter values to be determined locally
  - successfully applied models
- Analytical models explaining test effort and its influencing factors are needed, to be cross-checked in the organization


Bibliography

F. Calzolari, P. Tonella, G. Antoniol, Dynamic Model for Maintenance and Testing Effort, Proceedings of the International Conference on Software Maintenance (ICSM'98), pp. 104-112, 1998.

Joao W. Cangussu, Raymond A. DeCarlo, Aditya P. Mathur, A formal model of the software test process, IEEE Transactions on Software Engineering, Vol. 28, No. 8, pp. 782-796, Aug. 2002.

Suresh Nageswaran, Test effort estimation using use case points, Presentation at the Quality Week 2001, San Francisco, 2001.

Jim Pensyl, Test effort estimation for the design and development of manual functional/regression test scripts, 2002, URL: http://www.perform-testing.com/docs/Testimat.doc

Nozer D. Singpurwalla, Determining an optimal time interval for testing and debugging software, IEEE Transactions on Software Engineering, Vol. 17, No. 4, pp. 313-319, Apr. 1991.

Harry M. Sneed, Stefan Jungmayr, Produkt- und Prozessmetriken für den Softwaretest, Informatik Spektrum, Vol. 29, No. 1, pp. 23-39, Feb. 2006.

