
White Paper

The ever-increasing complexity of today’s software products, combined with greater competitive pressures and the skyrocketing cost of software breakdowns, has pushed the need for testing to new heights. While the pressure to deliver high-quality software products continues to grow, shrinking development and deployment schedules, geographically dispersed organizations, limited resources, and soaring turnover rates for skilled engineers make delivering quality products the greatest challenge. Faced with the reality of having to do more with less while managing multiple projects and distributed project teams, many organizations struggle to manage the quality programs for their products.

In addition, there is continuing pressure to enhance the operational capabilities of teams so that they can produce more and more with a shrinking investment.

This paper illustrates the challenges faced during the different stages of the product functional testing life cycle, viz. test requirements gathering and management, test planning, test strategizing, test execution and test results reporting, along with the best practices that were institutionalized to cope with those challenges, resulting in an effective and efficient testing process and a highly satisfied customer.

The paper also illustrates a comprehensive measurement model, which was adopted to strive for improvement on a continuous basis.

Functional Testing – Challenges & Best Practices

www.infosys.com


Introduction

Quality is an increasingly critical factor for software products as customers become more sophisticated, technology becomes more complex, and the software business becomes extremely competitive. Software quality may look like a simple concept, at least in the literature. In practice, however, it is not so straightforward: requirements change rapidly, and projects are continually understaffed and behind schedule.

And definitely, one of the most important criteria for success of any product is to release the right product at the right time.

The quality of a software system is mainly determined by the quality of the software process that produced it. Similarly, the quality and effectiveness of software testing are largely determined by the quality of the test processes used.

We have to accept that it may not be practicable to test a product fully. The test coverage provided to a product is limited by the size of your test bank, supplemented by some amount of ad-hoc testing. Due to the increased complexity of today’s products, however, test banks have become huge, along with an extensive list of supported hardware-software configurations. Providing coverage for the product’s supported configuration matrix has become as important as providing functional coverage. At the same time, we have a limited amount of resources and time available for providing this coverage. The challenge therefore becomes providing an optimum level of test coverage to the product. That is where effective test management comes into the picture.

There are a number of essential questions for testing – questions about product quality, risk management, release criteria, the effectiveness of the testing process and when to stop testing. Measurement provides the answers to these questions. But once we start to think about what can be measured, it’s easy to be overwhelmed by the fact that we could measure almost anything. This is not practical, however, and we have to prioritize measurement based on which measures are critical and will actually be used once we have them.

Quite often in the world of software development, testing remains a low focus area until software implementation has been almost completed. Obviously, this approach to testing is inadequate in light of the increasingly high demands for software quality and shorter release cycles. As a result, the place of testing in the software lifecycle has expanded.

This paper explores the challenges faced during the functional testing of a series of products from one of the world’s leading Identity and Access Management solution providers, along with the practices that were adopted to cope with those challenges. The paper also provides an insight into a comprehensive measurement program established for the project, leading the team towards continuing operational excellence.


As Ed Kit has rightly said of testing - “It is fundamental to delivering quality software on time and within budget”.

As stated by Albert Einstein – “Not everything that counts can be counted, and not everything that can be counted counts”



The Need for Functional Testing

Functional testing is a means of ensuring that software applications/products work as they should - that they do what users expect them to do. Functional tests capture user requirements in a constructive way, provide both users and developers confidence that the application/product meets those requirements, and enable QA teams to confirm that the software is ready for release.

Functional Testing is an important step for any software development process, whose importance only grows with the complexity of the system being deployed.

Functional Testing - Effectiveness & Efficiency

Similar to the development process, testing requires a systematic approach - including requirements definition, test planning, test design, test execution and analysis - to ensure optimum coverage, consistency and reusability of testing assets.

It begins with gathering the testing requirements and continues through designing and developing tests, executing those tests and analyzing product defects. The testing process is not linear and obviously it differs depending on each organization’s practices and methodologies. The fundamental principles of every testing process, however, remain the same.

The fundamental aim of any test manager is to have an effective and efficient method for organizing, prioritizing and analyzing an organization’s entire testing effort while ensuring effective planning and execution for the various stages in the functional testing life cycle:

• Test Requirements Gathering: Define clear, complete requirements that are testable. Requirements management plays a critical role in the testing of software.

• Test Planning: Identify test-creation standards and guidelines, identify hardware/software for the test environment, assign roles and responsibilities, define test schedule and set procedures for executing, controlling and measuring the testing process.

• Test Strategizing: Devise plans for best possible utilization of the resources allocated for the test cycles ensuring optimum test coverage.

• Test Execution: Devise an efficient test execution flow/mechanism with the institutionalization of various tools, reusable artifacts.

• Defect Management: Associated with Test execution is effective defect management. As today’s systems become more complex, so does the severity of the defects. A well-defined method for defect management will benefit more than just the testing team.

• Test Results Reporting: With increased application complexity and significance, more and more people are interested in the quality of a given product/application. By providing visibility into a product’s health, large sets of stakeholders are able to satisfy themselves as to the expected quality of the product. In addition, senior management and executives are able to easily grasp and act upon critical quality information, acting on issues/exceptions before they turn into real problems. This visibility is only useful if it is easy to find, easy to comprehend and personalized for the individual.

• Test Metrics Collection, Analysis and Improvement: Institutionalize an effective metrics model to gauge the testing process’s health and take improvement actions on a continuous basis.


Functional Testing Life Cycle – Challenges & Best Practices

While ensuring the functional quality of a product is the ultimate objective, the overall functional testing life cycle is constrained by a number of challenges and operational limitations. As the test manager for such a project, I had to face many of these challenges, which over a period of time led me to formulate a number of strategies to deal with them while also instilling a culture of continuous improvement in the project.

Let us take a deeper look at all those challenges and associated best practices.

The first phase we will focus on is:

A) Test Requirements Gathering

Since the PRS (Product Requirements Specification) and the functional specifications for various product features were the only inputs to the test requirements gathering process, the following were the major challenges for the testing team during this phase:

Challenges:

• Define clear and complete test requirements

• Manage changes to requirements

Best Practices:

• Arrange for a PRS presentation by the product management team

• Arrange for product feature presentations from the development team

• Prepare traceability matrices at 2 levels:

o A high level traceability matrix establishing traceability from requirements mentioned in the PRS to features and vice-versa (Refer figure A-1 below for a snapshot of traceability matrix at level-1)

o At the second level, prepare a traceability matrix for each feature, which establishes the traceability between detailed feature functional requirements to test cases and vice-versa (Refer figure A-2 below for a snapshot of traceability matrix at level-2)

• Get the traceability matrices reviewed by the development team for completeness and clarity

• In case of any requirement changes, make corresponding modifications to the traceability matrices at both levels

• Traceability matrices to serve as a starting point for new testers deployed for a feature’s testing

• In summary, collaborate with the development team throughout the testing life cycle, starting with the test requirements gathering stage till product release (Refer figure A-3 below for the collaborative model, which was followed by the project)

Product Name | Requirement Id (as per PRS) | Requirement Description | Feature Name(s) | Traceability Matrix Name (including version) | Test Plan Name (including version)
.............. | 1.1 | <This is my first requirement> | <First Req> | First Req-TraceabilityMatrix-1.2 | First Req-TestPlan-1.2
.............. | .............. | .............. | .............. | .............. | ..............

Figure A-1 – Snapshot of Traceability Matrix at Level-1
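The two-level traceability idea lends itself to a simple machine check for gaps. Below is a minimal, purely illustrative Python sketch; the project itself maintained these matrices as reviewed spreadsheets (figures A-1 and A-2), so the data structures and the untested_requirements helper are hypothetical names, not artifacts of the original project.

```python
# Illustrative sketch: a level-1/level-2 traceability check kept in code.
# All names and sample data are hypothetical; the project maintained these
# matrices as reviewed spreadsheets (figures A-1, A-2).

# Level-1: PRS requirement id -> feature name(s)
level1 = {
    "1.1": ["First Req"],
    "1.2": ["Second Req"],
}

# Level-2: feature name -> {detailed functional requirement -> test case ids}
level2 = {
    "First Req": {
        "First sub-functional operation": ["1.1-A"],
        "Second sub-functional operation": [],          # gap: no test case yet
    },
}

def untested_requirements(level1, level2):
    """Return (requirement id, feature, detail) tuples with no test case,
    i.e. the gaps a development-team review of the matrices should surface."""
    gaps = []
    for req_id, features in level1.items():
        for feature in features:
            details = level2.get(feature)
            if not details:
                gaps.append((req_id, feature, "<no level-2 matrix>"))
                continue
            for detail, test_cases in details.items():
                if not test_cases:
                    gaps.append((req_id, feature, detail))
    return gaps

if __name__ == "__main__":
    for gap in untested_requirements(level1, level2):
        print("GAP:", gap)
```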


Benefits:

• Clear understanding of product requirements achieved through presentations by development

• Institutionalization of traceability matrices resulted in requirements completeness

• Gaps, if any, in the traceability matrices closed after review by the development team

• Reduction in learning time for each feature by the testers, as the traceability matrices are quick and easy to go through

• Effective management of changes to requirements through up-to-date traceability matrices

• The collaborative approach for QA, as indicated in figure A-3 below, proved to be really beneficial, resulting in requirements clarity and completeness, gap minimization and early gap closure. In reality, the collaborative approach, with QA and Development teams working in partnership, has been found to be the Mantra for successful execution of testing projects

Feature Name: 1.1

Sr. No. | Functional Area | Sub-Functional Area | Functional Operation | Sub-Functional Operation | Functional Test Strategy | Test Case Id | Remarks
.... | <This is my first functional area> | <First sub-functional area> | <First functional operation> | <First sub-functional operation> | Strategy for testing First sub-functional operation | 1.1-A | If any
.... | .............. | .............. | .............. | .............. | .............. | .............. | ..............

Figure A-2 – Snapshot of Traceability Matrix at Level-2

Figure A-3 – A Collaborative Approach for QA


B) Test Planning

As per the earlier practice, test plans were prepared directly on the basis of feature functional specifications, which led to the following challenges for the testing team during this phase:

Challenges:

• Functional gaps in the test plans

• Difficulty in review by the development team for large test plans

Best Practices:

• Have the reviewed/approved traceability matrices, as indicated in figure A-2 above, serve as the basis for subsequent test plan generation

Benefits:

• Due to the completeness of the traceability matrices, functional gaps in the test plans are reduced to a minimum

• Reduction in the time required for test plan creation, as the plans are now created on the basis of finalized traceability matrices

C) Test Strategizing

The product to be functionally tested was a really complex one, with a large feature set and a number of supported hardware-software configurations. On the other hand, the time/effort available for testing was limited.

As James Bach rightly said, “The test manager who refuses to face the fact that exhaustive testing is impossible chooses instead to seek an impossible level of testing.”

Hence, we as test managers ought to understand that an optimum level of functional test coverage needs to be provided to the product within the available resources.

Following were the major challenges for the test team during this phase:

Challenges:

• How to utilize the available limited resources/time to provide optimum test coverage to the product functionality while covering the various supported configurations

• To monitor the exact hardware-software configurations for the test environment

Figure C-1 – Snapshot of a Consolidated Functional Coverage Matrix


Best Practices:

• Prioritization of test cases in test plans into levels 1, 2, 3 and 4

• Preparation of a consolidated functional test coverage matrix for the various test cycles to be executed (Refer figure C-1 above for a snapshot of the consolidated functional coverage matrix)

• Preparation of a consolidated configurations coverage matrix for the various test cycles (Refer figure C-2 below for a snapshot of the consolidated configurations coverage matrix)

• Utilize an extensive decision model while deciding on the test matrix for each cycle (Refer figure C-3 below for a snapshot of the QA decision model in place)

• Institutionalize Test Automation:

o Start early on automation: evaluate and finalize the test automation framework in the test planning stage itself, as indicated in figure A-3 above

o Start performing the automation feasibility analysis for various features along with test plan preparation itself

o Initiate tools search and evaluation alongside test planning

o While the product is under implementation, work rigorously towards getting the automation suite up and ready

o Plan to institutionalize daily test harness execution (after creation of a new product build) when the product implementation is mid-way

o The test automation should be portable, so that it can be executed in different hardware-software environments

o Try to make maximum utilization of automated test suites while deciding on your test matrix

• Prepare a detailed test matrix for each test cycle to monitor the details of the test environment to a minute level, and get the same reviewed by the development team before proceeding with the testing (Refer figure C-4 below for a snapshot of the detailed component and functional test matrix for a test cycle)

Figure C-2 – Snapshot of a Consolidated Configurations Coverage Matrix
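One way to picture the “optimum coverage within limited effort” trade-off behind the coverage matrices and decision model (figures C-1 to C-3) is a per-cycle selection of test cases by priority within an effort budget. The sketch below is an assumption-laden simplification: the priority levels 1-4 come from the practice described above, but the TestCase fields, effort figures and the greedy plan_cycle selection are invented for illustration.

```python
# Hypothetical sketch: fill a test cycle's matrix by priority (1 = highest)
# within a fixed effort budget, mirroring the prioritization of test cases
# into levels 1-4 described above. Effort numbers are invented.

from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    feature: str
    priority: int        # 1..4, as in the test plans
    effort_hours: float  # estimated manual execution effort
    automated: bool      # automated cases cost (almost) no manual effort

def plan_cycle(test_bank, budget_hours):
    """Greedy selection: automated cases first (near-zero manual cost),
    then manual cases in priority order until the budget is spent."""
    selected, spent = [], 0.0
    for tc in sorted(test_bank, key=lambda t: (not t.automated, t.priority)):
        cost = 0.0 if tc.automated else tc.effort_hours
        if spent + cost <= budget_hours:
            selected.append(tc)
            spent += cost
    return selected, spent

bank = [
    TestCase("1.1-A", "First Req", 1, 2.0, automated=True),
    TestCase("1.1-B", "First Req", 2, 3.0, automated=False),
    TestCase("2.3-C", "Second Req", 4, 1.5, automated=False),
]
chosen, hours = plan_cycle(bank, budget_hours=4.0)
print([tc.case_id for tc in chosen], hours)   # higher-priority work fits; level-4 may not
```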


Benefits:

• Test case prioritization in the test plans enabled the test team to plan for breadth-oriented testing for low-risk features while focusing in-depth on high-risk features

• Creation of consolidated functional and configuration coverage matrices enabled the team to plan for the whole of the test program at a time and evaluate the test coverage at a glance. This practice resulted in an optimal utilization of the effort available for various test cycles while evaluating the pros and cons of inclusion/exclusion of various features into the test matrix

• Institutionalization of the QA decision model led the test team to plan effectively for upcoming test cycles on the basis of results seen in earlier test cycles

• Starting early with test automation helped the team in catching defects early in the product implementation phase

• Institutionalization of the daily test harness helped quick detection of defects and regressions caused by recent code check-ins

• Portability of the test harness enabled its execution with various supported hardware-software configurations with little or no effort, thereby resulting in maximum utilization of the test harness

• Inclusion of automated test runs into the test matrix enabled the test team to reduce the effort for testing radically

• The practice of having a detailed component and functional test matrix, and its subsequent review by the development team, prevented any ambiguities in setting up the test environment and set an agreement between both development and test teams before going into actual testing

Figure C-3 – Snapshot of Test Strategizing Decision Model

Figure C-4 – Snapshots of Detailed Component and Functional Test Matrices
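The daily-test-harness practice described above can be pictured as a small scheduled job: detect a new product build, run the automated suite against it, and summarize regressions. The sketch below is only an assumed shape for such a job, not the project’s actual harness; latest_build_id and run_automated_suite are placeholders for whatever build system and automation framework a real project uses.

```python
# Assumed sketch of a daily harness job: when a new product build appears,
# run the automated suite and summarize regressions. The two helpers below
# are placeholders for a real build system and test runner.

import json
from pathlib import Path

STATE_FILE = Path("last_build_tested.json")

def latest_build_id() -> str:
    # Placeholder: query the build system / drop folder for the newest build.
    return "product-build-1042"

def run_automated_suite(build_id: str) -> dict:
    # Placeholder: invoke the automation framework against `build_id`
    # and return {test_case_id: "pass" | "fail"}.
    return {"1.1-A": "pass", "1.1-B": "fail"}

def daily_run():
    build = latest_build_id()
    already = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    if already.get("build") == build:
        return  # no new build since the last run; nothing to do
    results = run_automated_suite(build)
    failures = [tc for tc, outcome in results.items() if outcome == "fail"]
    STATE_FILE.write_text(json.dumps({"build": build, "failures": failures}))
    print(f"{build}: {len(failures)} failing case(s): {failures}")

if __name__ == "__main__":
    daily_run()   # in practice, scheduled once a day (cron / CI job)
```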


D) Test Execution

This has been one of the most critical phases of the functional testing life cycle. Once the overall plan and strategy for test execution have been decided, this is the phase in which the test team really comes into action and makes optimum utilization of the time and resources allocated for testing. In spite of detailed and effective planning for the test cycles, test teams sometimes fail to accomplish the planned amount of testing during this phase while juggling various issues related to test environment setup, test case understanding and, moreover, product areas that are not testable due to certain blocking functional issues.

Following were the challenges the test team had to cope with during this phase:

Challenges:

• Testers struggling while learning the product functionality by using the actual product build for the first time, and due to mismatches between test plans and actual functionality

• A large amount of time going towards test environment setup due to product complexity and an extensive set of supported configurations, which need to be set up

• Recurring issues related to test environment setup and test execution, which take enormous time to resolve for most testers, especially those who are new entrants to the test team

• Blocking issues found in the product functional areas lead to extensive re-planning, putting the initial overall plan at stake

Best Practices:

• Set up a process to get the intermediate product builds well before the formal QA hand-off. This process is also known as in-process testing (as indicated in figure A-3 above)

• Institutionalize usage of re-usable test environments through various tools, viz. VMware images, Ghost images etc.

• Establish a knowledge repository of the various problems encountered during test environment setup/test execution and their possible solutions (a minimal sketch of such a repository follows this list). This repository can be searched quickly, especially by novice testers, instead of re-inventing the wheel. Whenever anyone in the team solves a particular problem, they log the problem and the corresponding solution in this repository. Moreover, an email is generated at the same time to the whole team containing the problem and its solution
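A minimal sketch of the problems/solutions knowledge repository mentioned above: each resolved issue is appended to a searchable store and a notification is drafted for the whole team. The JSON storage, the log_solution/search helpers and the sample entries are all assumptions; the paper describes the practice, not a specific tool.

```python
# Hypothetical sketch of the problems/solutions knowledge repository:
# log each solved environment/execution issue, make it keyword-searchable,
# and draft a notification for the whole team.

import json
from pathlib import Path

REPO_FILE = Path("qa_knowledge_repo.json")

def log_solution(problem: str, solution: str, author: str) -> str:
    entries = json.loads(REPO_FILE.read_text()) if REPO_FILE.exists() else []
    entries.append({"problem": problem, "solution": solution, "author": author})
    REPO_FILE.write_text(json.dumps(entries, indent=2))
    # The practice also emails the team at the same time; here we only
    # compose the text and leave delivery to whatever mail setup exists.
    return f"[QA knowledge repo] {author} solved: {problem}\n\n{solution}"

def search(keyword: str):
    """Novice testers search here before spending time re-diagnosing an issue."""
    if not REPO_FILE.exists():
        return []
    entries = json.loads(REPO_FILE.read_text())
    kw = keyword.lower()
    return [e for e in entries
            if kw in e["problem"].lower() or kw in e["solution"].lower()]

if __name__ == "__main__":
    mail_body = log_solution(
        "Directory server setup fails on port 389 in the VMware image",
        "Stop the conflicting service, then re-run the configuration wizard.",
        "tester1",
    )
    print(mail_body)
    print(search("directory"))
```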

Benefits:

• Starting with the in-process QA well before the start of formal test cycles enabled the test team to learn the functionality by playing around with the product

• Functional gaps between the actual product and the test plan were closed early, before going into formal functional testing

• The test team was able to identify and report defects for the blocking functional areas well before going into formal functional test cycles, resulting in effective utilization of the time allocated for formal test cycles and well-planned testing

• An overall reduction of approx. 25% was realized in the time going towards test environment setup due to the reusability achieved with the usage of VMware/Ghost images. The test environment set up by one tester can now be reused by multiple other testers. Moreover, the setups for third party servers/components can be preserved as VMware images for usage during subsequent test runs

• The institutionalization of the Problems/Solutions knowledge repository resulted in approx. 30% reduction in the time going towards issue resolution. Any tester now facing a problem has to search the repository for a solution before going ahead with spending time on resolving the same


E) Defect Management

Effective defect tracking, analysis and reporting are critical steps in the testing process. A well-defined method for defect management will benefit more than just the testing team. Extra time spent on preparing a defect profile and its history often pays off through easier analysis, shorter resolution times and better product quality. This applies not only to the new defects reported against the product, but also to the defects reported during earlier test cycles and the defects to be verified. The defect statistics are a key parameter for evaluating the health of the product at any stage during test execution.

Following were the challenges faced by the test team while managing the defects for the product:

Challenges:

• Incomplete and ambiguous defect reporting, resulting in too many defects coming back as “Needs More Information” from development’s end

• Inconsistency in the structure of defects reported by various test team members

• Testers assigning improper Severity/Priority to defects

• Testers opening new defects for problems already reported by other testers as defects during previous test cycles

• Too many defects being marked as “Not-A-Defect” by development due to operator errors

• Inappropriate verification procedure for the fixed defects, which come back for verification

• Inadequate tracking of the various defects logged to a logical closure

• Inappropriate tracking of the defect trend, resulting in a lack of insight into the current health of the product

Best Practices:

• Institutionalize standard templates for defect profile preparation and defect verification profile preparation, having comprehensive details for each defect (Refer figures E-1 (a) and E-1 (b) below for snapshots of these templates)

• Establish clear definitions for Defect Severity and Priority. Train test team members on the same and have all defects go through a thorough review by the test lead before reporting

• Maintain an updated list of the various defects logged against the product till date, containing appropriate details for these defects, which testers can refer to as and when they encounter a defect (Refer figure E-2 for a snapshot of a sample defects list for a product)

• Coordinate and discuss suspicious defects with the development team before reporting

• Establish a process wherein the development team specifies the procedure of verification for all the fixed defects in the defect tracking system. Also, attach the verification profile for all the verified defects in the defect tracking system

• Institutionalize various defect metrics, which are tracked, analyzed and reported on a continuous basis to various stakeholders (Refer figure E-3 for snapshots of a few defect metrics, which were tracked)

Figure E-1 (a) Defect Profile Template – A Snapshot
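The defect profile template itself appears only as a figure, so the field list below is an assumption about what such a template typically captures (steps to reproduce, environment, severity/priority and so on), expressed as a small data structure so that every tester logs the same shape of information and incomplete profiles can be caught before they bounce back as “Needs More Information”.

```python
# Assumed sketch of a standard defect profile, in the spirit of the template
# in figure E-1 (a). The exact fields of the project's template are not
# reproduced in the paper, so this field list is illustrative only.

from dataclasses import dataclass, field

@dataclass
class DefectProfile:
    defect_id: str
    summary: str
    feature: str
    build: str                      # product build on which the defect was seen
    environment: str                # OS / hardware-software configuration
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    severity: str = "Major"         # per the project's agreed severity definitions
    priority: str = "P2"            # per the project's agreed priority definitions
    attachments: list = field(default_factory=list)   # logs, screenshots

    def is_complete(self) -> bool:
        """A lead-review gate: reject profiles likely to bounce back as
        'Needs More Information'."""
        return bool(self.steps_to_reproduce and self.expected_result
                    and self.actual_result and self.environment)

d = DefectProfile("DEF-101", "Login fails after password reset", "First Req",
                  "product-build-1042", "Windows Server + directory server",
                  steps_to_reproduce=["Reset password", "Log in with new password"],
                  expected_result="User is logged in",
                  actual_result="Error 500 returned")
print(d.is_complete())   # True only when the reviewer-required fields are filled
```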


Benefits:

• Standard templates for defect profile and defect verification profile resulted in consistent and comprehensive defects being logged by various test team members

• Assignment of appropriate Severity and Priority to the various defects logged enabled the development team to focus on these defects in a well-organized manner

• Referring to the list of the defects logged against the product till date enabled the test team to avoid any “Duplicate” defects as well as to track every defect to a logical closure

• Coordination with the development team on suspicious defects brought about a considerable reduction in the number of defects being marked as “Operator Errors”

• Specifying the verification procedure in the defect tracking system enabled the development team to validate the verification procedure followed by QA for any defect, resulting in an increased consistency in the verification procedure

• Institutionalization of various defect metrics resulted in an accurate and quick insight into the product’s health and focused efforts in the right product areas

Figure E-1 (b) Defect Verification Profile Template – A Snapshot

Figure E-2 List of Defects Logged against the Product – A Running List

The severity distribution of defects provided a quick insight into the product quality.

The priority distribution of defects, in conjunction with the severity distribution, provided an assessment of release readiness for the product.


The functional defect trend for the product enabled various stakeholders to have a quick insight into product health and to prioritize development efforts.

This metric is used to track the various defects logged against the product to a logical closure. The defects that are still Open are the primary area of concern, apart from the ones marked as “Not A Bug”.

This metric serves as an indicator of the effectiveness of the testing process and test execution. If you see a good percentage of defects marked as “Operator Errors”, it indicates that the test engineer’s understanding of the functionality is low or that there have been gaps in the requirements document.

Figure E-3 - A Few Defect Metrics
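The defect metrics in figure E-3 (severity and priority distributions, the cycle-wise trend, the share of “Not-A-Defect”/operator-error closures) reduce to simple counts over the running defect list of figure E-2. The sketch below assumes defect records with severity, priority, status and cycle fields; the field names and sample data are illustrative, not the project’s actual records.

```python
# Illustrative computation of a few of the defect metrics in figure E-3
# from the running defects list (figure E-2). Field names are assumptions.

from collections import Counter

defects = [
    {"id": "DEF-101", "severity": "Critical", "priority": "P1", "status": "Fixed",     "cycle": 1},
    {"id": "DEF-102", "severity": "Major",    "priority": "P2", "status": "Open",      "cycle": 1},
    {"id": "DEF-103", "severity": "Minor",    "priority": "P3", "status": "Not A Bug", "cycle": 2},
    {"id": "DEF-104", "severity": "Major",    "priority": "P1", "status": "Fixed",     "cycle": 2},
]

severity_distribution = Counter(d["severity"] for d in defects)
priority_distribution = Counter(d["priority"] for d in defects)
trend_per_cycle       = Counter(d["cycle"]    for d in defects)

# "Not A Bug"/operator-error share: an indicator of test-team understanding
# and of gaps in the requirements documents, as noted above.
not_a_bug_pct = 100.0 * sum(d["status"] == "Not A Bug" for d in defects) / len(defects)

print(severity_distribution)   # Counter({'Major': 2, 'Critical': 1, 'Minor': 1})
print(priority_distribution)   # Counter({'P1': 2, 'P2': 1, 'P3': 1})
print(trend_per_cycle)         # Counter({1: 2, 2: 2})
print(f"Not-A-Bug share: {not_a_bug_pct:.0f}%")   # 25%
```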


F) Test Results Reporting

Once the test team is done with a test cycle, or even at intermediate stages, it is desirable to have visibility into the quality of the product. With the increase in both product complexity and significance, more and more people are interested in the quality of a given software product. By providing this visibility, large sets of stakeholders are able to assure themselves as to the anticipated quality of an application. In addition, senior management and executives are able to easily grasp and take action on critical quality information, acting on exceptions before they turn into real problems.

Following were the challenges faced by the test team during the test results reporting phase:

Challenges:

• The test results report should be easy to comprehend and should be useful for all the stakeholders, starting with the testers in the team up to the senior-most executive having an interest in the product

• It should provide a status of the feature-wise health of the product along with feature-wise defects data for easy decision-making towards a Go/No-Go for the product

Best Practices:

• Simple metrics, charts and graphs are often preferable to text, as they are easy to comprehend and because they highlight exceptions. A comprehensive test results report template was prepared in a spreadsheet format, which was then reviewed and approved by the customer, containing the following details:

o Component Version Details: This sheet provides the details of versions for various hardware-software used for functional testing

o Test Matrix: This sheet provides the actual test matrix used for testing

(Refer figure C-4 above for a snapshot of detailed component and functional test matrices for the test cycle)

o Result Summary report: This sheet provides the overall summary of the test cases executed and effort spent during the testing for each feature

o New Defect Details: This sheet provides the details for the new defects found during the current test cycle

o Old Defect Details: This sheet provides the details for the defects, which were filed during some earlier test cycles for the product and have also been observed during the current test cycle

o Feature-Risk Analysis: This sheet provides the risk associated with each feature on a scale of High (red), Medium (yellow) and low (green) based on the test cases which have failed, are blocked or were not executed for the feature

o Feature-Test Case Percentage-Chart-<Platform>: This chart provides the distribution of percentage of test cases passed, failed and not executed for each feature on <Platform> platform. There will be one such sheet for each platform tested

o Feature-Test Case Count-Chart-<Platform>: This chart provides the distribution of number of test cases passed, failed and not executed for each feature on <Platform> platform. There will be one such sheet for each platform tested

o Defect Severity Distribution: This chart provides the distribution of defects based on Severity

o Defect Priority Distribution: This chart provides the distribution of defects based on Priority

(Refer figure E-3 above for snapshots of defect severity/priority distribution)

o Effort-Feature-Chart-<Platform>: This chart provides the effort distribution per feature basis executed on <Platform> platform. There will be one such sheet for each platform tested

(Refer figure F-1 below for snapshots of sample graphs/charts included in the detailed test summary report)


Provides an insight into the features which have taken more execution time. Helps devise future strategies to reduce the effort for such features

Provides a more detailed feature-wise summary for the stakeholders interested in the same

Provides a high level feature-wise quality picture for the product


Provides a risk assessment for each feature from release readiness point-of-view

Figure F-1 – Components of A Detailed Test Summary Report
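Assembling such a multi-sheet spreadsheet report can be scripted once and reused every cycle. The sketch below writes a Result Summary sheet and a Feature-Risk Analysis sheet with pandas; the High/Medium/Low thresholds are invented for illustration, since the paper does not state the exact rating rule the project agreed with its customer.

```python
# Illustrative script for assembling part of the spreadsheet-format test
# results report described above (Result Summary and Feature-Risk Analysis
# sheets). The risk thresholds are invented; the project agreed its own rule.

import pandas as pd

results = pd.DataFrame([
    {"feature": "First Req",  "passed": 40, "failed": 2, "blocked": 0, "not_executed": 3, "effort_hrs": 30},
    {"feature": "Second Req", "passed": 10, "failed": 6, "blocked": 4, "not_executed": 5, "effort_hrs": 22},
])

def risk(row):
    """Rate feature risk from the share of failed/blocked/not-executed cases."""
    total = row["passed"] + row["failed"] + row["blocked"] + row["not_executed"]
    bad = (row["failed"] + row["blocked"] + row["not_executed"]) / total
    return "High (red)" if bad > 0.3 else "Medium (yellow)" if bad > 0.1 else "Low (green)"

risk_sheet = results[["feature"]].copy()
risk_sheet["risk"] = results.apply(risk, axis=1)

with pd.ExcelWriter("test_summary_report.xlsx") as writer:
    results.to_excel(writer, sheet_name="Result Summary", index=False)
    risk_sheet.to_excel(writer, sheet_name="Feature-Risk Analysis", index=False)
# Further sheets (test matrix, new/old defect details, per-platform charts)
# would be appended in the same way, one sheet per item in the agreed template.
```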

Benefits:

• The comprehensive test summary report provided clear visibility into the product quality while having personalized views for different stakeholders. Along with the metrics, which provided detailed test case, defect and effort counts, there were metrics like Feature Risk Assessment or feature-wise percentage pass/fail rates, which provided a high-level snapshot of the product health.

• The comprehensive test summary report received tremendous appreciation from the customer


G) Test Metrics Collection, Analysis & Improvement

Test metrics are an important indicator of the effectiveness of a software testing process. Areas for process improvement can be identified based on the analysis of the defined metrics, and subsequent improvements can be targeted.

Hence, Test Metrics Collection, Analysis and Improvement is not just a single phase in the testing life cycle; rather, it acts as an umbrella of continuous improvement over the whole testing life cycle.

Challenges:

• The need for a mechanism for measuring the efficiency, effectiveness and quality of the testing process so as to identify the areas of improvement

• The need for measuring the program/product health objectively

Best Practices:

• Identify a set of process/product metrics to be tracked on a continuous basis. Refer Figure G-1 below for a summary of the various metrics institutionalized.

Refer Figures G-2, G-3 and G-4 below for snapshots of test metrics adopted for test process efficiency, effectiveness and quality respectively.

Refer Figures G-5 and G-6 below for snapshots of test metrics institutionalized for Product/Program Health and Defect Tracking respectively.

• Develop dashboards for these metrics and share these dashboards with the customer at 2 levels: at test program level, at regular intervals

• Analyze these metrics regularly and take improvement actions

Figure G-1 – Summary of the Test Metrics Institutionalized

Benefits:

• The dashboards enabled integrated, accurate and actual reporting, with a holistic view of metrics and graphical charts to facilitate decision making

• Analysis of the metrics provides an insight into the process maturity and the areas where improvements can be targeted

• Data in the graphical form is easily interpretable

• The customer gets a formalized mechanism to assess the effectiveness and efficiency of the QA process

• A useful way to educate the team on the importance of various process parameters and to eliminate the problem areas in a planned manner

• An effective way to present to Senior Management


Test Process Efficiency Metrics in Practice (Figure G-2):

a) Test Execution Productivity

Shows manual test execution productivity with respect to upper & lower control limits set for the project – Depicts execution efficiency, helps identify problematic areas & improve, where feasible

Shows combined (manual + automated) test execution productivity – Depicts execution efficiency, helps identify problematic areas & improve, where feasible

b) Cost of Testing

Shows phase-wise effort distribution – Depicts intensive effort areas to focus on for improvement


c) Average Defect Turnaround Time

Depicts average verification time taken for defects of Priority-1 – Indicates operational efficiency of the test team and helps in identification of areas for improvement.

(Similar trends are tracked for defects of other priority levels also)

Depicts average response time taken for defects of Priority-1, when the defect is set as “Needs More Info” asking for more information from test team – Indicates operational efficiency of the test team and helps in identification of areas for improvement.

(Similar trends are tracked for defects of other priority levels also)

d) Test Automation Productivity Trends

Shows trends in productivity of test case automation - Depicts changes in performance levels of automation team and helps identify problems, if any.

e) Test Case Automation Trends

Shows trends in the amount of work done by the automation team – Helps identify time intervals having lower automation and take remedial actions.

Figure G-2 – Test Process Efficiency Metrics in Practice
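Two of the efficiency metrics above, test execution productivity against control limits and average Priority-1 defect turnaround time, are straightforward to compute from cycle logs and defect timestamps. The control-limit values and sample data in the sketch below are invented; only the arithmetic mirrors the metrics described.

```python
# Illustrative computation of two efficiency metrics from figure G-2:
# (a) test execution productivity vs. project control limits, and
# (c) average defect turnaround (verification) time for Priority-1 defects.
# The control limits and sample data are invented for the sketch.

from datetime import datetime
from statistics import mean

LOWER_CL, UPPER_CL = 2.0, 4.0   # test cases executed per person-hour (assumed limits)

cycles = [
    {"cycle": 1, "cases_executed": 120, "effort_hours": 50},
    {"cycle": 2, "cases_executed": 90,  "effort_hours": 48},
]
for c in cycles:
    productivity = c["cases_executed"] / c["effort_hours"]
    flag = "" if LOWER_CL <= productivity <= UPPER_CL else "  <-- outside control limits"
    print(f"cycle {c['cycle']}: {productivity:.2f} cases/hour{flag}")

p1_defects = [
    {"id": "DEF-101", "fixed_on": datetime(2012, 3, 1), "verified_on": datetime(2012, 3, 2)},
    {"id": "DEF-104", "fixed_on": datetime(2012, 3, 5), "verified_on": datetime(2012, 3, 8)},
]
turnaround_days = mean((d["verified_on"] - d["fixed_on"]).days for d in p1_defects)
print(f"average P1 verification turnaround: {turnaround_days:.1f} days")   # 2.0 days
```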


Test Process Effectiveness Metrics in Practice (Figure G-3):

a) Functional Test Coverage (Feature-wise) Functional Test Coverage (Overall)

Shows feature-wise & priority-wise % test execution – Depicts detailed test coverage at a glance, a mechanism to validate the test strategy

Shows overall % test execution – Depicts overall functional test coverage at a glance

b) Defects Automated/Added to Test Plans

Depicts – No. of test cycle-wise defects verified, automated and added to test plans. Serves as an operational effectiveness parameter for the test team.

c) Failed Test Cases/Hr, Failed Test Cases/Total Test Cases Executed

Depicts effectiveness of testing as well as cost of catching failures


d) Test Automation Coverage

Shows the trends in automation coverage quarter-by-quarter – Depicts current automation coverage and helps focus efforts towards further automation.

e) Effort Savings through Test Automation

Shows the feature-wise savings in execution time achieved through test case automation – Helps evaluate the benefits (ROI) of test automation, share them with customer/senior management and decide on the future roadmap

Figure G-3 – Test Process Effectiveness Metrics in Practice
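The effort-savings metric in figure G-3 (e) amounts to comparing, feature by feature, the manual execution time the automated cases would have taken against the time the automated runs actually consume. The numbers below are invented; only the arithmetic is the point.

```python
# Illustrative calculation of effort savings through test automation
# (figure G-3 (e)). All numbers are invented; the metric is simply
# (manual execution time avoided) minus (automated run time).

features = [
    # feature, automated cases, manual hrs/case, automated run hrs (suite), runs per release
    ("First Req",  80, 0.25, 1.0, 6),
    ("Second Req", 40, 0.50, 0.5, 6),
]

for name, cases, manual_hrs_per_case, auto_run_hrs, runs in features:
    manual_cost = cases * manual_hrs_per_case * runs   # same coverage, done manually
    auto_cost = auto_run_hrs * runs
    print(f"{name}: saved {manual_cost - auto_cost:.1f} person-hours per release")
# First Req:  saved 114.0 person-hours per release   (120 manual vs 6 automated)
# Second Req: saved 117.0 person-hours per release   (120 manual vs 3 automated)
```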

Test Process Quality Metrics in Practice (Figure G-4):

a) Percentage of Defects Marked as Operator Errors

Depicts the quality of functional test team’s work. Having more operator errors is an area of concern and needs to be looked into

Figure G-4 – Test Process Quality Metrics in Practice


Product/Program Health Metrics in Practice (Figure G-5):

a) Feature Sensitivity

Shows the % of failed test cases for various product features over multiple test cycles – Depicts sensitive features

b) Feature Sensitivity

Shows feature-wise sensitivity – Depicts defect-prone feature(s)

Figure G-5 – Product/Program Health Metrics in Practice
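Feature sensitivity, as used above, is simply the failure rate per feature tracked across test cycles. A minimal sketch with invented counts:

```python
# Minimal sketch of the feature-sensitivity metric (figure G-5): the
# percentage of failed test cases per feature, per test cycle. Data invented.

executions = {
    # (feature, cycle): (failed, executed)
    ("First Req", 1): (4, 50), ("First Req", 2): (1, 48),
    ("Second Req", 1): (9, 30), ("Second Req", 2): (8, 32),
}
for (feature, cycle), (failed, executed) in sorted(executions.items()):
    print(f"{feature}, cycle {cycle}: {100.0 * failed / executed:.1f}% failed")
# A feature whose failure rate stays high across cycles (here "Second Req")
# is flagged as sensitive / defect-prone and gets deeper coverage next cycle.
```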


Defect Metrics in Practice (Figure G-6):

a) Test Cycle-wise Defects

Shows new defects logged during various test cycles – Depicts how the product quality fared over multiple test cycles

Depicts (new + old) defects logged during various test cycles. The old defects are the ones that were reported during an earlier test cycle and are detected during the current test cycle also.

Figure G-6 – Defect Metrics in Practice


Continuous improvement is the key to success of any process. Having illustrated the metrics model in place, we will have to continuously enhance the metrics model to strive for continuous improvement.

As H. James Harrington truly said - “The journey towards excellence is a never ending job”.

References:

• Software Engineering: A Practitioner’s Approach, 6/e by Roger S. Pressman, R.S. Pressman and Associates, ISBN: 0072853182, Copyright year: 2005

• A Framework for Good Enough Testing by James Bach, Reliable Software Technologies

• Software Testing in the Real World – Improving the Process, by Edward Kit


About the Author

Mandeep Walia is a Group Project Manager with Infosys Technologies Ltd. He has over 13 years of IT experience encompassing Software Development, Maintenance, Testing and Professional Services. He is certified as a Project Management Professional (PMP) by PMI and a Certified Software Quality Analyst (CSQA) from QAI, USA. During his career at Infosys, Mandeep has managed multiple large and complex software programs for Fortune 500 companies.

Appendix

Common Terms Used in This Document:

• QA – Quality Assurance

• IT – Information Technology


© 2013 Infosys Limited, Bangalore, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document.

About Infosys

Infosys partners with global enterprises to drive their innovation-led growth. That's why Forbes ranked Infosys 19 among the top 100 most innovative companies. As a leading provider of next-generation consulting, technology and outsourcing solutions, Infosys helps clients in more than 30 countries realize their goals. Visit www.infosys.com and see how Infosys (NYSE: INFY), with its 150,000+ people, is Building Tomorrow's Enterprise® today.

For more information, contact [email protected] www.infosys.com