
International Journal of Computer Engineering & Technology (IJCET)
Volume 8, Issue 4, July-August 2017, pp. 108–126, Article ID: IJCET_08_04_013
Available online at http://www.iaeme.com/ijcet/issues.asp?JType=IJCET&VType=8&IType=4
Journal Impact Factor (2016): 9.3590 (Calculated by GISI), www.jifactor.com
ISSN Print: 0976-6367 and ISSN Online: 0976-6375
© IAEME Publication

IMPACT OF SOFTWARE TESTING METRICS ON SOFTWARE MEASUREMENT

Monika

Research Scholar, DCSE, GJUS & T, Hisar, India

Pradeep Kumar Bhatia

Professor, DCSE, GJUS & T, Hisar, India

ABSTRACT

Suppose we have developed a piece of software. Now we have to check its quality and functionality. Where does this newly developed software stand among previously released software of similar functionality? Is it better or worse? How can we further improve its quality and performance? How do we find the answers to all of these questions? This is the point where we make use of metrics. Software metrics measure software, providing the developer with information about its quality, completeness and performance. This information can help the organization or developer to improve the software in many respects. In this paper, testing metrics are studied from different views. Testing metrics can be classified with respect to process, project and product, or alternatively as manual, automation, common and performance testing metrics. To see which classification covers the most types of testing metrics, we combine the two classifications and examine the effects.

Keywords: Software Testing Metrics, Tool Command Language, Test case productivity, Requirement stability index, Test execution, Performance test efficiency.

Cite this Article: Monika, Pradeep Kumar Bhatia, Impact of Software Testing

Metrics on Software Measurement. International Journal of Computer Engineering &

Technology, 8(4), 2017, pp. 108–126.

http://www.iaeme.com/ijcet/issues.asp?JType=IJCET&VType=8&IType=4

1. INTRODUCTION

A metric is defined as a unit of measurement that computes results. Metrics used for the evaluation and quantification of software are called software metrics. The information they provide is used to improve processes, products and services. A software metric characterizes a software system and places it at a specific position on a quantitative scale.

Nowadays, software measurement has become an essential and crucial part of software development. With its help, we can focus on specific characteristics of software and improve them. However, further involvement of software measurement is still required as far as quality evaluation is concerned [4, 5, 6, 7]. Most of the


use of software metrics focuses mainly on cost assessment. However, there are a number of aspects of software that need to be measured, and measuring them can provide a great deal of improvement.

The essential qualities of a software metric are:

Easy to understand

Precise and complete data

Selection of metrics based on importance to stakeholders

Data corresponding to either amendment or decision making

Independent, robust, reliable and valid

Consistent and used over time

2. SOFTWARE METRICS LIFE CYCLE

The various steps involved in the life cycle of a software metric are as follows:

Categorization and prioritization of metrics

Classification of project-specific metrics

Collection of data for processing of metrics

Communication with stakeholders/customers

Capturing and verification of data

Analysis and dissemination of data

Reporting of results

Figure 1 Analytical lifecycle of a metric: Define → Measure → Analyze → Action → Improve/Eliminate

3. SOFTWARE TESTING

Software testing is a process to ensure the correctness of software, i.e. that the software developed meets the standards set at the time of planning and is able to uphold the expectations of the customers [2, 3].

There are basically two categories of software testing:

Manual testing: Test cases are prepared and executed manually by the concerned testing engineer [1, 11]. It is laborious work that requires patience, skill, creativity, an innovative approach and open-mindedness.



Major problems of manual testing are:

very time consuming,

high investment in human resources,

low reliability,

non-programmable and error prone

Figure 2 Types of Software Testing

Automation testing: Programming languages such as JavaScript, Python or TCL (Tool Command Language) are used for the preparation and execution of test cases. Automation can reduce human effort and cost to a great extent [9].

Major benefits of automation testing are:

cost effectiveness, reusability and repeatability

use of programming languages

easy to understand, more reliable

4. TYPES OF SOFTWARE METRICS

Software metrics are classified into many categories. One method is to classify them into manual testing metrics, performance testing metrics, automation testing metrics and common testing metrics. In another classification, they are categorised as project, process and product metrics.

4.1. Testing Metrics Classification I

There are mainly four types of testing metrics:

Manual testing metrics

Performance testing metrics

Automation testing metrics

Common testing metrics


Figure 3 Software Testing Metric

4.1.1. Manual Testing Metrics

Test case productivity metrics (TCP): This provides information about test progression. It represents the number of test case steps designed per hour.

Test case productivity = total no. of test steps / effort in hours

For example:

Table 1 Test case productivity

Test case name    No. of steps
A1                44
A2                39
A3                26
A4                37
A5                24
Total             170

Total effort taken = 9 hours

TCP = 170/9 = 18.88 steps/hour
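The computation above is easy to script. Here is a minimal sketch in Python, assuming the test cases are held in a simple dictionary; the function and variable names are illustrative, not from any particular tool, and the step counts mirror Table 1.

    # Minimal sketch of the Test Case Productivity (TCP) metric from Table 1.
    def test_case_productivity(steps_per_case, effort_hours):
        """TCP = total number of test steps / effort in hours."""
        total_steps = sum(steps_per_case.values())
        return total_steps / effort_hours

    steps = {"A1": 44, "A2": 39, "A3": 26, "A4": 37, "A5": 24}
    tcp = test_case_productivity(steps, effort_hours=9)
    print(f"TCP = {tcp:.2f} steps/hour")  # TCP = 18.89 steps/hour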

The test case productivity of previous releases can be compared graphically for better understanding. An example is shown below in figure 4.


Figure 4 TC productivity trend (9.7, 11.5, 13.8 and 18.8 steps/hour for the Mar/17–Jun/17 releases)

Test execution summary metrics: This represents the status of test cases, i.e. whether they failed, succeeded or were not run. Figure 5 shows an example of a pie chart giving a statistical summary of the test execution results of a software's test cases.

Figure 5 Test execution summary trend (Pass 55%, Fail 25%, Not run 20%)

Defect acceptance summary: This gives the number of valid defects identified during the execution process, as a percentage of all reported defects.

Defect acceptance = [no. of valid defects / total no. of defects] * 100

Table 2 Defect acceptance of various releases

Releases    Valid defects    Total defects    Defect acceptance (%)
I           44               92               48
II          65               102              64
III         67               123              54
IV          60               87               68

Like Test case productivity, defect acceptance of previous releases can also be compared

graphically for better understanding. An example is shown below in figure 6.

Figure 6 Defect acceptance trend (48%, 64%, 54% and 68% for releases I–IV)



Defect rejection: This gives the number of defects rejected during the execution process, as a percentage of all reported defects.

Defect rejection = [no. of rejected defects / total no. of defects] * 100

Table 3 Defect rejection of various releases

Releases    Invalid defects    Total defects    Defect rejection (%)
I           13                 80               16
II          14                 83               17
III         17                 128              13
IV          21                 100              21

Figure 7 Defect Rejection Trend

A comparison of the defect rejection percentages of previous releases is shown in figure 7.
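To make the two ratios concrete, here is a minimal sketch computing defect acceptance and rejection; the function names are illustrative, and the figures come from the release I rows of Tables 2 and 3.

    # Minimal sketch of the defect acceptance and defect rejection metrics.
    def defect_acceptance(valid_defects, total_defects):
        """Defect acceptance = (valid defects / total defects) * 100."""
        return 100.0 * valid_defects / total_defects

    def defect_rejection(invalid_defects, total_defects):
        """Defect rejection = (rejected defects / total defects) * 100."""
        return 100.0 * invalid_defects / total_defects

    print(f"Release I acceptance: {defect_acceptance(44, 92):.0f}%")  # 48%
    print(f"Release I rejection:  {defect_rejection(13, 80):.0f}%")   # 16%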

Bad fix defect metrics: Defects identified in the testing process are resolved by the testing engineer. If a defect is fixed, but the resolution is done so wrongly that it gives rise to a new defect, the new defect is called a bad fix defect. This metric therefore measures the effectiveness of the defect resolution process.

Bad fix defect = [no. of bad fix defects / total no. of valid defects] * 100

Test execution productivity metrics: This determines the average number of tests executed per day, i.e. executions per day.

Test execution productivity = [total no. of test cases executed (Te) / execution effort in hours] * 8 (assuming an 8-hour working day, as in the worked example below)

where

Te = BTC + [T(1)*1] + [T(0.66)*0.66] + [T(0.33)*0.33]

BTC = no. of test cases executed at least once

T(1) = no. of test cases retested with 71% to 100% of total test case steps

T(0.66) = no. of test cases retested with 41% to 70% of total test case steps

T(0.33) = no. of test cases retested with 1% to 40% of total test case steps



Table 4 Test execution productivity

TC    Base run effort    Rerun 1 status    Rerun 1 effort (hr)    Rerun 2 status    Rerun 2 effort (hr)    Rerun 3 status    Rerun 3 effort (hr)
A1    1.3                T(0.66)           1                      T(0.33)           0.52                   T(1)              2
A2    1.6                T(0.33)           0.4                    T(1)              2
A3    2                  T(1)              1.2
A4    2.3                T(1)              2
A5    2.7

Table 5 Types of test case and values

Base test cases (BTC)    5
T(1)                     4
T(0.66)                  1
T(0.33)                  2
Total effort (hr)        19.65

Te = 5 + (4*1) + (1*0.66) + (2*0.33) = 10.32

Test execution productivity = [10.32/19.65]*8 = 4.2 executions/day
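The weighted Te computation can be expressed directly in code. This minimal sketch reproduces the worked example from Table 5; the 8-hour day matches the example, and the function names and the dictionary keyed by retest weight are illustrative choices.

    # Minimal sketch of the weighted test-execution-productivity metric.
    def weighted_executions(btc, retest_counts):
        """Te = BTC + sum(weight * count) over the retest buckets."""
        return btc + sum(weight * count for weight, count in retest_counts.items())

    def execution_productivity(te, effort_hours, hours_per_day=8):
        """Executions per day = (Te / effort in hours) * hours per day."""
        return te / effort_hours * hours_per_day

    te = weighted_executions(btc=5, retest_counts={1.0: 4, 0.66: 1, 0.33: 2})
    print(f"Te = {te:.2f}")                                    # Te = 10.32
    print(f"{execution_productivity(te, 19.65):.1f} exe/day")  # 4.2 exe/day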

A comparison of previous releases has been shown in figure 8.

Figure 8 Test execution productivity trend (3.1, 3.5 and 4.2 executions/day for Apr/17–Jun/17)

Test efficiency: This determines how efficient the testing team is at identifying defects. It can also be read as an indicator of the number of defects missed during the testing process.

Test efficiency = [DT / (DT + DU)] * 100

where

DT = no. of valid defects identified during testing

DU = no. of valid defects identified by the user after release of the software.

Figure 9 shows an example test efficiency pattern of previous releases.



Figure 9 Test efficiency trend (100%, 96% and 98% for Apr/17–Jun/17)

Defect severity index (DSI): This represents the quality of the software, both while under test and at the time of release of the product. The release decision depends highly upon this metric.

Figure 10 Defect severity index trend, all statuses (4 low, 6 medium, 8 high and 11 critical defects; DSI for all status = 2.5)

Defect severity index = ∑ (severity index * no. of valid defects of that severity) / total no. of valid defects

DSI can be categorised into two types:

DSI for all status defects – represents the quality of the product under the testing process

DSI for open status defects – represents the quality of the product when released

DSI (open) = ∑ (severity index * no. of open valid defects of that severity) / total no. of open valid defects
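A minimal sketch of the DSI computation follows, using the open-status counts from Figure 11. Assigning severity indices 1 (low) through 4 (critical) is an assumption on our part, but it reproduces the DSI of 3.0 quoted in the figure.

    # Minimal sketch of the Defect Severity Index (DSI).
    # Severity indices 1..4 (low..critical) are an assumption; with the
    # open-defect counts from Figure 11 they reproduce DSI = 3.0.
    def defect_severity_index(defects_by_severity):
        """DSI = sum(index * count) / total count of valid defects."""
        weighted = sum(index * count for index, count in defects_by_severity.items())
        total = sum(defects_by_severity.values())
        return weighted / total

    open_defects = {1: 1, 2: 1, 3: 2, 4: 3}  # low, medium, high, critical
    print(f"DSI (open) = {defect_severity_index(open_defects):.1f}")  # 3.0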

Figure 11 Defect severity index trend, open status (1 low, 1 medium, 2 high and 3 critical open defects; DSI for open status = 3.0)



4.1.2. Performance Testing Metrics

Performance scripting productivity: This quantifies the productivity of writing the script used for performance testing. An example is shown in figure 12.

Performance scripting productivity = total no. of operations performed / effort in hours

Table 6 Types of operations

Operation performed             Total
No. of clicks                   20
No. of checking points          7
No. of input parameters         8
No. of relationship patterns    10
Total operations                45

Total effort for writing the script = 9 hours

Performance scripting productivity = 45/9 = 5 operations/hour

Figure 12 Performance scripting productivity trend (2.6, 3.9, 3.7, 4.3 and 5 operations/hour for Mar/17–Jul/17)

Performance execution summary: This represents the status of performance test cases, i.e. whether they failed, succeeded or were not run.

Performance execution data, client side: This metric observes the client side keenly and provides data such as:

No. of active users

No. of transactions per minute

Response time

Average time spent

Errors per minute

Entries per minute

Performance execution data, server side: This metric observes the server side keenly and provides data such as:

CPU usage

Total connections per minute

Data transfer per minute

Memory usage



Performance test efficiency (PTE): This determines how efficient the testing team is at identifying performance defects. It can also be defined as the percentage of requirements met at the release of the product.

PTE = [demands met during PT / (demands met during PT + demands not met after signoff of PT)] * 100

The data required for quantification of this metric includes:

Average no. of transactions per minute

Average response time

Average time spent

Errors per minute

Entries per minute

Figure 13 Performance test efficiency trend (90%, 96% and 83% for Apr/17–Jun/17)

For example, if

Demands met during PT = 5

Demands not met after signoff of PT = 1

PTE = [5/(5+1)]*100 = [5/6]*100 = 83%
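The same computation in code, as a minimal sketch following the worked example above; the parameter names are illustrative.

    # Minimal sketch of Performance Test Efficiency (PTE), following the
    # worked example: 5 demands met during PT, 1 not met after signoff.
    def performance_test_efficiency(met_during_pt, not_met_after_signoff):
        """PTE = met / (met + not met) * 100."""
        return 100.0 * met_during_pt / (met_during_pt + not_met_after_signoff)

    print(f"PTE = {performance_test_efficiency(5, 1):.0f}%")  # PTE = 83%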

Performance severity index: This represents the quality of the software, both while under test and at the time of release. The decision to release the product depends highly upon this metric.

4.1.3. Automation Testing Metrics

Automation scripting productivity (ASP): This quantifies the productivity of writing the automation script. An example is shown in figure 14.

Automation scripting productivity = total no. of operations performed / effort in hours

Examples of operations are as follows:

Number of clicks

Number of checking points

Number of input parameters

Number of relationship patterns



Table 7 Types of operations

Operation performed             Total
No. of clicks                   20
No. of checking points          5
No. of input parameters         5
No. of relationship patterns    10
Total operations                40

Total effort for writing the script = 10 hours

Automation scripting productivity = 40/10 = 4 operations/hour

Figure 14 Automation scripting productivity trend (2.8, 3.1 and 4 operations/hour for Apr/17–Jun/17)

Automation test execution productivity metrics: This determines the average number of automated tests executed per day, i.e. executions per day.

Automation test execution productivity = [total no. of test cases executed (ATE) / execution effort in hours] * 8 (assuming an 8-hour working day, as in the manual version of this metric)

where

ATE = BTC + [T(1)*1] + [T(0.66)*0.66] + [T(0.33)*0.33]

BTC = no. of test cases executed at least once

T(1) = no. of test cases retested with 71% to 100% of total test case steps

T(0.66) = no. of test cases retested with 41% to 70% of total test case steps

T(0.33) = no. of test cases retested with 1% to 40% of total test case steps

Automation coverage: This metric tells us how many manual test cases have been automated [13].

Automation coverage = [no. of manual test cases automated / total no. of manual test cases] * 100

For example, if there are 500 manual test cases in total, of which 400 have been automated, then automation coverage is [400/500]*100 = 80%.

Cost comparison metrics: This metric provides an estimate of the cost difference between manual testing and automation testing [14, 15].

Cost (manual) = execution effort in hours × billing rate

Cost (automation) = tool purchase cost (a one-time investment) + maintenance cost + script development cost + (execution effort in hours × billing rate)
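The break-even point between the two approaches can be estimated with a short script. Below is a minimal sketch of this cost model, interpreting cost as effort multiplied by an hourly billing rate; all of the numbers (tool cost, rates, hours) are invented purely for illustration and do not come from the paper.

    # Minimal sketch of the manual-vs-automation cost comparison.
    # All numbers below (rates, hours, tool cost) are invented examples.
    def manual_cost(effort_hours, billing_rate):
        return effort_hours * billing_rate

    def automation_cost(tool_cost, maintenance, script_dev, effort_hours, billing_rate):
        # Tool purchase is a one-time investment; the rest recurs per cycle.
        return tool_cost + maintenance + script_dev + effort_hours * billing_rate

    # Example: 100 hours of manual execution vs 20 hours automated per cycle.
    print(manual_cost(100, 50))                      # 5000
    print(automation_cost(3000, 500, 1500, 20, 50))  # 6000 (first cycle)

After the first cycle, the one-time tool cost drops out, which is where automation typically starts to pay off.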



4.1.4. Common Testing Metrics

Effort variance (EV): This metric gives the percentage deviation between estimated effort and actual effort.

EV = [(actual effort – estimated effort) / estimated effort] * 100

Table 8 Effort variance values

Project name    Actual effort    Estimated effort    Effort variance (%)
Project 1       106              100                 6
Project 2       79               85                  -7.1
Project 3       100              106                 -5.6
Project 4       200              191                 4.5

Figure 15 Effort Variance trend

Schedule variance (SV): This metric gives the percentage deviation between the number of days estimated for completion of a process and the actual time taken.

SV = [(actual no. of days – estimated no. of days) / estimated no. of days] * 100

Table 9 Schedule variance values

Project name    Actual time    Estimated time    Schedule variance (%)
Project 1       119            111               +6.3
Project 2       153            165               -4.9
Project 3       100            95                +5
Project 4       200            215               -7.5
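Both variances share the same shape, so a single helper covers them. A minimal sketch, using the Project 1 effort figures from Table 8 and the Project 3 schedule figures from Table 9 (names are illustrative):

    # Minimal sketch of effort and schedule variance: both are
    # (actual - estimated) / estimated * 100.
    def variance_pct(actual, estimated):
        return 100.0 * (actual - estimated) / estimated

    print(f"Effort variance:   {variance_pct(106, 100):+.1f}%")  # +6.0%
    print(f"Schedule variance: {variance_pct(100, 95):+.1f}%")   # +5.3%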

Figure 16 Schedule Variance trend



Scope variance (SC): This metric gives the percentage deviation of the scope relative to previous releases.

SC = [(total scope – previous scope) / previous scope] * 100

where

Total scope = previous scope + new scope, if scope increases

Total scope = previous scope – new scope, if scope decreases

Defect aging: This metric gives an indication of the average time taken to fix an identified defect. Its unit can be either days or hours.

Defect aging = time resolved – time created

Defect fix retest: This evaluates the efficiency of retesting previously fixed defects to confirm that they now work properly.

Current quality ratio: This metric tells us about the efficient functioning of the process.

Current quality ratio = total no. of test cases without any defects / total number of test procedures

Error discovery rate: This gives the number of defects identified per test process executed.

Error discovery rate = total number of defects identified / number of test processes executed

Defect density: This metric is calculated in terms of the lines of code of the software [12]. It can be defined as the number of defects identified per thousand lines of code (KLOC).

Defect density = total number of defects / KLOC

or

Defect density = total number of defects / no. of modules (components)

Quality of fixes: This metric evaluates the effect of fixing the software on the functionality of previously working processes, i.e. whether that functionality is adversely affected or not.

Quality of fixes = total no. of defects reopened / total no. of defects fixed

Defect removal efficiency / defect gap analysis: This metric quantifies the ability of the development team to remove an identified valid defect [13, 14] and prevent it from reaching the customer.

Defect removal efficiency = [total number of valid defects resolved by the testing team / (total defects identified during the testing cycle – total no. of invalid defects)] * 100

For example, in a testing cycle 200 defects were observed, of which 40 were found to be invalid. The team was able to resolve 120 of the remaining defects. So the defect removal efficiency is [120/(200 – 40)]*100 = 75%.
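A minimal sketch of the defect removal efficiency computation, reproducing the 75% example; the names are illustrative.

    # Minimal sketch of Defect Removal Efficiency (DRE), following the
    # worked example: 200 reported defects, 40 invalid, 120 resolved.
    def defect_removal_efficiency(resolved, total_reported, invalid):
        """DRE = resolved / (reported - invalid) * 100."""
        return 100.0 * resolved / (total_reported - invalid)

    print(f"DRE = {defect_removal_efficiency(120, 200, 40):.0f}%")  # DRE = 75%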

Problem reports: This quantifies the prioritized problems of the software, i.e. some problems need to be resolved quickly whereas others can be ignored for the time being.

4.2. Testing Metrics Classification II

In another way, metrics can also be categorised as project, process and product metrics. These are:

Project Metrics

Product Metrics

Process Metrics


4.2.1. Project Metrics

Software project metrics are used to adapt the project workflow and technical activities, to avoid development schedule delays, to mitigate potential risks, and to assess product quality on an ongoing basis. These include reliability, availability, usability, customer reported defects, productivity, rate of requirements change, estimation accuracy, cost, scope, risk etc. Metrics not defined before are described below:

Requirement stability index (RSI): This metric evaluates how the magnitude and impact of requirement changes affect a project.

RSI = [(no. of changed + no. of deleted + no. of added requirements) / total no. of initial requirements] * 100
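A minimal sketch of the RSI formula as defined above; the requirement counts are invented for illustration.

    # Minimal sketch of the Requirement Stability Index (RSI) as defined
    # above. The requirement counts are invented examples.
    def requirement_stability_index(changed, deleted, added, initial_total):
        """RSI = (changed + deleted + added) / initial requirements * 100."""
        return 100.0 * (changed + deleted + added) / initial_total

    print(f"RSI = {requirement_stability_index(5, 2, 3, 50):.0f}%")  # RSI = 20%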

Customer reported defects: These are defined as defect reports per customer-month.

Schedule variance for a phase: This is defined as the deviation between the planned and actual schedule for the phases within a project.

SV for a phase = [(actual calendar days for a phase – estimated calendar days for a phase + start variance for a phase) / estimated calendar days for a phase] * 100

Effort variance for a phase: This is defined as the deviation between planned and actual effort for the various phases within a project.

EV for a phase = [(actual effort for a phase – estimated effort for a phase) / estimated effort for a phase] * 100

Schedule variance

Size variance

Productivity (for test case preparation)

Productivity (for test case execution)

Productivity (defect fixation)

Productivity (defect detection)

4.2.2. Process Metrics

These metrics give us an indication of whether the process appears to be "working normally" or not. They allow changes to be made while there is still a chance to have an impact on the project, and they are useful during the development and maintenance process for identifying problems and areas for improvement. These include reliability growth pattern, pattern of defects found (arrivals) during testing, responsiveness of fixing, fix quality etc. Metrics not defined above are described below:

Cost of quality: This is a measure, in terms of money, of quality performance in an organization.

Cost of quality = [(review + testing + verification review + verification testing + QA + configuration management + management + training + rework review + rework testing) / total effort] * 100

Cost of poor quality: This is defined as the cost of implementing imperfect processes and products.

Cost of poor quality = [rework effort / total effort] * 100

Test cases passed: This metric determines what percentage of executed test cases passed.

Test cases passed = (no. of test cases passed / total no. of test cases executed) * 100


Test cases failed: This metric determines what percentage of executed test cases failed.

Test cases failed = (no. of test cases failed / total no. of test cases executed) * 100

Test cases blocked: This metric determines what percentage of executed test cases were blocked.

Test cases blocked = (no. of test cases blocked / total no. of test cases executed) * 100
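Since the three ratios differ only in the numerator, a single helper covers them. A minimal sketch with invented counts, loosely echoing the proportions of Figure 5:

    # Minimal sketch of the passed/failed/blocked percentages.
    # The counts (55/25/20 of 100 executed) are invented examples.
    def status_pct(count, total_executed):
        return 100.0 * count / total_executed

    executed = 100
    for status, count in {"passed": 55, "failed": 25, "blocked": 20}.items():
        print(f"Test cases {status}: {status_pct(count, executed):.0f}%")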

Test Efficiency

Test Design Coverage

Test Execution Productivity

Test Case Preparation Productivity

Review Efficiency

Test Coverage productivity

4.2.3. Product Metrics

Software product metrics describe the characteristics of the product delivered through project execution. These include reliability, availability, usability, customer reported defects, productivity etc. Metrics not defined before are described below:

Defect leakage: This is used to review the efficiency of the testing process before UAT. Defect leakage is the metric used to identify the efficiency of QA testing, i.e. how many defects were missed or slipped during QA testing.

Defect leakage = (no. of defects found in UAT / no. of defects found in QA testing) * 100

Suppose that during development and QA testing we identified 100 defects, and that after QA testing, during alpha and beta testing, the end user/client identified 40 defects which could have been identified during the QA testing phase. Then defect leakage = (40/100) * 100 = 40%.
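The example above in code, as a minimal sketch; names are illustrative.

    # Minimal sketch of defect leakage, using the example figures:
    # 100 defects found in QA, 40 more found by users in UAT/alpha/beta.
    def defect_leakage(uat_defects, qa_defects):
        """Defect leakage = (UAT defects / QA defects) * 100."""
        return 100.0 * uat_defects / qa_defects

    print(f"Defect leakage = {defect_leakage(40, 100):.0f}%")  # 40%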

Review efficiency: This is defined as the efficiency in detecting review defects in the verification stage.

Review efficiency = (number of defects caught in review / total number of defects caught) * 100

Residual defect density = (total number of defects found by the customer / total number of defects, including customer-found defects) * 100

Defect Removal Effectiveness

Quality Of Fixes

Error Discovery Rate

Defect Density

Defect Removal Efficiency

Performance Execution Data Client Side

Performance Execution Data Server Side


5. TESTING METRICS IN THE VIEW OF PRODUCT, PROCESS AND PROJECT IN ADDITION TO THE MANUAL, AUTOMATION, PERFORMANCE AND COMMON TESTING METRICS

We have seen the classification of testing metrics in two ways. In this step, we combine the two views. First, we classify the considered metrics into project, process and product metrics. Second, we identify the manual, automation, performance and common metrics within each class of the previous classification.

Table 10 Testing metrics in the view of process, project and product, in addition to common, manual, automation and performance

PROJECT METRICS

  Common testing metrics: Schedule Variance; Effort Variance; Size Variance; Requirement Stability Index; Schedule Variance for a Phase; Effort Variance for a Phase

  Manual testing metrics: Productivity (for Test Case Preparation); Productivity (for Test Case Execution); Productivity (Defect Fixation); Productivity (Defect Detection)

PROCESS METRICS

  Automation testing metrics: Cost of Poor Quality; Cost of Quality

  Manual testing metrics: Test Efficiency; Test Design Coverage; Test Execution Productivity; Test Case Preparation Productivity

  Performance testing metrics: Review Efficiency; Test Cases Passed; Test Cases Failed; Test Cases Blocked; Test Coverage Productivity

  Common testing metrics: Residual Defect Density; Defect Removal Effectiveness; Quality of Fixes

PRODUCT METRICS

  Common testing metrics: Error Discovery Rate; Defect Density; Defect Leakage; Defect Removal Efficiency

  Performance testing metrics: Performance Execution Data Client Side; Performance Execution Data Server Side

In table 10, we can see that the process metrics include all four types of metrics, i.e. manual, automation, performance and common metrics. The percentage of each type is shown with the help of a pie chart in fig. 17.


Figure 17 Distribution of various types of process metrics (manual 29%, automation 17%, performance 36%, common 21%)

The project metrics include only manual and common testing metrics. The percentage of each type is shown with the help of a pie chart in fig. 18.

Figure 18 Distribution of various types of project metrics (manual 40%, common 60%)

Figure 19 Distribution of various types of process metrics, bar chart (automation 16%, common 8%, manual 33%, performance 42%)

Figure 20 Distribution of various types of project metrics, bar chart (common 60%, manual 40%)



The product metrics include common and performance testing metrics. The percentage of each type is shown with the help of a pie chart in fig. 21.

Figure 21 Distribution of various types of product metrics (common 67%, performance 33%)

6. CONCLUSIONS

Testing metrics are a foundation for testing effort estimation. If we choose appropriate testing metrics to measure the effort, the chance of failure is reduced. Automation testing should be preferred only if it has a chance of decreasing the manual effort. The process metrics include all types of metrics, but that does not mean the impact of the project and product metrics can be ignored. Only a balance between all types of metrics gives the best effect. Testing metrics can be explored further to find the optimal balance between them.

REFERENCES

[1] R. M. Sharma. Quantitative Analysis of Automation and Manual Testing. International Journal of Engineering and Innovative Technology (IJEIT), 4(1), July 2014, ISSN: 2277-3754, pp. 252-257.

[2] Premal B. Nirpal, Karbhari Kale. A Brief Overview of Software Testing Metrics. International Journal on Computer Science and Engineering (IJCSE), 3(1), Jan 2011, ISSN: 0975-3397, pp. 204-211.

[3] N. Thapar, K. Mishra, Anil. Review on Software Testing Metrics. International Journal of Engineering and Computer Science, 3(6), June 2014, ISSN: 2319-7242, pp. 6742-6745, www.ijecs.in.

[4] Höfer, A. and Tichy, W. F. Status of Empirical Research in Software Engineering. In Empirical Software Engineering Issues, volume 4336/2007, Springer, 2007, pp. 10-19.



[5] Aggarwal, K. K. and Singh, Yogesh. Software Engineering Programs Documentation Operating Procedures (Second Edition). New Age International Publishers, 2005.

[6] Quadri, S. M. K. and Farooq, S. U. Software Testing – Goals, Principles, and Limitations. International Journal of Computer Applications (0975-8887), 6(9), September 2010.

[7] Munson, J. C. Software Engineering Measurement. CRC Press, Inc., Boca Raton, FL, USA, 2003.

[8] Maurya, V. N., Kumar, Rajender. Analytical Study on Manual vs. Automated Testing Using with Simplistic Cost Model. International Journal of Electronics and Electrical Engineering, 2(1), January 2012, ISSN: 2277-7040, pp. 24-35.

[9] Premal B. Nirpal, Karbhari Kale. A Brief Overview of Software Testing Metrics. International Journal on Computer Science and Engineering (IJCSE), 3(1), Jan 2011, ISSN: 0975-3397, pp. 204-211.

[10] Riaz S. S. Ahamed. Studying the Feasibility and Importance of Software Testing: An Analysis. International Journal of Engineering Science and Technology, 1(3), 2009, ISSN: 0975-5462, pp. 119-128.

[11] Sheikh Umar Farooq, S. M. K. Quadri, Nesar Ahmad. Software Measurements and Metrics: Role in Effective Software Testing. International Journal of Engineering Science and Technology (IJEST), 3(1), Jan 2011, ISSN: 0975-5462, pp. 671-680.

[12] http://www.softwaretestinghelp.com/defect-density/

[13] http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470404159.html

[14] http://www.equinox.co.nz/blog/software-testing-metrics-defects-removal-efficiency/

[15] Arvinder Kaur, Bharti Suri, Abhilasha Sharma. Software Testing Product Metrics – A Survey. Proceedings of the National Conference on Challenges & Opportunities in Information Technology (COIT-2007), RIMT-IET, Mandi Gobindgarh, March 23, 2007.

[16] Nagaraju Mamillapally, Trivikram Mulukutla, Anitha Bojja. A Comparative Study of Redesigned Web Site Based on Complexity Metrics. International Journal of Computer Engineering and Technology (IJCET), 4(3), 2013, pp. 353-358.

[17] Sanjay Kumar Yadav, Madhavi Singh, Diwakar Singh. A Framework for Efficient Routing Protocol Metrics for Wireless Mesh Network. International Journal of Computer Engineering and Technology (IJCET), 4(5), 2013, pp. 01-08.

[18] Dr. Leena Jain, Satinderjit Singh. A Journey from Cognitive Metrics to Cognitive Computers. International Journal of Advanced Research in Engineering and Technology (IJARET), 4(4), 2013, pp. 60-66.