Software Quality Metrics Methodology
Kiran Lata Gangwar, Tanmi Kapoor, M.Tech (S.E.)
Research Paper
IEEE Standard for a Software Quality Metrics Methodology
Sponsor
Software Engineering Standards Committee of the IEEE Computer Society
Approved 8 December 1998
IEEE-SA Standards Board
IEEE Software Metrics Methodology
1. Establish software quality requirements
1.1 Identify a list of possible quality requirements
Use: organizational experience, required standards, regulations, and laws
Consider: contractual requirements, cost or schedule constraints, warranties, customer metrics requirements, and organizational self-interest
1.2 Determine the list of quality requirements:
Survey all involved parties
Create the list of quality requirements
1.3 Quantify each quality factor: For each quality factor, assign one or more direct metrics to represent the quality factor, and assign direct metric values to serve as quantitative requirements for that quality factor
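Step 1.3 can be sketched as a small data structure: each quality factor maps to one or more direct metrics, each with a target value serving as the quantitative requirement. The factor names, metric names, and targets below are illustrative, not taken from the standard.

```python
# Hypothetical sketch of quantified quality requirements.
# Each quality factor is represented by direct metrics with target
# values (all names and numbers here are illustrative assumptions).
quality_requirements = {
    "reliability": [
        {"metric": "mean_time_to_failure_hours", "target": 100.0},
        {"metric": "fault_density_per_kloc", "target": 0.5},
    ],
    "maintainability": [
        {"metric": "mean_time_to_repair_hours", "target": 4.0},
    ],
}

for factor, metrics in quality_requirements.items():
    for m in metrics:
        print(f"{factor}: {m['metric']} target = {m['target']}")
```

A structure like this makes the requirements machine-checkable later, when measured values are compared against the targets.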
2. Identify software quality metrics
3. Implement the software quality metrics
3.1 Define the data collection procedures
Identify tools
Describe data storage procedures
Establish a traceability matrix
Identify the organizational entities that will participate in data collection and those responsible for monitoring it
Describe the training and experience required for data collection and the training process for the personnel involved
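The traceability matrix mentioned above links each metric to the raw data items needed to compute it, so every collected item can be traced back to a purpose. A minimal sketch, with illustrative metric and data-item names:

```python
# Illustrative traceability matrix: metric -> required data items.
# The metric and data-item names are assumptions for the example.
traceability = {
    "fault_density_per_kloc": ["fault_count", "kloc"],
    "mean_time_to_failure_hours": ["operating_hours", "failure_count"],
}

def data_items_for(metric):
    """Return the data items that must be collected for a metric."""
    return traceability.get(metric, [])
```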
3.2 Prototype the measurement process
Test the data collection and metrics computation procedures on selected software that will act as a prototype
Select samples that are similar to the project(s) on which the metrics will be used
Examine the cost of the measurement process for the prototype to verify or improve the cost analysis
Use the results collected from the prototype to improve the metric descriptions and the descriptions of data items
3.3 Collect the data and compute the metric values
Using the formats in Table, collect and store data in the project metrics database at the appropriate time in the life cycle
Check the data for accuracy and proper unit of measure
Monitor the data collection
Check for uniformity of data if more than one person is collecting it
Compute the metric values from the collected data
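The computation step of 3.3 can be sketched as follows; the fault-density metric, component names, and values are illustrative, not prescribed by the standard.

```python
def fault_density(fault_count, kloc):
    """Direct metric sketch: faults per thousand lines of code."""
    if kloc <= 0:
        raise ValueError("kloc must be positive")
    return fault_count / kloc

# Illustrative records as they might be stored in the project
# metrics database (component names and numbers are made up).
records = [
    {"component": "parser", "fault_count": 12, "kloc": 8.0},
    {"component": "scheduler", "fault_count": 3, "kloc": 6.0},
]

metric_values = {
    r["component"]: fault_density(r["fault_count"], r["kloc"])
    for r in records
}
print(metric_values)
```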
4. Analyze the software metrics results
4.1 Interpret the results
Interpret and record the results
Analyze the differences between the collected metric data and the target values
Investigate significant differences
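The analysis in 4.1 amounts to comparing each collected metric value with its target and flagging significant differences for investigation. A minimal sketch, assuming a relative-tolerance notion of "significant" (the 10% threshold and all values are illustrative):

```python
def flag_deviations(measured, targets, tolerance=0.10):
    """Flag metrics whose measured value deviates from the target by
    more than the relative tolerance (threshold is an assumption)."""
    flagged = {}
    for name, target in targets.items():
        value = measured[name]
        if abs(value - target) / target > tolerance:
            flagged[name] = value - target  # record the difference
    return flagged

measured = {"fault_density_per_kloc": 0.9, "mttr_hours": 4.1}
targets  = {"fault_density_per_kloc": 0.5, "mttr_hours": 4.0}
print(flag_deviations(measured, targets))
```

Here fault density is 80% over its target and gets flagged, while mean time to repair is within tolerance.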
4.2 Identify software quality
Interpret and record the results
Unacceptable quality may be manifested as
excessive complexity, inadequate documentation, lack of traceability, or other undesirable attributes
4.3 Make software quality predictions
Use validated metrics during development to make predictions of direct metric values
Make predictions for software components and process steps
Analyze in detail software components and process steps whose predicted direct metric values deviate from the target values
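The prediction step of 4.3 can be sketched with a simple linear model whose coefficients would come from validating the predictive metric against historical data. The choice of a linear model, the coefficients, and the target value are all illustrative assumptions.

```python
# Assumed: a validated predictive metric (e.g., a complexity measure)
# predicts a direct metric value (fault density) via a linear model
# fitted during validation. Coefficients and target are made up.
SLOPE, INTERCEPT = 0.05, 0.1
TARGET_FAULT_DENSITY = 0.5  # quantitative requirement (upper limit)

def predicted_fault_density(complexity):
    return INTERCEPT + SLOPE * complexity

def needs_detailed_analysis(complexity):
    """Components whose predicted direct metric value deviates above
    the target are analyzed in detail."""
    return predicted_fault_density(complexity) > TARGET_FAULT_DENSITY
```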
4.4 Ensure compliance with requirements
Use direct metrics to ensure compliance of software products with quality requirements during system and acceptance testing
Use direct metrics for software components and process steps. Compare these metric values with target values of the direct metrics
Classify software components and process steps whose measurements deviate from the target values as noncompliant
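The compliance check of 4.4 can be sketched as follows. The example treats each target as an upper limit (lower is better, as with fault density); that direction, and all names and numbers, are assumptions for illustration.

```python
# Illustrative target values, interpreted as upper limits.
TARGETS = {"fault_density_per_kloc": 0.5}

def classify(component_metrics):
    """Classify each component as compliant or noncompliant by
    comparing its direct metric values with the target values."""
    status = {}
    for comp, metrics in component_metrics.items():
        ok = all(value <= TARGETS[name] for name, value in metrics.items())
        status[comp] = "compliant" if ok else "noncompliant"
    return status

status = classify({
    "parser": {"fault_density_per_kloc": 1.5},
    "scheduler": {"fault_density_per_kloc": 0.5},
})
print(status)
```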
5. Validate the software quality metrics
5.1 Apply the validation methodology
The purpose of metrics validation is to identify both product and process metrics that can predict specified quality factor values, which are quantitative representations of quality requirements
Metrics shall indicate whether quality requirements have been achieved or are likely to be achieved in the future
5.2 Apply validity criteria
For the purpose of assessing whether a metric is valid, the following thresholds shall be designated:
V: square of the linear correlation coefficient
B: rank correlation coefficient
A: prediction error
α: confidence level
P: success rate
a) Correlation
b) Tracking
c) Consistency
d) Predictability
e) Discriminative power
f) Reliability
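The correlation criterion (a) can be sketched in plain Python: compute the linear correlation coefficient between paired samples of the predictive metric and the quality factor, square it to obtain V, and compare against the designated threshold. The sample values and the 0.7 threshold are illustrative assumptions, not values from the standard.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Linear correlation coefficient, computed without libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Paired samples drawn from the metrics database: predictive metric
# values and the corresponding quality factor values (illustrative).
metric = [2.0, 4.0, 6.0, 8.0, 10.0]
factor = [0.3, 0.5, 0.6, 0.9, 1.0]

V = pearson_r(metric, factor) ** 2  # criterion (a): correlation
V_THRESHOLD = 0.7                   # designated threshold (assumed)
print(f"V = {V:.3f}, valid: {V >= V_THRESHOLD}")
```

The other criteria (tracking, consistency, predictability, discriminative power, reliability) would be checked the same way, each against its own designated threshold.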
5.3 Validation procedure
5.3.1 Identify the quality factors sample
A sample of quality factors shall be drawn from the metrics database
5.3.2 Identify the metrics sample
A sample from the same domain (e.g., same software components), as used in 5.3.1, shall be drawn from the metrics database
5.3.3 Perform a statistical analysis
The analysis described in 5.2 shall be performed. Before a metric is used to evaluate the quality of a product or process, it shall be validated against the criteria described in 5.2. If a metric does not pass all of the validity tests, it shall only be used according to the criteria prescribed by those tests
5.3.4 Document the results
Documented results shall include the direct metric, predictive metric, validation criteria, and numerical results, as a minimum
5.3.5 Revalidate the metrics
A validated metric may not necessarily be valid in other environments or future applications. Therefore, a predictive metric shall be revalidated before it is used for another environment or application
5.3.6 Evaluate the stability of the environment
Metrics validation shall be undertaken in a stable development environment (i.e., one where the design language, implementation language, and project development tools do not change over the life of the project in which validation is performed)