Software Quality Management Lecture Notes


A.V.C.COLLEGE OF ENGINEERING

MANNAMPANDAL, MAYILADUTHURAI-609 305

COURSE MATERIAL

FOR THE SUBJECT OF

SOFTWARE QUALITY MANAGEMENT

SUB NAME : SOFTWARE QUALITY MANAGEMENT

SEM : VIII

DEPARTMENT : COMPUTER SCIENCE AND ENGINEERING

ACADEMIC YEAR : 2012-2013

NAME OF THE FACULTY : PARVATHI.M

DESIGNATION : Asst. Professor


A.V.C.COLLEGE OF ENGINEERING

MANNAMPANDAL, MAYILADUTHURAI-609 305

Syllabus

CS1023 – SOFTWARE QUALITY MANAGEMENT    L T P C : 3 0 0 3

UNIT I FUNDAMENTALS OF SOFTWARE QUALITY 9
Software quality − Hierarchical models of Boehm and McCall − Quality measurement − Metrics measurement and analysis − Gilb's approach − GQM model.

UNIT II SOFTWARE QUALITY ASSURANCE 9
Quality tasks − SQA plan − Teams − Characteristics − Implementation − Documentation − Reviews and audits.

UNIT III QUALITY CONTROL AND RELIABILITY 9
Tools for quality − Ishikawa's basic tools − CASE tools − Defect prevention and removal − Reliability models − Rayleigh model − Reliability growth models for quality assessment.

UNIT IV QUALITY MANAGEMENT SYSTEM 9
Elements of QMS − Rayleigh model framework − Reliability growth models for QMS − Complexity metrics and models − Customer satisfaction analysis.

UNIT V QUALITY STANDARDS 9
Need for standards − ISO 9000 series − ISO 9000-3 for software development − CMM and CMMI − Six sigma concepts.

Total: 45

TEXT BOOKS
1. Allan C. Gillies, "Software Quality: Theory and Management", Thomson Learning, 2003.


2. Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Pearson Education (Singapore) Pvt. Ltd., 2002.

REFERENCES
1. Norman E. Fenton and Shari Lawrence Pfleeger, "Software Metrics", Thomson, 2003.
2. Mordechai Ben-Menachem and Garry S. Marliss, "Software Quality", Thomson Asia Pvt. Ltd., 2003.
3. ISO 9000-3, "Notes for the Application of the ISO 9001 Standard to Software Development".
4. Kamna Malik and Praveen Choudry, "Software Quality: A Practitioner Approach", PHI, 2000.


UNIT 1 FUNDAMENTALS OF SOFTWARE QUALITY

Contents: Views of Quality · Hierarchical Modeling · Boehm and McCall's Models · Quality Criteria · Interrelation · Measuring Quality · Quality Metrics · Overall Measures of Quality

Quality: the ability of the product/service to fulfill its function. Quality is hard to define, impossible to measure, easy to recognize in its absence, and transparent when present.

Definition of Quality:

Characteristics of Quality:

Quality is not absolute

Quality is multidimensional

Quality is subject to constraints

Quality is about acceptable compromises

Quality criteria are not independent, but interact with each other, causing conflicts.


Software Quality: Kitchenham (1989b) refers to software quality as "fitness for needs" and claims quality involves matching expectations.

Two features of a piece of quality software:

-> Conformance to its specification
-> Fitness for its intended purpose.

The Department of Defense (DOD, 1985) in the USA defines software quality as “the degree to which the attributes of the software enable it to perform its intended end use”.

Software product quality

Product quality
o Conformance to requirements or program specification (related to reliability)
o Scalability
o Correctness
o Completeness (absence of bugs)
o Fault-tolerance
o Extensibility
o Maintainability
o Documentation

Software was particularly problematical for the following reasons:

-> Software has no physical existence
-> The lack of knowledge of client needs at the start
-> The change of client needs over time
-> The rapid rate of change in both hardware and software
-> The high expectations of customers, particularly with respect to adaptability.

Within the software quality area, the need to provide a solution that matches user needs is often considered as “design quality”, whilst ensuring a match to the specification is considered as “manufacturing quality”.

Software Quality Factors

A software quality factor is a non-functional requirement for a software program which is not called up by the customer's contract, but nevertheless is a desirable requirement which enhances the quality of the software program.


Understandability, Completeness, Maintainability, Conciseness, Portability, Consistency, Testability, Usability, Reliability, Structuredness, Efficiency, Security

Views of Quality:

Quality is a multidimensional construct. It may therefore be considered using a polyhedron metaphor. Within this metaphor, a three-dimensional solid represents quality. Each face represents a different aspect of quality such as correctness, reliability, and efficiency.

Quality has been classified according to a number of 'views' or perspectives. These views are often diverse and may conflict with each other. Each view comes from a particular context. The views are generally presented in adversarial pairs, such as users versus designers.

The software project has the following roles

Project manager
Business analyst
Implementation programmer
Quality auditor
End user
Line manager
Project sponsor


In an attempt to classify different and conflicting views of quality, Garvin (1984) has suggested five different views of quality

1. The transcendent view: innate excellence; the classical definition.

2. The product-based view: higher quality means higher cost, greater functionality, and greater care in development.

3. The user-based view: fitness for purpose; very hard to quantify.

4. The manufacturing view: measures quality in terms of conformance; zero defects.

5. The value-based view: provides what the customer requires at an acceptable price.

Quality is determined by people because

· It is people and human organizations who have problems to be solved by computer software
· It is people who define the problems and specify the solutions
· It is still currently people who implement designs and produce code
· It is people who test code


HIERARCHICAL MODELS OF QUALITY:

To compare quality in different situations, both qualitatively and quantitatively, it is necessary to establish a model of quality.

Many models have been suggested for quality; most are hierarchical in nature. A qualitative assessment is generally made, along with a more quantified assessment.

There are two principal models of this type: one by Boehm (1978) and one by McCall (1977). A hierarchical model of software quality is based upon a set of quality criteria, each of which has a set of measures or metrics associated with it.

The issues relating to the criteria of quality are:

-> What criteria of quality should be employed?
-> How do they inter-relate?
-> How may the associated metrics be combined into a meaningful overall measure of quality?

THE HIERARCHICAL MODELS OF BOEHM AND McCALL

THE GE MODEL (McCALL, 1977 AND 1980) / (McCall Model)

-> This model was first proposed by McCall in 1977.

-> It was later revised as the MQ model, and it is aimed at system developers for use during the development process.

-> In an early attempt to bridge the gap between users and developers, the criteria were chosen to reflect users' views as well as developers' priorities.

-> The criteria appear to be technically oriented, but they are described by a series of questions which define them in terms accessible to non-specialist managers.

The three areas addressed by McCall's model (1977):

Product operation: requires that the software can be learnt easily, operated efficiently, and that its results are those required by the users.


Product revision: concerned with error correction and adaptation of the system; this is the most costly part of software development.

Product transition: concerned with moving the software to new environments; its importance is likely to increase as distributed processing spreads and hardware continues to change rapidly.

McCall's criteria of quality defined:

Efficiency is concerned with the use of resources, e.g. processor time and storage. It falls into two categories: execution efficiency and storage efficiency.
Usability is the ease of use of the software.
Integrity is the protection of the program from unauthorized access.
Correctness is the extent to which a program fulfils its specification.
Reliability is its ability not to fail.
Maintainability is the effort required to locate and fix a fault in the program within its operating environment.
Flexibility is the ease of making changes required by changes in the operating environment.
Testability is the ease of testing the program, to ensure that it is error-free and meets its specification.
Portability is the effort required to transfer a program from one environment to another.
Reusability is the ease of reusing software in a different context.
Interoperability is the effort required to couple the system to another system.


The Boehm model (1978)

It is to provide a set of well-defined, well-differentiated characteristics of software quality.

It is hierarchical in nature, but the hierarchy is extended so that the quality criteria are subdivided. According to the uses made of the system, the characteristics are classed as 'general' or 'as-is' utility, and the utilities relate to product operation. There are two levels of actual quality criteria, the intermediate level being further split into primitive characteristics which are amenable to measurement. This model is based upon a much larger set of criteria than McCall's model, but retains the same emphasis on technical criteria.

The two models share a number of common characteristics:

The quality criteria are supposedly based upon the user's view.
The models focus on the parts that designers can more readily analyze.
Hierarchical models cannot be tested or validated; it cannot be shown that the metrics accurately reflect the criteria.
The measurement of overall quality is achieved by a weighted summation of the characteristics.

Boehm talks of modifiability where McCall distinguishes expandability from adaptability, and of documentation where McCall refers to understandability and clarity.

HOW THE QUALITY CRITERIA INTERRELATE

The individual measures of software quality do not provide an overall measure of software quality. The individual measures must be combined, but they may conflict with each other.

Some of these relationships are described below:

Integrity vs. efficiency (inverse): the control of access to data or software requires additional code and processing, leading to longer runtime and additional storage requirements.
Usability vs. efficiency (inverse): improvements in the human/computer interface may significantly increase the amount of code and power required.
Maintainability and testability vs. efficiency (inverse): optimized and compact code is not easy to maintain.
Portability vs. efficiency (inverse): the use of optimized software or system utilities will lead to a decrease in portability.
Flexibility and reusability vs. integrity (inverse): the general, flexible data structures required for flexible and reusable software increase the security and protection problem.
Interoperability vs. integrity (inverse): coupled systems allow more avenues of access to more and different users.
Reusability vs. reliability (inverse): reusable software is required to be general; maintaining accuracy and error tolerance across all cases is difficult.
Maintainability vs. flexibility (direct): maintainable code arises from code that is well structured.
Maintainability vs. reusability (direct): well structured, easily maintainable code is easier to reuse in other programs, either as a library of routines or as code placed directly within another program.
Portability vs. reusability (direct): portable code is likely to be free of environment-specific features.
Correctness vs. efficiency (neutral): the correctness of code, i.e. its conformance to specification, does not influence its efficiency.

MEASURING SOFTWARE QUALITY

Quality measurement, where it is considered at all, is usually expressed in terms of metrics. A software metric is a measurable property which is an indicator of one or more of the quality criteria that we are seeking to measure. As such, there are a number of conditions that a quality metric must meet. It must:

-> Be clearly linked to the quality criterion that it seeks to measure
-> Be sensitive to the different degrees of the criterion
-> Provide objective determination of the criterion that can be mapped onto a suitable scale.

Metrics are not the same as direct measures. Measurement techniques applied to software are more akin to those of the social sciences, where properties are similarly complex and ambiguous. A typical measurable property on which a metric may be based is structuredness. Structuredness at its simplest may be calculated in terms of the average length of code modules within the program.

Page 12: Software quality management lecture notes

SOFTWARE METRICS

Metrics are classified into two types according to whether they are predictive or descriptive. A predictive metric is used to make predictions about the software later in the life cycle; structuredness, for example, is used to predict the maintainability of the software product in use. A descriptive metric describes the state of the software at the time of measurement. Different authors have taken different approaches to metrics. Structuredness is measured by questions such as:

-> Have the rules for transfer of control between modules been followed? (y/n)
-> Are modules limited in size? (y/n)
-> Do all modules have only one exit point? (y/n)
-> Do all modules have only one entry point? (y/n)

A well-structured program will produce positive answers to such questions. McCall's approach is more quantitative, using scores derived from equations such as

structuredness = n01 / ntot

where n01 = number of modules containing one or zero exit points only, and ntot = total number of modules. Generally, in this approach, scores are normalized to a range between 0 and 1, to allow for easier combination and comparison. This appears attractive, but it can lend unjustified credibility to the results obtained unless the relationship is validated and it is determined whether it is linear or more complex in nature.

It is also possible to ask whether the dependence of maintainability on structuredness is identical to that of adaptability or reusability.
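As an illustration of the kind of normalized score described above, the following short Python sketch computes the exit-point fraction n01/ntot for a small list of modules; the module data are invented for the example and are not taken from the notes.

# Illustrative sketch: normalized structuredness score computed as
# n01 / ntot, the fraction of modules with one exit point or none.

def structuredness_score(modules):
    if not modules:
        return 0.0
    n01 = sum(1 for m in modules if m["exit_points"] <= 1)
    return n01 / len(modules)   # normalized to the range 0..1

modules = [
    {"name": "parse_input", "exit_points": 1},
    {"name": "report", "exit_points": 3},
    {"name": "validate", "exit_points": 1},
]
print(round(structuredness_score(modules), 2))   # 0.67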

What makes a good metric?

Seven criteria for a good metric, after Watts (1987)

Objectivity: the results should be free from subjective influences. It must not matter who the measurer is.
Reliability: the results should be precise and repeatable.
Validity: the metric must measure the correct characteristic.
Standardization: the metric must be unambiguous and allow for comparison.

Page 13: Software quality management lecture notes

Comparability: the metric must be comparable with other measures of the same criterion.
Economy: the simpler, and therefore the cheaper, the measure is to use, the better.
Usefulness: the measure must address a need, not simply measure a property for its own sake.

A further important feature is consistency. Automation is also desirable.

Metrics cited in the literature (metrics available for each criterion, after Watts, 1987) depend to a very large extent upon just seven distinct measurable properties: readability, error prediction, error detection, complexity, mean time to failure (MTTF), modularity, and testability.

1. Readability as a measure of usability: readability may be applied to documentation in order to assess how such documentation may assist in the usability of a piece of software.

2. Error prediction as a measure of correctness: this measure depends upon a stable software development environment.

3. Error detection as a measure of correctness

Page 14: Software quality management lecture notes

4. Mean time to failure (MTTF) as a measure of reliability

5. Complexity as a measure of reliability: the assumption underpinning these measures is that as complexity increases, reliability decreases.

6. Complexity as a measure of maintainability: complexity is also taken as indicative of maintainability.

7. Readability of code as a measure of maintainability: readability of code has also been suggested as a measure of maintainability.

8. Modularity as a measure of maintainability: increased modularity is generally assumed to increase maintainability. Four measures have been suggested. Yau and Collofello (1979) measured "stability" as the number of modules affected by program modification. Kentger (1981) defined a four-level hierarchy of module types:

Control modules. Problem-oriented modules. Management modules for abstract data. Realization modules for abstract data.

9. Testability as a measure of maintainability: the ease and effectiveness of testing will have an impact upon the maintainability of a product.

An overall measure of quality

Much of the work in this area has been concerned with simple reduction of a set of scores to a single ‘figure-of-merit’.

Five such methods are detailed by Watts (1987) as part of the MQ approach.

1. Simple scoring: In this method, each criterion is allocated a score. The overall quality is given by the mean of the individual scores.

2. Weighted scoring: This scheme allows the user to weight each criterion according to how important they consider it to be. Each criterion is evaluated to produce a score between 0 and 1. Each score is weighted before summation, and the resulting figure reflects the relative importance of the different factors.

3. Phased weighting factor method: This is an extension of weighted scoring. A weighting is assigned to a group of characteristics before each individual weighting is considered.


The phased weighting factor method:

Product operation weighted mean = 0.660
Product transition weighted mean = 0.633
Overall measure by the PWF method = ((2/3) x 0.660) + ((1/3) x 0.633) = 0.65 (see the code sketch after the list of methods below)

4. The Kepner-Tregoe method (1981): The criteria are divided into 'essential' and 'desirable'. A minimum value is specified for each essential criterion, and any software failing to reach these scores is designated unsuitable. 'Suitable' software is then judged by use of the weighting factor method.

5. The Cologne combination method (Schmitz, 1975): This method is designed with comparative evaluation in mind. Using the chosen criteria, each product is ranked in order.
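The scoring calculations above are easy to express in code. The Python sketch below shows weighted scoring and the phased weighting factor calculation; the criterion names, scores and weights are assumed purely for illustration, while the PWF figures reproduce the 0.65 example given earlier.

# Hedged sketch of weighted scoring and the phased weighting factor (PWF) method.

def weighted_score(scores, weights):
    # Overall quality = sum of weight_i * score_i, with weights summing to 1.
    return sum(weights[c] * s for c, s in scores.items())

scores = {"usability": 0.8, "reliability": 0.6, "efficiency": 0.7}   # assumed scores in 0..1
weights = {"usability": 0.5, "reliability": 0.3, "efficiency": 0.2}  # assumed weights
print(weighted_score(scores, weights))                               # 0.72

# PWF: weight whole groups of criteria first, then combine the group means.
product_operation_mean = 0.660
product_transition_mean = 0.633
overall = (2 / 3) * product_operation_mean + (1 / 3) * product_transition_mean
print(round(overall, 2))                                             # 0.65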

POLARITY PROFILING:

In this scheme, quality is represented by a series of ranges from -3 to +3. The required quality may be represented and compared to the actual quality achieved. It is a common problem amongst software developers that they focus upon particular aspects of quality. When a user complains of poor quality, they tend to improve the product further in these areas. Often the product has already exceeded the user's expectations in these areas, and further improvement does not improve the user's overall view of the quality of the product. This effort is wasted. Worse, the user's needs still have not been met in other critical areas, leading to tensions between the developers and users. Two different outcomes result. In the first case, usability is improved. Unfortunately, reliability and efficiency are still not up to the required standard, and usability was already considered satisfactory. In the second case, improvements in reliability and efficiency are traded for a reduction in adaptability and maintainability, perhaps by 'tweaking' the code. The consequence is that all criteria are now at the required level, resulting in an overall perception of quality and satisfied users.

The purpose of standard

Standards are generally defined in terms of a model of a best practice against which all others are compared.

Standards are used to build better products. They are also used to check whether products conform to a standard.

Types of accreditation

Type: Description
First party: internal monitoring
Second party: external monitoring by a customer
Third party: external monitoring by an independent standards body

Benefits of accreditation

It provides external validation to see whether the investment made in the QMS is being effective.
It gives credibility to the supplier and their quality system.
It allows the supplier to sell to those customers who insist on accreditation as a condition of tender.
It qualifies the supplier to be included in buyers' guides.


THE GOAL QUESTION METRIC APPROACH

The Goal Question Metric (GQM) approach is based upon the assumption that for an organization to measure in a purposeful way it must first specify the goals for itself and its projects, then it must trace those goals to the data that are intended to define those goals operationally, and finally provide a framework for interpreting the data with respect to the stated goals. Thus it is important to make clear, at least in general terms, what informational needs the organization has, so that these needs can be quantified wherever possible, and the quantified information can be analyzed as to whether or not the goals are achieved. The approach was originally defined for evaluating defects for a set of projects in the NASA Goddard Space Flight Center environment.

The result of applying the Goal Question Metric approach is the specification of a measurement system targeting a particular set of issues and a set of rules for the interpretation of the measurement data. The resulting measurement model has three levels:

1. Conceptual level (GOAL): A goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, relative to a particular environment. Objects of measurement are:
Products: artifacts, deliverables and documents that are produced during the system life cycle; e.g., specifications, designs, programs, test suites.
Processes: software-related activities normally associated with time; e.g., specifying, designing, testing, interviewing.
Resources: items used by processes in order to produce their outputs; e.g., personnel, hardware, software, office space.

2. Operational level (QUESTION): A set of questions is used to characterize the way the assessment/achievement of a specific goal is going to be performed, based on some characterizing model. Questions try to characterize the object of measurement (product, process, resource) with respect to a selected quality issue and to determine its quality from the selected viewpoint.

3. Quantitative level (METRIC): A set of data is associated with every question in order to answer it in a quantitative way. The data can be:
Objective: if they depend only on the object that is being measured and not on the viewpoint from which they are taken; e.g., number of versions of a document, staff hours spent on a task, size of a program.
Subjective: if they depend on both the object that is being measured and the viewpoint from which they are taken; e.g., readability of a text, level of user satisfaction.
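To make the three levels concrete, the following Python sketch represents a small GQM model as a nested data structure; the goal, questions and metrics shown are hypothetical examples chosen for illustration, not the ones from the NASA study mentioned above.

# Hypothetical GQM model expressed as data: one goal, its questions, and the
# metrics that answer each question.

gqm_model = {
    "goal": "Characterize the reliability of the mail package from the user's viewpoint",
    "questions": [
        {"question": "What is the current failure rate in the field?",
         "metrics": ["failures per KLOC per month", "mean time to failure"]},       # objective
        {"question": "Is reliability improving from release to release?",
         "metrics": ["defect arrival trend", "severity mix of reported problems"]},
        {"question": "How satisfied are users with stability?",
         "metrics": ["user satisfaction survey score"]},                            # subjective
    ],
}

for q in gqm_model["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))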


The complete Goal Question Metric model is as follows:

THE GOAL QUESTION METRIC PROCESS

A GQM model is developed by identifying a set of quality and/or productivity goals, at corporate, division or project level; e.g., customer satisfaction, on-time delivery, improved performance. From those goals, and based upon models of the object of measurement, we derive questions that define those goals as completely as possible. For example, if the goal is to characterize a software system (e.g., an electronic mail package, a word processor) with respect to a certain set of quality issues (e.g., portability across architectures), then a quality model of the product must be chosen that deals with those issues (e.g., a list of functional features that can be implemented in different architectures). The next step consists in specifying the measures that need to be collected in order to answer those questions, and to track the conformance of products and processes to the goals. After the measures have been specified, we need to develop the data collection mechanisms, including validation and analysis mechanisms.

CONCLUSION

In summary, the Goal Question Metric approach is a mechanism for defining and interpreting operational and measurable software goals. It can be used in isolation or, better, within the context of a more general approach to software quality improvement.

Figure below outlines the basic roles and flows of information for this model.


UNIT II SOFTWARE QUALITY ASSURANCE

Software Quality Assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. It does this by means of audits of the quality management system under which the software system is created. These audits are backed by one or more standards, usually ISO 9000.

It is distinct from software quality control which includes reviewing requirements documents, and software testing. SQA encompasses the entire software development process, which includes processes such as software design, coding, source code control, code reviews, change management, configuration management, and release management. Whereas software quality control is a control of products, software quality assurance is a control of processes.

Software quality assurance is related to the practice of quality assurance in product manufacturing. There are, however, some notable differences between software and a manufactured product. These differences stem from the fact that the manufactured product is physical and can be seen whereas the software product is not visible. Therefore its function, benefit and costs are not as easily measured. What's more, when a manufactured product rolls off the assembly line, it is essentially a complete, finished product, whereas software is never finished. Software lives, grows, evolves, and metamorphoses, unlike its tangible counterparts. Therefore, the processes and methods to manage, monitor, and measure its ongoing quality are as fluid and sometimes elusive as are the defects that they are meant to keep in check.

Tasks

Quality Engineering: the activity consisting of the cohesive collection of all tasks that are primarily performed to ensure and help continually improve the quality of an endeavor's process and work products.

Goals

The typical goals of quality engineering are to:

Ensure that the necessary levels of quality are achieved.
Make the achievement of quality predictable and repeatable.
Minimize endeavor, organizational, and personal risks due to poor quality.

Objectives

The typical objectives of quality engineering are to:

Define what quality means for the endeavor in terms of a quality model defining quality factors and quality subfactors.


Plan the quality tasks including helping the requirements team determine and specify the quality requirements and associated quality factors (attributes) and quality metrics.

Assure the quality of the process used by the endeavor. Thus, quality assurance is concerned with fulfilling the quality requirements and achieving the quality factors of the endeavor’s process. “Are we building the products right?”

Control the quality of the work products delivered during the endeavor. Thus, quality control is concerned with fulfilling the quality requirements and achieving the quality factors of the endeavor’s work products. “Are we building the right products?”

Examples

Examples of quality engineering based on scope include:

Application Quality Engineering
Business Quality Engineering
Contact Center Quality Engineering
Data Center Quality Engineering

Preconditions

Quality engineering typically may begin when the following preconditions hold:

The endeavor is started. The quality team is initially staffed and trained in quality engineering.

Completion Criteria

Quality engineering is typically complete when the following post conditions hold:

The endeavor is complete.

Tasks

The following diagram illustrates the relationships between the quality tasks:


Plan

The purpose of this Software Quality Assurance Plan (SQAP) is to define the techniques, procedures, and methodologies that will be used at the Center for Space Research (CSR) to assure timely delivery of the software that meets specified requirements within project resources.

The use of this plan will help assure the following: (1) That software development, evaluation and acceptance standards are developed, documented and followed. (2) That the results of software quality reviews and audits will be given to appropriate management within CSR. This provides feedback as to how well the development effort is conforming to various CSR development standards. (3) That test results adhere to acceptance standards.


Teams

The SQA team shall check that quality is maintained during the project and that the proper quality procedures are being followed; discovered problems are reported to the Project Management. The members of the project team must work according to the part(s) of the SQAP that apply to their specific task.

The tasks of the SQA team

For the first phase of the project (UR), the SQA team must see to it that the following documents are properly reviewed internally before they are submitted for an external review.

The URD

The SQA team must check whether the URD:

o contains a general description of the software that has to be developed;
o contains requirements on the software to be developed as stated by the client;
o contains constraints on the software to be developed;
o contains a priority list of the requirements.

The SPMP

The SQA team must check whether the goals of the project are clearly described. A life cycle approach for the project must be defined. The SQA team must ensure that the SPMP is realistic by checking:

o the assumptions made during the planning of the project;
o restrictions with respect to the plan (e.g. availability of members);
o external problems (e.g. delivery of PCs, interface cards and drivers).

The SCMP

With respect to the SCMP, the SQA team has to check whether the document provides procedures concerning:

o CI identification
o CI storage
o CI change control
o CI status indication

All documents must have a unique identifier and backups must be made at least once every three days.

The SQAP

With respect to the SQAP, the SQA team must check whether the SQAP contains:


o Project standards
o Review procedures
o Problem reporting procedures
o Responsibilities of the project members with respect to quality assurance

Tasks during SR phase

For the second phase of the project (SR), the SQA team must see to it that the following documents are properly reviewed internally before they are submitted for an external review.

The SRD

The SQA team must check whether the SRD:

o contains requirements on the software to be developed; these requirements must be based on the requirements stated in the URD;
o contains constraints on the software to be developed; these constraints must be based on the constraints stated in the URD;
o contains a priority list of the requirements;
o contains a traceability matrix.

The SPMP-SR

The SQA team must ensure that the SPMP is realistic by checking:

o the assumptions made during the planning;
o restrictions with respect to the planning (e.g. availability of members);
o external problems (e.g. external software/code).

The SCMP-SR

With respect to the SCMP, the SQA team must check whether the SCMP contains:

o the additional baselines.

The SQAP-SR

With respect to the SQAP, the SQA team must check whether the SQAP contains:

o the tasks of the SQA team during the SR phase.


Tasks during AD phase

For the third phase of the project (AD), the SQA team must see to it that the following documents are properly reviewed internally before they are submitted for an external review.

The ADD

The SQA team must check whether the ADD:

o contains an architectural design of the software to be developed; this design must describe a logical model and the interfaces between the different classes;
o contains pre- and post-conditions of the methods in the logical model;
o contains a traceability matrix in which the design is checked against the software requirements in the SRD.

The SPMP-AD

The SQA team must ensure that the SPMP is realistic by checking:

o the assumptions made during the planning;
o restrictions with respect to the planning (e.g. availability of members);
o external problems.

The SCMP-AD

With respect to the SCMP, the SQA team must check whether the SCMP contains:

o the additional baselines.

The SQAP-AD

With respect to the SQAP, the SQA team must check whether the SQAP contains:

o the tasks of the SQA team during the AD phase.

Characteristics of a good QA (Quality Assurance) Engineer:

• Understanding of the business approach and goals of the organization
• Understanding of the entire software development process
• Strong desire for quality
• Establish and enforce SQA methodologies, processes and testing strategies
• Judgment skills to assess high-risk areas of the application


• Communication with analysis and development teams
• Report defects with full evidence
• Take preventive actions
• Take actions for continuous improvement
• Report to higher management
• Say no when quality is insufficient
• Work management
• Meet deadlines

Documentation:

Project documentation may include many kinds of documents (e.g., plans, task reports, development products, problem reports, phase summary reports). Project size, criticality (i.e., the severity of the consequence of failure of the system), and complexity are some features that may affect the amount of documentation a project should need. For example, the design documentation may consist of a single document describing both the system architecture and the detailed modules or it may consist of separate documents for the architecture and subsystems. The purpose of this section is not to specify how many documents should be required. Rather, this section identifies the information content needed for any project and the timeliness of requirements so that the information can be used by the vendor, the utility, and the NRC reviewers. Because the NRC reviewers cannot determine the characteristics of the software product without substantial technical specifications, project plans, and reports, NRC should specify the technical products of the vendor that the utility must provide NRC.

Review:

The reviewers will also need to evaluate the installation package, which consists of installation procedures, installation medium (e.g., magnetic tape), test case data used to verify installation, and expected output from the test cases. In some instances, the product may already be installed in the utility. NRC should request documentation on the results of installation and acceptance testing.


UNIT III QUALITY CONTROL AND RELIABILITY

Reliability and Quality Control:

Although the terms reliability and quality are often used interchangeably, there is a difference between these two disciplines. While reliability is concerned with the performance of a product over its entire lifetime, quality control is concerned with the performance of a product at one point in time, usually during the manufacturing process. As stated in the definition, reliability assures that components, equipment and systems function without failure for desired periods during their whole design life, from conception (birth) to junking (death). Quality control is a single, albeit vital, link in the total reliability process. Quality control assures conformance to specifications. This reduces manufacturing variance, which can degrade reliability. Quality control also checks that the incoming parts and components meet specifications, that products are inspected and tested correctly, and that the shipped products have a quality level equal to or greater than that specified. The specified quality level should be one that is acceptable to the users, the consumer and the public. No product can perform reliably without the inputs of quality control because quality parts and components are needed to go into the product so that its reliability is assured.

Quality Tools:

Cause Analysis Tools: tips and tools for the first step to improvement, identifying the cause of a problem or situation.

Evaluation and Decision-Making Tools: making informed decisions, choosing the best options with a simple, objective rating system, and determining the success of a project.

Process Analysis Tools: how to identify and eliminate unnecessary process steps to increase efficiency, reduce timelines and cut costs.

Seven Basic Quality Tools: these seven tools get to the heart of implementing quality principles.

Data Collection and Analysis Tools: how can you collect the data you need, and what should you do with them once they're collected?

Idea Creation Tools: ways to stimulate group creativity and organize the ideas that come from it.

Project Planning and Implementing Tools: how to track a project's status and look for improvement opportunities.

Seven New Management and Planning Tools: ways to promote innovation, communicate information and successfully plan major projects.

Ishikawa's seven basic tools for quality control are:

checklist (or check sheet), Pareto diagram, histogram, scatter diagram, run chart, control chart, and cause-and-effect diagram.

The following figure shows a simple representation of the tools.


Check sheet

A check sheet is a paper form with printed items to be checked. Its main purposes are to facilitate gathering data and to arrange data while collecting it so the data can be easily used later. Another type of check sheet is the check-up confirmation sheet. It is concerned mainly with the quality characteristics of a process or a product. To distinguish this confirmation check sheet from the ordinary data-gathering check sheet, we use the term checklist. In most software development environments, the data-gathering aspect is automated electronically and goes far beyond the data-gathering check sheet approach, which has been used in manufacturing production.

Pareto diagram

A Pareto diagram is a frequency chart of bars in descending order; the frequency bars are usually associated with types of problems. In software development, the X-axis for a Pareto diagram is usually the defect cause and the Y-axis the defect count. By arranging the causes based on defect frequency, a Pareto diagram can identify the few causes that account for the majority of defects. It indicates which problems should be solved first in eliminating defects and improving the operation. Pareto analysis is commonly referred to as the 80–20 principle (20% of the causes account for 80% of the defects), although the cause-defect relationship is not always in an 80–20 distribution.
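A minimal Pareto analysis can be done in a few lines. The Python sketch below sorts defect causes by frequency and prints the cumulative percentage each cause contributes, so the "vital few" causes stand out; the defect counts are invented for illustration.

# Illustrative Pareto analysis: order causes by defect count and show the
# cumulative share of the total that each cause contributes.

defects_by_cause = {
    "interface error": 45,
    "logic error": 30,
    "data handling": 15,
    "documentation": 6,
    "other": 4,
}

total = sum(defects_by_cause.values())
cumulative = 0
for cause, count in sorted(defects_by_cause.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:15s} {count:3d}   cumulative {100 * cumulative / total:5.1f}%")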

Histogram

The histogram is a graphic representation of frequency counts of a sample or a population. The X-axis lists the unit intervals of a parameter (e.g., severity level of software defects) ranked in ascending order from left to right, and the Y-axis contains the frequency counts. In a histogram, the frequency bars are shown by the order of the X variable, whereas in a Pareto diagram the frequency bars are shown by order of the frequency counts. The purpose of the histogram is to show the distribution characteristics of a parameter such as overall shape, central tendency, dispersion, and skewness. It enhances understanding of the parameter of interest.

Scatter diagram

A scatter diagram vividly portrays the relationship of two interval variables. In a cause-effect relationship, the X-axis is for the independent variable and the Y-axis for the dependent variable. Each point in a scatter diagram represents an observation of both the dependent and independent variables. Scatter diagrams aid data-based decision making (e.g., if action is planned on the X variable and some effect is expected on the Y variable).

Run chart

A run chart tracks the performance of the parameter of interest over time. The X-axis is time and the Y-axis is the value of the parameter. A run chart is best used for trend analysis, especially if historical data are available for comparisons with the current trend.


An example of a run chart in software is the weekly number of open problems in the backlog; it shows the development team's workload of software fixes.

Control chart

A control chart can be regarded as an advanced form of a run chart for situations where the process capability can be defined. It consists of a central line, a pair of control limits (and sometimes a pair of warning limits within the control limits), and values of the parameter of interest plotted on the chart, which represent the state of a process. The X-axis is real time. If all values of the parameter are within the control limits and show no particular tendency, the process is regarded as being in a controlled state. If they fall outside the control limits or indicate a trend, the process is considered out of control. Such cases call for causal analysis, and corrective actions are to be taken.
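As a rough illustration of how control limits might be derived, the Python sketch below computes 3-sigma limits from a baseline period of weekly defect arrivals and flags later observations that fall outside them. The data, and the simple individuals-chart style of limits, are assumptions made for the example rather than a prescription from these notes.

# Sketch: 3-sigma control limits computed from a baseline period, then
# applied to new weekly defect-arrival counts.

from statistics import mean, stdev

baseline = [12, 9, 11, 10, 14, 8, 13, 11, 10, 12]   # weeks believed to be in control
centre = mean(baseline)
sigma = stdev(baseline)
ucl = centre + 3 * sigma
lcl = max(0.0, centre - 3 * sigma)

new_weeks = [13, 10, 19, 9]                          # subsequent observations
for week, value in enumerate(new_weeks, start=len(baseline) + 1):
    status = "OUT OF CONTROL" if value > ucl or value < lcl else "in control"
    print(f"week {week:2d}: {value:3d}  {status}")
print(f"centre = {centre:.1f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")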

Cause-and-effect diagram

The cause-and-effect diagram, also known as the fishbone diagram, was first used to explain factors that affect the production of steel. It shows the relationship between a quality characteristic and factors that affect that characteristic. Its layout resembles a fishbone, with the quality characteristic of interest labeled at the fish head, and factors affecting the characteristic placed where the bones are located. While the scatter diagram describes a specific bipartite relationship in detail, the cause-and-effect diagram identifies all causal factors of a quality characteristic in one chart.

1. Cause-and-effect diagram (also called Ishikawa or fishbone chart): identifies many possible causes for an effect or problem and sorts ideas into useful categories.
2. Check sheet: a structured, prepared form for collecting and analyzing data; a generic tool that can be adapted for a wide variety of purposes.
3. Control charts: graphs used to study how a process changes over time.
4. Histogram: the most commonly used graph for showing frequency distributions, or how often each different value in a set of data occurs.
5. Pareto chart: shows on a bar graph which factors are more significant.
6. Scatter diagram: graphs pairs of numerical data, one variable on each axis, to look for a relationship.
7. Stratification: a technique that separates data gathered from a variety of sources so that patterns can be seen (some lists replace "stratification" with "flowchart" or "run chart").

CASE tools:

Computer-aided software engineering (CASE) is the use of software tools to assist in the development and maintenance of software. Tools used to assist in this way are known as CASE Tools.

Some typical CASE tools are:

* Code generation tools
* Data modeling tools
* UML
* Refactoring tools
* QVT or model transformation tools
* Configuration management tools, including revision control

Preventing, Discovering and Removing Defects

To reduce the number of defects delivered with a software project, an organization can engage in a variety of activities. While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects.

Defect Removal Efficiency

If an organization has no defect prevention methods in place then they are totally reliant on defect removal efficiency.

1. Requirements reviews: up to 15% removal of potential defects
2. Design reviews: up to 30% removal of potential defects
3. Code reviews: up to 20% removal of potential defects
4. Formal testing: up to 25% removal of potential defects

In other words, if your organization is great at defect removal, the maximum percentage of defects your organization can expect to remove is 90%. If a software project is 100 function points, the total number of maximum (or potential) defects could be 120. If you were perfect at defect removal, your project would still have up to 12 defects after all your defect discovery and removal efforts. The vast majority of organizations would receive a B (medium) or even a D (poor) at defect removal efficiency.

Activity                   Perfect  Medium  Poor
Requirements Reviews          15%      5%    0%
Design Reviews                30%     15%    0%
Code Reviews                  20%     10%    0%
Formal Testing                25%     15%   15%
Total Percentage Removed      90%     45%   15%

Defect Discovery and Removal

Total defects remaining by size in function points:

Function Points  Max Defects  Perfect   Medium     Poor
100                      120       12       66      102
200                      240       24      132      204
500                      600       60      330      510
1,000                  1,200      120      660    1,020
2,500                  3,000      300    1,650    2,550
5,000                  6,000      600    3,300    5,100
10,000                12,000    1,200    6,600   10,200
20,000                24,000    2,400   13,200   20,400

An organization with a project of 2,500 function points that was about medium at defect discovery and removal would have 1,650 defects remaining after all defect removal and discovery activities. The calculation is 2,500 x 1.2 = 3,000 potential defects. The organization would be able to remove about 45% of the defects, or 1,350 defects. The total potential defects (3,000) less the removed defects (1,350) equals the remaining 1,650 defects.

Defect Prevention

If an organization concentrates on defect prevention (instead of defect detection), then the number of defects inserted or created is much lower. The amount of time and effort required to discover and remove these defects is also much less.

1. Roles and responsibilities clearly defined: up to 15% reduction in the number of defects created
2. Formalized procedures: up to 25% reduction in the number of defects created
3. Repeatable processes: up to 35% reduction in the number of defects created
4. Controls and measures in place: up to 30% reduction in the number of defects created

Imagine an organization with items 1 and 2 in place. A project with 100 function points would have a potential of 120 defects, but since preventative measures are in place, the number of potential defects is reduced by 48 (40% = 25% + 15%). That makes the potential number of defects 72, compared to 120 with no preventative efforts. Assuming that the organization was medium at defect discovery and removal, it could remove 45% of the remaining defects, leaving about 40 when the project rolls to production.

Function Points  Max Defects  After Prevention  Remaining (Medium Removal)
100                      120                72                          40
200                      240               144                          79
500                      600               360                         198
1,000                  1,200               720                         396
2,500                  3,000             1,800                         990
5,000                  6,000             3,600                       1,980
10,000                12,000             7,200                       3,960
20,000                24,000            14,400                       7,920

The above table represents the number of defects remaining for an organization that has items 1 and 2 above in place and is medium at discovery and removal.

The problem for estimating defects is multidimensional. First the total number of defects must be estimated. Second the impact of defect prevention needs to be understood and the estimated number of defects adjusted. Third an assessment needs to be done to understand how many defects can be discovered and removed by an organization.
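The three steps just described can be sketched directly in code. The Python fragment below uses the 1.2 defects-per-function-point figure and the prevention/removal percentages from the tables above; treat it as an illustration of the arithmetic rather than a calibrated estimating model.

# Sketch of the three-step defect estimate: size -> potential defects,
# reduced by prevention, then by discovery/removal efficiency.

def remaining_defects(function_points, prevention_pct, removal_pct, defects_per_fp=1.2):
    potential = function_points * defects_per_fp          # step 1: total potential defects
    after_prevention = potential * (1 - prevention_pct)   # step 2: effect of prevention
    return after_prevention * (1 - removal_pct)           # step 3: discovery and removal

# Medium removal (45%), no prevention: the 2,500 function point example above.
print(remaining_defects(2500, 0.00, 0.45))          # 1650.0

# Items 1 and 2 in place (40% prevention) plus medium removal (45%).
print(round(remaining_defects(100, 0.40, 0.45)))    # 40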

Clearly, the fewer defects an organization must discover and remove, the better. This is accomplished by better processes, a more stable organization, and repeatable processes. The focus of software organizations needs to be on defect prevention instead of defect detection.


Software Reliability Models:

A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability. Over 200 models have been developed since the early 1970s, but how to quantify software reliability still remains largely unsolved. Interested readers may refer to [RAC96], [Lyu95]. With as many models as there are, and many more emerging, none of them can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process. Therefore, there is no single model that can be used in all situations. No model is complete or even representative. One model may work well for a certain set of software, but may be completely off track for other kinds of problems.

Most software models contain the following parts: assumptions, factors, and a mathematical function that relates the reliability with the factors. The mathematical function is usually higher order exponential or logarithmic.

Software modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling [RAC96]. Both kinds of modeling techniques are based on observing and accumulating failure data and analyzing it with statistical inference.

Rayleigh Model

The Rayleigh model has been found to be most suitable for predicting the reliability of a software product. It predicts the expected value of defect density at different stages of the life cycle of the project, once parameters such as the total number of defects (or total cumulative defect rate) and the time at which the curve peaks are decided.

The shape of the curve indicates the pattern of the defect removal rate in the life cycle of the project. The area bounded by the x-axis and the curve is a measure of the total defects likely to be unearthed from the software being developed.
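One common way to write the Rayleigh defect-density curve is f(t) = K * (t / tm^2) * exp(-t^2 / (2 * tm^2)), where K is the total expected number of defects and tm is the time at which the curve peaks. The Python sketch below evaluates this form; the parameter values are assumptions chosen for illustration, not the figures behind the project plot that follows.

# Sketch of a Rayleigh defect-density curve with assumed parameters.
import math

def rayleigh_defect_density(t, total_defects, peak_time):
    # f(t) = K * (t / tm^2) * exp(-t^2 / (2 * tm^2)); the area under the curve is K.
    return total_defects * (t / peak_time ** 2) * math.exp(-t ** 2 / (2 * peak_time ** 2))

K, tm = 120.0, 2.0   # assumed: 120 total defects, curve peaking at life cycle "time" 2.0
for t in [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]:
    print(f"t = {t:3.1f}   expected defect density = {rayleigh_defect_density(t, K, tm):6.2f}")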

Below is the plot of the Rayleigh Curve plotted for one of our projects:


Fig 5: Rayleigh plot (Y-axis: defects/KLOC; X-axis: life cycle stages HLD, LLD, Coding, UT, IT)

The red line indicates the actual defect density observed as against the predicted values (brown smooth curve) obtained through a theoretical model. Observed defect density closely matches with the defect density predicted by the model.

The curve indicates the defect density at the time of system testing as 21 defects.

Reliability growth models for quality assessment:

Software reliability assessment is very important in developing a quality software product efficiently. This paper discusses the quantitative measurement and assessment of software reliability. The techniques are based on the software reliability growth models (SRGM's) developed in Japan, which are characterized by nonhomogeneous Poisson processes. By making the assumptions on which they are based more realistic, the models discussed here were designed to describe a software error-detection process or a software failure-occurrence process during the testing phase of software development. A summary of existing SRGM's is given, and the maximum-likelihood estimations based on the SRGM's are discussed for software reliability data analysis and software reliability assessment. Through the use of a software reliability assessment tool that incorporates several leading SRGM's, examples of software reliability assessment are given for some sets of observed test data taken from actual software projects.
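As a concrete example of a reliability growth model of the NHPP type mentioned above, the sketch below evaluates the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)), where a is the expected total number of failures and b the detection rate. In practice a and b are fitted to observed failure data (for example by maximum likelihood); the values here are simply assumed for illustration.

# Sketch of the Goel-Okumoto NHPP reliability growth model with assumed,
# rather than fitted, parameters.
import math

def goel_okumoto_mean(t, a, b):
    # Expected cumulative number of failures observed by test time t.
    return a * (1 - math.exp(-b * t))

a, b = 150.0, 0.05   # assumed: 150 expected total failures, detection rate 0.05 per week
for week in [1, 5, 10, 20, 40]:
    found = goel_okumoto_mean(week, a, b)
    print(f"week {week:2d}: expected found {found:6.1f}, expected remaining {a - found:6.1f}")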


Unit IV

Quality Management System

Topics covered

1. Concept of quality

2. QMS

3. Elements of QMS

4. Rayleigh Model

5. Reliability Growth Models

6. Complexity Metrics and Models

7. Customer Satisfaction Analysis

Concept of quality - historical background

The concept of quality as we think of it now first emerged out of the Industrial Revolution. Previously goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking the product to meet 'quality criteria'. Mass production brought huge teams of people together to work on specific stages of production, where one person would not necessarily complete a product from start to finish. In the late 1800s pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor established quality departments to oversee the quality of production and rectifying of errors, and Ford emphasized standardization of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the quality department and was implemented by inspection of product output to 'catch' defects.

Application of statistical control came later as a result of World War production methods. Quality management systems are the outgrowth of work done by W. Edwards Deming, a statistician, after whom the Deming Prize for quality is named.

Quality, as a profession and the managerial process associated with the quality function, was introduced during the second half of the 20th century, and has evolved since then. Over this period, few other disciplines have seen as many changes as the quality profession.

The quality profession grew from simple control, to engineering, to systems engineering. Quality control activities were predominant in the 1940s, 1950s, and 1960s. The 1970s were an era of quality engineering and the 1990s saw quality systems as an emerging field. Like medicine, accounting, and engineering, quality has achieved status as a recognized profession.

Quality Management System (QMS)

A Quality Management System (QMS) can be defined as a set of policies, processes and procedures required for planning and execution (production/development/service) in the core business area of an organization. A QMS integrates the various internal processes within the organization and intends to provide a process approach for project execution. A QMS enables the organization to identify, measure, control and improve the various core business processes that will ultimately lead to improved business performance.


ELEMENTS OF QUALITY MANAGEMENT SYSTEMS

The standards of ISO 9000 detail 20 requirements for an organization's quality management system in the following areas:

Management Responsibility

Quality System

Order Entry

Design Control

Document and Data Control

Purchasing

Control of Customer Supplied Products

Product Identification and Traceability

Process Control

Inspection and Testing

Control of Inspection, Measuring, and Test Equipment

Inspection and Test Status

Control of Nonconforming Products

Corrective and Preventive Action

Handling, Storage, Packaging, and Delivery

Control of Quality Records

Internal Quality Audits

Training

Servicing

Statistical Techniques

The Rayleigh Model Framework

Perhaps the most important principle in software engineering is "do it right the first time." This principle speaks to the importance of managing quality throughout the development process. The interpretation of the principle, in the context of software quality management, is threefold:

The best scenario is to prevent errors from being injected into The development process.

When errors are introduced, improve The front end of The development process to remove as many of them as early as possible. Specifically, in The context of The waterfall development process, rigorous design reviews and code inspections are needed. In The Cleanroom methodology, function verification by The team is used.

If The project is beyond The design and code phases, unit tests and any additional tests by The developers serve as gatekeepers for defects to escape The front-end process before The code is integrated into The configuration management system (The system library). In other words, The phase of unit test or pre-integration test (The development phase prior to system integration) is The last chance to do it right The "first time."

The Rayleigh model is a good overall model for quality management. It articulates the points on defect prevention and early defect removal related to the preceding items. Based on the model, if the error injection rate is reduced, the entire area under the Rayleigh curve becomes smaller, leading to a smaller projected field defect rate. Also, more defect removal at the front end of the development process will lead to a lower defect rate at later testing phases and during maintenance. Both scenarios aim to lower the defects in the latter testing phases, which in turn leads to fewer defects in the field. The relationship between formal machine-testing defects and field defects, as described by the model, is congruent with the famous counterintuitive principle in software testing by Myers (1979), which basically states that the more defects found during formal testing, the more that remain to be found later. The reason is that, by the late stage of formal testing, the error injection of the development process (mainly during design and code implementation) is essentially determined (except for bad fixes during testing). High testing defect rates indicate that the error injection is high; if no extra effort is exerted, more defects will escape to the field.
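The following is a minimal Python sketch of the Rayleigh curve described above: the defect-arrival rate rises, peaks, and tails off, and the area under the curve beyond the end of testing is the projected latent (field) defect content. The parameter values (K, t_m, end_of_test) are purely hypothetical and are not taken from the text.

import math

def rayleigh_rate(t, K, t_m):
    # Defect-arrival (discovery) rate at time t for a Rayleigh curve
    # with total defect content K and peak at time t_m.
    return K * (t / t_m**2) * math.exp(-t**2 / (2 * t_m**2))

def rayleigh_cumulative(t, K, t_m):
    # Cumulative defects discovered by time t (area under the curve up to t).
    return K * (1 - math.exp(-t**2 / (2 * t_m**2)))

# Hypothetical illustration: 500 total injected defects, discovery peaking
# in month 4, with development and testing ending at month 10.
K, t_m, end_of_test = 500.0, 4.0, 10.0

removed_before_ship = rayleigh_cumulative(end_of_test, K, t_m)
projected_field_defects = K - removed_before_ship
print(f"Defects removed before ship:    {removed_before_ship:.1f}")
print(f"Projected latent field defects: {projected_field_defects:.1f}")

# Reducing the error injection rate shrinks K, so the whole curve (and the
# latent tail beyond end_of_test) shrinks proportionally, as argued above.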

If we use the iceberg analogy to describe the relationship between testing and field defect rates, the tip of the iceberg is the testing defect rate and the submerged part is the field defect rate. The size of the iceberg is equivalent to the amount of error injection. By the time formal testing starts, the iceberg is already formed and its size determined. The larger its tip, the larger the entire iceberg. To reduce the submerged part, extra effort must be applied to expose more of the iceberg above the water. Figure 9.1 shows a schematic representation of the iceberg analogy.

Figure 9.1. Iceberg Analogy—Error Injection, Testing Defects, and Latent Defects

Reliability Growth Models

Although reliability growth models are meant for reliability assessment, they are also useful for quality management at the back end of the development process. Models developed from a previous product or a previous release of the same product can be used to track the testing defects of the current product. To have significant improvement, the defect arrival rate (or failure density) of the current project must fall below the model curve. Figure 9.15 shows an example from a systems software product developed at IBM Rochester. Each data point represents a weekly defect arrival rate during the system test phase. The defect arrival patterns represented by the triangles and circles indicate two later releases of the same product. Compared to the baseline model curve, both new releases witnessed a significant reduction in defect rate during the system test phase.

Figure 9.15. Reliability Growth Model for Quality Management
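Below is a small, hedged Python sketch of this tracking idea: a simple exponential (Goel-Okumoto) reliability growth curve is fitted to the cumulative weekly defect arrivals of a baseline release, and the current release's arrivals are compared against that curve week by week. The weekly counts and the choice of the Goel-Okumoto form are illustrative assumptions, not data or models from the text; scipy is assumed to be available.

import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # Expected cumulative defects by week t: a = total defects, b = detection rate.
    return a * (1.0 - np.exp(-b * t))

# Hypothetical weekly defect arrivals from the previous release (the baseline).
baseline_weekly = np.array([30, 27, 24, 20, 17, 14, 11, 9, 7, 5, 4, 3], dtype=float)
weeks = np.arange(1, len(baseline_weekly) + 1)
(a, b), _ = curve_fit(goel_okumoto, weeks, np.cumsum(baseline_weekly), p0=[200.0, 0.1])

# Hypothetical weekly arrivals of the current release, tracked against the baseline.
current_weekly = np.array([26, 22, 18, 15, 12, 9], dtype=float)
current_cum = np.cumsum(current_weekly)
expected_cum = goel_okumoto(np.arange(1, len(current_weekly) + 1), a, b)

for wk, (observed, expected) in enumerate(zip(current_cum, expected_cum), start=1):
    status = "below baseline" if observed < expected else "ABOVE baseline"
    print(f"week {wk}: observed {observed:.0f}, baseline model {expected:.0f} ({status})")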

As a second example, when another product was just about at the start of system testing, the PTR arrival rates were unusually high compared to the model. It was clear that proceeding in a business-as-usual manner would not result in meeting the product's quality goal. A special quality improvement program (QIP) was then proposed, evaluated, approved, and swiftly implemented. The QIP involved five extra activities:

1. Blitz testing— "artistic" testing in stressful environments

2. Customer evaluation— customers conducting testing in the development laboratory

3. Code inspections— additional inspections of error-prone modules, especially routines that are difficult to test such as the error recovery/exception handling routines

4. Design reviews— rereview of designs of suspect components and modules

5. Extension of system test— improvement of test suites and extension of testing schedules to allow thorough final test execution

Because of the special QIP activities, the product ship date was delayed one month. As a result, more than 250 would-be field defects were found and removed. The field quality of the product, evidenced by field defect arrivals reported in later years, improved


Complexity Metrics and Models

Thus far the reliability and quality management models we have discussed are either at the project or the product level. Both types of model tend to treat the software more or less as a black box. In other words, they are based on either the external behavior (e.g., failure data) of the product or the intermediate process data (e.g., type and magnitude of inspection defects), without looking into the internal dynamics of design and code of the software. In this chapter we describe the relationships between metrics about design and code implementation and software quality. The unit of analysis is more granular, usually at the program-module level. Such metrics and models tend to take an internal view and can provide clues for software engineers to improve the quality of their work.

Reliability models are developed and studied by researchers and software reliability practitioners with sophisticated skills in mathematics and statistics; quality management models are developed by software quality professionals and product managers for practical project and quality management. Software complexity research, on the other hand, is usually conducted by computer scientists or experienced software engineers. Like the reliability models, many complexity metrics and models have emerged in the recent past. In this chapter we discuss several key metrics and models, and describe a real-life example of metric analysis and quality improvement.

Lines of Code

The lines of code (LOC) count is usually for executable statements. It is actually a count of instruction statements. The interchangeable use of the two terms apparently originated from Assembler program in which a line of code and an instruction statement are the same thing. Because the LOC count represents the program size and complexity, it is not a surprise that the more lines of code there are in a program, the more defects are expected. More intriguingly, researchers found that defect density (defects per KLOC) is also significantly related to LOC count. Early studies pointed to a negative relationship: the larger the module size, the smaller the defect rate. For instance, Basili and Perricone (1984) examined FORTRAN modules with fewer than 200 lines of code for the most part and found higher defect density in the smaller modules. Shen and colleagues (1985) studied software written in Pascal, PL/S, and Assembly language and found an inverse relationship existed up to about 500 lines. Since larger modules are generally more complex, a lower defect rate is somewhat counterintuitive. Interpretation of this finding rests on the explanation of interface errors: Interface errors are more or less constant regardless of module size, and smaller modules are subject to higher error density because of smaller denominators.

More recent studies point to a curvilinear relationship between lines of code and defect rate: Defect density decreases with size and then curves up again at the tail when the modules become very large. For instance, Withrow (1990) studied modules written in Ada for a large project at Unisys and confirmed the concave relationship between defect density (during formal test and integration phases) and module size (Table 11.1). Specifically, of 362 modules with a wide range in size (from fewer than 63 lines to more than 1,000), Withrow found the lowest defect density in the category of about 250 lines. Explanation of the rising tail is readily available. When module size becomes very large, the complexity increases to a level beyond a programmer's immediate span of control and total comprehension. This new finding is also consistent with previous studies that did not address the defect density of very large modules.

Experience from the AS/400 development also lends support to the curvilinear model. In the example in Figure 11.1, although the concave pattern is not as significant as that in Withrow's study, the rising tail is still evident.

Figure 11.1. Curvilinear Relationship Between Defect Rate and Module Size—AS/400 data

The curvilinear model between size and defect density sheds new light on software quality engineering. It implies that there may be an optimal program size that can lead to the lowest defect rate. Such an optimum may depend on language, project, product, and environment; apparently many more empirical investigations are needed. Nonetheless, when an empirical optimum is derived by reasonable methods (e.g., based on the previous release of the same product, or based on a similar product by the same development group), it can be used as a guideline for new module development.
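A small Python sketch of how such an empirical optimum might be estimated follows. It computes defect density per module and fits a quadratic in ln(LOC), whose vertex approximates the module size with the lowest predicted density. The module data and the choice of a quadratic-in-log fit are illustrative assumptions only; any real estimate would come from a project's own defect data.

import numpy as np

# Hypothetical module data: (lines of code, defects found during test).
modules = [(60, 3), (120, 4), (200, 5), (320, 6), (480, 8),
           (700, 14), (1200, 30), (2000, 60)]

loc = np.array([m[0] for m in modules], dtype=float)
defects = np.array([m[1] for m in modules], dtype=float)
density = defects / (loc / 1000.0)          # defects per KLOC

# Fit defect density as a quadratic in ln(LOC), one simple way to capture
# the concave ("curvilinear") pattern described above.
x = np.log(loc)
c2, c1, c0 = np.polyfit(x, density, 2)

# If the parabola opens upward, its vertex gives the size with the lowest
# predicted defect density.
if c2 > 0:
    optimal_loc = np.exp(-c1 / (2.0 * c2))
    print(f"Estimated lowest-density module size: about {optimal_loc:.0f} LOC")
else:
    print("No interior minimum found for this data set")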

Customer Satisfaction Analysis


What clients really want is the interpretation and analysis of the data to provide actionable information: What can we learn from it? What actions does it suggest we take to improve customer satisfaction with what we offer?

The analysis process starts by performing statistical tests to reveal relationships or differences in customer ratings of performance on different product and service attributes, and how they affect overall satisfaction. We compare performance to that of peers, using a benchmark database to describe how performance rates on a relative basis. We identify the customers' product and service priorities and compare these to their perceptions of performance through quadrant analysis. We look for gaps between performance and expectations in the search for major opportunities for improvement.

We augment the analysis of the quantitative survey data with careful study of the qualitative information – the comments and observations made by customers. These are an invaluable source of insight into the reasons behind their ratings. In most cases, the analysis is aimed at identifying the key drivers of satisfaction – those product or service elements that are most closely related to customer satisfaction.

Clients benefit from the perspective of years of experience with a wide variety of organizations. Applying that knowledge, the goal is to deliver actionable results – information that can be used to create change that will improve competitive position and the bottom line.
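As a rough illustration of key-driver and quadrant analysis, the Python sketch below derives each attribute's importance from its correlation with overall satisfaction and combines it with mean performance to place the attribute in a quadrant. The survey ratings, attribute names, and quadrant thresholds are all hypothetical; they only demonstrate the mechanics, not a method prescribed in the text.

import numpy as np

# Hypothetical survey: rows are respondents, columns are attribute ratings (1-10).
attributes = ["reliability", "usability", "support", "price"]
ratings = np.array([
    [8, 6, 7, 5],
    [9, 7, 8, 4],
    [6, 5, 9, 6],
    [7, 8, 6, 5],
    [9, 6, 8, 7],
], dtype=float)
overall = np.array([8, 9, 7, 7, 9], dtype=float)   # overall satisfaction per respondent

for i, name in enumerate(attributes):
    performance = ratings[:, i].mean()
    # Derived importance: correlation of the attribute rating with overall satisfaction.
    importance = np.corrcoef(ratings[:, i], overall)[0, 1]
    # Illustrative quadrant rule: high importance plus low performance = top priority.
    if importance > 0.5 and performance < 7.0:
        quadrant = "improve first"
    elif importance > 0.5:
        quadrant = "maintain strength"
    else:
        quadrant = "lower priority"
    print(f"{name:12s} performance={performance:.1f} importance={importance:+.2f} -> {quadrant}")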


UNIT V QUALITY STANDARD

Topics Covered: Need for standards ISO 9000 series ISO 9000-3 series CMM and CMMI Six Sigma Concepts

1.Need for Standards:

Standardization (or standardisation) is the process of agreeing on technical standards. A standard is a document that establishes uniform engineering or technical specifications, criteria, methods, processes, or practices. Some standards are mandatory while others are voluntary. Some standards are de facto, meaning informal practices followed out of convenience, while others are de jure, meaning formal requirements. Formal standards bodies such as the International Organization for Standardization (ISO) or the American National Standards Institute (ANSI) are independent of the manufacturers of the goods for which they publish standards.

The goals of standardization can be to help with independence of single suppliers (commodification), compatibility, interoperability, safety, repeatability, or quality.

In social sciences, including economics, the idea of standardization is close to the solution for a coordination problem: a situation in which all parties can realize mutual gains, but only by making mutually consistent decisions. Standardization is the process of selecting better choices and ratifying these consistent decisions as an agreed standard.


Types of standardization process:

Emergence in de facto use: tradition (old standards on countries) and/or domination (Microsoft ex.).

Fixed by a standard body: o in an impositive process: usually in mandatory norms (like dictatorial Laws). o in a consensus process: usually for voluntary standards

standardization is defined as: The development and implementation of concepts, doctrines, procedures and designs to achieve and maintain the required levels of compatibility, interchangeability or commonality in the operational, procedural, material, technical and administrative fields to attain interoperability

2.ISO 9000 Series:

ISO 9000 is a family of standards for quality management systems. ISO 9000 is maintained by ISO, the International Organization for Standardization and is administered by accreditation and certification bodies.

ISO-9001 covers the entire process from product design through after sales service.

ISO-9002 covers only the manufacture (or a specific service such as a QC/QA laboratory) of the product.

ISO-9003 covers the "final inspection" of the product only.

ISO-9004 is an "Internal Use" standard - it cannot be registered and is not subject to the third party audits.


ISO-9001:

Some of the requirements in ISO 9001 (which is one of the standards in the ISO 9000 family) include:

· a set of procedures that cover all key processes in the business;
· monitoring processes to ensure they are effective;
· keeping adequate records;
· checking output for defects, with appropriate corrective action where necessary;
· regularly reviewing individual processes and the quality system itself for effectiveness; and
· facilitating continual improvement.

A company or organization that has been independently audited and certified to be in conformance with ISO 9001 may publicly state that it is "ISO 9001 certified" or "ISO 9001 registered." Certification to an ISO 9000 standard does not guarantee the compliance (and therefore the quality) of end products and services; rather, it certifies that consistent business processes are being applied. Although the standards originated in manufacturing, they are now employed across a wide range of other types of organizations. A "product", in ISO vocabulary, can mean a physical object, or services, or software. In fact, according to ISO in 2004, "service sectors now account by far for the highest number of ISO 9001:2000 certificates - about 31% of the total"

ISO 9001:2000 specifies requirements for a quality management system where an organization

1. needs to demonstrate its ability to consistently provide product that meets customer and applicable regulatory requirements, and

2. aims to enhance customer satisfaction through the effective application of the system, including processes for continual improvement of the system and the assurance of conformity to customer and applicable regulatory requirements.

All requirements of this International Standard are generic and are intended to be applicable to all organizations, regardless of type, size and product provided.


Where any requirement(s) of this International Standard cannot be applied due to the nature of an organization and its product, this can be considered for exclusion.

Where exclusions are made, claims of conformity to this International Standard are not acceptable unless these exclusions are limited to requirements within clause 7, and such exclusions do not affect the organization's ability, or responsibility, to provide product that meets customer and applicable regulatory requirements.

ISO-9002:

Developing products to a standard of consistent high quality, Scorpion Research is a recognised leader in its field and proud to announce its achievement of the coveted ISO 9002 quality certification.

The ISO 9002 standard is an emerging global standard for product and process quality, adopted by 91 countries which comprise the International Organisation for Standardisation (ISO).

ISO 9002 is best known to European countries who use the certification as a means of identifying companies dedicated to providing a customer with the best product and service possible.

Scorpion Research has always emphasised its quality through a continuous improvement program complementing its other quality initiatives. Achieving ISO 9002 certification further demonstrates commitment to a consistent level of quality recognised under International standards.

According to ISO assessors, the certification process is rigorous and only a minority of the applicants who strive to become certified are successful on their first audit. Scorpion Research is amongst this minority of organisations that passed the required certification audit on the first assessment demonstrating its consistent emphasis towards a high quality standard.

Certification is an ongoing process and Scorpion Research will undergo regular internal and external audits by third party ISO assessors in order to retain its certification. The ISO certification complements the quality program by establishing an externally verified unbiased process of self-monitoring and problem-solving.

Scorpion Research joins firms such as British Telecom, Sony, IBM, Compaq, Digital Equipment, Xerox, Toshiba and Hewlett-Packard in ISO certification.

ISO-9003: This is the least complex and easiest to install of the three standards in the ISO 9000 series. This standard is for organizations that do not participate in design and development or purchasing, and do not have production controls. It is designed for organizations that only require final inspection and testing of their products and services to ensure that they have met the specified requirements. This system is generally only relevant to simple products and services. It is also an option for organizations that cannot justify the expense of one of the other systems but still desire a quality management system for their organization.

ISO-9004: Your quality system must balance two needs:

Your customers' need to have quality products at a reasonable price.

Your company's need to make quality products at a reasonable cost. 

3.ISO9000-3 for software development:

ISO 9000-3 4.4 Software development and design

4.4.1General 

Develop and document procedures to control the product design and development process. These procedures must ensure that all requirements are being met.

Software development 

Control your software development project and make sure that it is executed in a disciplined manner.

Use one or more life cycle models to help organize your software development project.

Develop and document your software development procedures. These procedures should ensure that:

Software products meet all requirements. Software development follows your quality plan and your development plan.

Software design

Control your software design process and make sure that it is performed in a systematic way.

Use a suitable software design method. Study previous software design projects to avoid repeating old mistakes. Design software that is easy to test, install, use, and maintain, and that is safe and reliable when failure could cause human injury, property damage, or environmental harm.

Develop and document rules to control coding activities, naming conventions, commentary practices, and programming languages. Apply configuration management techniques to document and control the use and review of all analysis tools, design techniques, compilers, and assemblers.

Train personnel in the use of such tools and techniques.

4.4.2Design and development planning 

Create design and development planning procedures. Your product planning procedures should ensure that:

Plans are prepared for each design activity or phase. Responsibility for implementing each plan, activity, or phase is properly defined. Qualified personnel are assigned to the product design and development process. Adequate resources are allocated to the product design and development process. Plans are updated, and circulated to the appropriate participants, as designs change.

Software design and development planning

Prepare a software development plan. Your plan should be documented and approved before it is implemented. Your plan should control:

Technical activities.  Requirements analyses. Design processes. Coding activities. Integration methods. Testing techniques. Installation work. Acceptance testing.

Management activities. Project supervision. Progress reviews. Reporting requirements.

Your software development plan should: Define your project. Identify related plans and projects. List your project objectives. Define project inputs and outputs.

Define inputs for each project activity. Define outputs for each project activity.

Explain how your project will be organized. Explain how your teams will be structured. Explain who will be responsible for what. Explain how subcontractors will be used. Explain how project participants will interact.


Explain how all resources will be managed. Discuss project risks and potential problems. Identify important project assumptions. Present your project schedule.

Define project phases and dependencies. Specify project time lines and milestones. Introduce your project budget. Describe the work that will be done.

Describe each task. Describe the inputs for each task. Describe the outputs for each task.

Identify all relevant control strategies. Identify all relevant standards and conventions. Identify all relevant rules and regulations. Identify all relevant practices and procedures.

Identify configuration management practices.

Identify backup and recovery procedures. Identify archiving procedures.

Identify all relevant methods and approaches. Identify methods used to control nonconforming products. Identify methods used to control software development software. Identify methods used to control virus protection activities. Identify all relevant tools and techniques.

Identify methods used to qualify all tools and techniques.

Identify methods used to control all tools and techniques.

4.4.3Organizational and technical interfaces

Identify the groups who should be routinely involved in the product design and development process, and ensure that their design input is properly documented, circulated, and reviewed.  Make sure that your software development plan or your subcontractors' plans:

Define how the responsibility for software development will be distributed between all participants.

Define how technical information will be shared and transmitted between all participants.

Explain how all project participants will provide input. Explain how subcontractors will provide input during design, installation, maintenance, and training. Explain how regulatory authorities will provide input during design, installation, maintenance, and training.

Explain how help desk staff will provide input during design, installation, maintenance, and training.

Explain how other related projects will provide input during design, installation, maintenance, and training.

Explain how end users will provide input during design, installation, maintenance, and training.

Make sure that your customer has accepted the responsibility to:

Cooperate and support your project. Provide the information you need when you need it. Resolve outstanding issues in a timely manner.

Make sure that your customer representative has been given the responsibility and authority to:

Clarify requirements and expectations. Answer questions and solve problems. Make and implement agreements. Approve plans and proposals. Establish acceptance criteria. Provide appropriate customer-supplied products. Define and distribute authorities and responsibilities.

Schedule joint progress reviews. You and your customer together should review:

Activities. Your developmental activities. Your customer's activities. Your users' activities.

Training activities. Conversion activities.

Results. Results of verification activities. Results of acceptance tests. Results of conformance evaluations.

4.4.4Design input 

Develop procedures to ensure that all design-input requirements are identified, documented, and reviewed; and that all design flaws, ambiguities, contradictions, and deficiencies are resolved. Design input requirements can be classified as follows:

Customer expectations. Contractual conditions. Statutory imperatives. Regulatory requirements. Environmental constraints. Safety considerations. Performance standards. Functional specifications. Descriptive prescriptions. Aesthetic preferences.


Software design input

Design input requirements should be specified by the customer. However, sometimes the customer will expect you to develop the design input specification. In this case, you should:

Prepare procedures that you can use to develop the design input specification. These procedures should be documented and should explain:

How interviews, surveys, studies, prototypes, and demonstrations will be used to develop your design-input specification. How you and your customer will formally agree to accept the official specification and to accept changes to the official specification. How changes to specifications will be controlled. How prototypes and product demonstrations will be evaluated. How system-oriented input requirements will be met through the use of hardware, software, and interface technologies. How reviews, evaluations, and other discussions between you and your customer will be recorded.

Work closely with your customer in order to avoid misunderstandings and to ensure that the specification meets the customers needs.

Express your specification using terms that will make it easy to validate during product acceptance.

Ask your customer to formally approve the resulting design input specification.

Your design input specification may address the following kinds of characteristics or requirements:

Functional requirements. Reliability requirements. Usability requirements. Efficiency requirements. Maintainability requirements. Portability requirements. Interface requirements.

Hardware interface requirements. Software interface requirements.

Your design input specification may also need to address the following kinds of requirements:

Operational requirements. Safety requirements. Security requirements.  Statutory requirements.

4.4.5 Design output

Develop procedures to control design outputs.

Design outputs are usually documents. They include drawings, parts lists, process specifications, servicing procedures, and storage instructions. These types of documents are used for purchasing, production, installation, inspection, testing, and servicing.

Design outputs must be expressed in terms that allow them to be compared with design input requirements.

Design output documents must identify those aspects of the product that are crucial to its safe and effective operation. These aspects can include operating, storage, handling, maintenance, and disposal requirements.

Design output documents must be reviewed and approved before they are distributed.

Design outputs must be accepted only if they meet official acceptance criteria.

Software design output 

Prepare design output documents using standardized methods and make sure that your documents are correct and complete.

Software design outputs can include design specifications, source code, user guides, etc.

4.4.6Design review

Develop procedures that specify how design reviews should be planned and performed. Design review procedures should:

Be formally documented. Ensure that reviews are recorded. Ensure that representatives from all relevant areas are involved in the review process.

Software design review

Plan and perform design reviews for software development projects. Your reviews should ensure that all:

Design activities and results are reviewed. Product nonconformities are identified and addressed. Process deficiencies are identified and addressed. Review conclusions and observations are recorded.

Software design review procedures

Develop and document design review procedures. Your procedures should make sure that you:

Clarify exactly what is being reviewed. Distinguish between different types of reviews. Organize and schedule design review meetings. Indicate when design reviews should be performed. Maintain a record of all design review meetings. Invite all appropriate groups to participate. Invite customers to participate (when required). Confirm that customers agree with your results.


Allow design activities to continue only if all deficiencies and nonconformities have been addressed, and all risks and consequences have been assessed.

Your design review procedures may also:

Define the methods that should be used to ensure that all rules and conventions are being followed. Define what needs to be done to prepare for a design review; this may include defining roles, listing objectives, preparing an agenda, and collecting documents. Define what should be done during the review; this may include defining the guidelines that should be followed and the techniques that should be used. Define the criteria that constitute a successful review. Define the follow-up methods that should be used to ensure that all outstanding issues will be addressed.

4.4.7Design verification

Develop procedures that specify how design outputs, at every stage of the product design and development process, should be verified. These procedures should:

Verify that outputs satisfy design-input requirements. Ensure that objective evidence is used to verify outputs. Ensure that all design verifications are recorded. Ensure that all design documents are verified.

These design verification procedures may also: Use alternative calculations to verify design outputs. Use tests and demonstrations to verify outputs. Compare design outputs with proven designs.

Software design verification

Verify design outputs by performing design reviews, performing demonstrations, evaluating prototypes, carrying out simulations, and performing tests.

Maintain a record of design verifications. Record verification results. Record remedial actions. Record action completions.

Accept design outputs for subsequent use only if they have been properly verified and only if all remedial actions have been taken.


4.4.8 Design validation

Develop procedures that validate the assumption that your newly designed products will meet customer needs. Develop design validation procedures that:

Confirm that your new product performs properly under all real-world operating conditions.

Confirm that your new product will meet every legitimate customer need and expectation.

Ensure that validations are carried out early in the design process whenever this will help guarantee that customer needs will be met.

Software design validation

Prove that your product is ready for its intended use before you ask your customer to accept it.

Accept validated products for subsequent use only if they have been properly verified and only if all remedial actions have been taken.

Maintain a record of design validations. Record validation results. Record remedial actions. Record action completions.

4.4.9Design changes

Develop procedures to ensure that all product design modifications are documented, reviewed, and formally authorized before the resulting documents are circulated and the changes are implemented.

Software design changes

Develop procedures to control design changes that may occur during the product life cycle. Your procedure should ensure that you:

Document the design change. Evaluate the design change. Justify the design change. Verify the design change. Approve the design change. Implement the design change. Monitor the design change.

Your configuration management process may be used to control design changes.

4.CMM and CMMI


The Capability Maturity Model (CMM) is also sometimes referred to as the Software CMM (SW-CMM). The CMM is a process capability model based on the software development processes and practices of an organisation.

Though the CMM was retired in 1997 and has not been updated since, having been superseded by CMMI (Capability Maturity Model Integration), it has been used as a generally applicable model to assist in understanding the process capability maturity of organisations in diverse areas, for example software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) and personnel management, against a scale of five maturity levels: Initial, Repeatable, Defined, Managed and Optimizing.

The Capability Maturity Model (CMM) is a process capability maturity model which aids in the definition and understanding of an organisation's processes. The CMM was originally used to enable the assessment of software development processes.

Capability Maturity Model
· The CMM strives to achieve predictability and consistency, as a precursor to continuous improvement, by following a set of processes in a well-defined framework.
· Level 1 is the initial level.
· Level 2 is the repeatable level, which helps in achieving repeatability of performance and quality should the organization undertake a similar project again.
· Level 3 is the defined level.
· Level 4 is the managed (quantitatively measured) level.
· Level 5 is the optimizing level, where people always work towards a target.

The Role of CMM

The role of CMM is increasing. This may be attributed to a number of factors:

The maturity of the model itself. The increasing general awareness of the need for externally recognized quality standards.

The adoption of the model by key software purchasers such as national departments of defense.

Advantages of CMM

It allows for improvement and evolution. It can also be used in conjunction with other quality standards. It highlights defects as they occur. The CMM prioritizes tasks for improvement. It provides a matrix of strengths and weaknesses.

Schematic view of CMM


The maturity model seeks to measure how well these processes are carried out. There are five stages and six measurement categories for subjectively rating an organization's quality operation.

The five stages are:
· Uncertainty, where management is confused and uncommitted regarding quality as a management tool;
· Awakening, where management is beginning to recognize that quality management can help;
· Enlightenment, where the decision is made to conduct a formal quality improvement programme;
· Wisdom, where the company has the chance to make changes permanent (things are basically quiet and people wonder why they used to have problems);
· Certainty, where quality management is considered an absolutely vital part of company management.

The six measurement categories are:
· management understanding and attitude, characterized as 'no comprehension of quality as a management tool' at uncertainty and 'an essential part of the company system' at certainty;
· quality organization status, characterized as hidden at uncertainty and a thought leader/main concern at certainty;
· problem handling, where problems are fought as they occur at uncertainty and prevented at certainty;
· cost of quality as a percentage of sales, characterized as 20% at uncertainty and 2.5% at certainty;
· quality improvement actions, characterized as no organized activities at uncertainty and a normal and continued activity at certainty;
· summation of company quality posture, summarized as 'we do not know why we have problems with quality' at uncertainty and 'we know why we do not have problems with quality' at certainty.

As a precursor to the maturity model itself, Radice et al. (1985), working with Humphrey, identified 12 process stages. The process stages were stages in the software lifecycle.


Evolution of the CMM

Year Version published

1987 Software process maturity framework

1987 Preliminary maturity questionnaire

1987 Characterizing the software process

1989 Managing the software process

1990 Draft version of CMM v0.2

1991 CMM v0.6 discussion

1991 CMM v1.0

1993 CMM v1.1

The CMMI is organized around a set of Process Areas (PAs). The PAs are divided into groups associated with what are called maturity levels. In the CMMI, most basic management practices are considered part of maturity level 2, while most software engineering practices are associated with level 3. Level 4 is about process and product quality management, while level 5 includes processes for process optimization and technology change management.


An organization that has practices that meet all the goals of maturity level 2 is characterized as a level 2 organization. An organization that cannot demonstrate that its practices meet all the applicable level 2 goals is considered level 1 by default. An organization that meets the goals of all the PAs at level 2 and level 3 is a level 3 maturity organization and so on.
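To make the staged-rating rule above concrete, here is a tiny Python sketch that derives an organization's maturity level from the set of process areas whose goals are satisfied. The process areas listed and their level assignments are a small illustrative subset, not the full CMMI model.

# A tiny sketch of the staged maturity-level rule described above.
# The process areas and their level assignments are an illustrative subset only.
PROCESS_AREAS = {
    "Requirements Management": 2,
    "Project Planning": 2,
    "Configuration Management": 2,
    "Requirements Development": 3,
    "Technical Solution": 3,
    "Organizational Process Definition": 3,
    "Quantitative Project Management": 4,
    "Causal Analysis and Resolution": 5,
}

def staged_maturity_level(satisfied_pas):
    # Highest level L such that every PA at levels 2..L has its goals satisfied.
    level = 1
    for target in (2, 3, 4, 5):
        required = {pa for pa, lvl in PROCESS_AREAS.items() if lvl == target}
        if not required.issubset(satisfied_pas):
            break
        level = target
    return level

satisfied = {"Requirements Management", "Project Planning",
             "Configuration Management", "Requirements Development"}
print(staged_maturity_level(satisfied))   # prints 2: all level-2 goals met, level 3 incomplete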

The CMMI also offers an alternate approach called the "continuous representation," where individual PAs are rated on a scale of 1 to 5. This gives the organization the flexibility to tailor its maturity goals to business needs for particular PAs.

As organizations move up the maturity ladder, they are more likely to produce higher quality products with more predictable costs and cycle times. The higher levels are more likely to correlate with higher productivity and shorter cycle times as well.  

The idea of model based software improvement is very simple. First pick the applicable Process Areas. Next perform an assessment of organizational practices relative to the model. The organization practices do not have to conform exactly to the representative practices included in the model. They just have to meet the stated goals of each PA. Based on this comparison, assign a maturity level to the organization and produce a list of strengths and weakness relative to the model.

The output of the assessment is used to prioritize areas for improvement. The idea is to improve deficient practices at the lower maturity levels first and systematically move up in maturity level over time using additional assessments to measure progress.

Focusing on the lowest-level practices puts a firm foundation in place before tackling the higher-level processes, and it gives the organization time to internalize the changes. This approach nicely avoids the twin problems of putting practices in place before the required supporting practices are available and of trying to do too much too soon.

So model-based improvement has a lot of very attractive features. One of the most attractive is the level goal itself. The numerical level gives organizations a metric that can be easily understood, can be used to measure progress, and can be used to benchmark against other organizations.

SEI has defined a formal appraisal methodology and provided a lead assessor training and certification program. This means that assessment results, particular those obtained using a third party assessor, will be reasonably consistent and can be used to benchmark against other organizations. In fact SEI maintains a publicly accessible database of assessment results giving the number of organizations at each level by industry and the average time for an organization to improve from one level to the next.

5. Six Sigma Concepts:

Six Sigma at many organizations simply means a measure of quality that strives for near perfection. Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects (driving towards six standard deviations between the mean and the nearest specification limit) in any process -- from manufacturing to transactional and from product to service.


The term Six Sigma represents a stringent level of quality. It is a specific defect rate: 3.4 defective parts per million (ppm). It was made known in the industry by Motorola, Inc.

Six Sigma has become an industry standard as an ultimate quality goal. Sigma (σ) is the Greek symbol for standard deviation. As the following figure (Fig. 1) indicates, the areas under the curve of the normal distribution defined by standard deviations are constants in terms of percentages, regardless of the distribution parameters.

The area under the curve as defined by plus and minus one standard deviation (sigma) from the mean is 68.26%.

The area defined by plus/minus two standard deviations is 95.44%, and so forth. The area defined by plus/minus six sigma is 99.9999998%. The area outside the six sigma area is thus 100% − 99.9999998% = 0.0000002%.

Fig. 1

Taking the area within the six sigma limits as the percentage of defect-free parts and the area outside the limits as the percentage of defective parts, we find that six sigma is equal to 2 defective parts per billion, or 0.002 defective parts per million.

The interpretation of defect rate as it relates to the normal distribution will be clearer if we include the specification.

Given the specification limits (which were derived from customers' requirements), our purpose is to produce parts or products within the limits. Parts or products outside the specification limits do not conform to requirements. If we can reduce the variations in the production process so that the six sigma (standard deviations) variation of the production process is within the specification limits, then we will have six sigma quality level.


Fig. 2 Specification Limits, Centered six sigma, and Shifted six sigma

The six sigma value of 0.002 ppm is from the statistical normal distribution. It assumes that each execution of the production process will produce the exact distribution of parts or products centered with regard to the specification limits. In reality, however, process shifts and drifts always result from variations in process execution.

The maximum process shift, as indicated by research, is 1.5 sigma. If we account for this 1.5-sigma shift in the production process, we will get the value of 3.4 ppm. Such shifting is illustrated in the two lower panels of Fig. 2.

Given fixed specification limits, the distribution of the production process may shift to the left or to the right. When the shift is 1.5 sigma, the area outside the specification limit on one end is 3.4 ppm, and on the other it is nearly zero.

The slight difference between the centered six sigma and the shifted six sigma may imply something significant. The former is practically equivalent to zero defects, which may invite the debate whether it is feasible to achieve such a goal.
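The 0.002 ppm and 3.4 ppm figures quoted above can be reproduced directly from the standard normal distribution. The short Python sketch below does so using the error function from the standard library; it is only a numerical check of the percentages in the text, not part of the original material.

from math import erf, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Centered process: fraction of output falling outside +/- 6 sigma.
outside_centered = 2.0 * (1.0 - norm_cdf(6.0))
print(f"Centered six sigma defect rate: {outside_centered * 1e6:.4f} ppm")   # about 0.002 ppm

# Process mean shifted by 1.5 sigma: the tails now sit at 4.5 and 7.5 sigma.
outside_shifted = (1.0 - norm_cdf(4.5)) + (1.0 - norm_cdf(7.5))
print(f"Shifted six sigma defect rate:  {outside_shifted * 1e6:.2f} ppm")    # about 3.4 ppm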

In order to reach six sigma, we have to improve the process. Specifically, we must reduce process variations so that the six sigma variation is still within the specification limits.

The concept and approach of six sigma has been expanded and applied to the improvement of management systems and total quality management.


The statistical representation of Six Sigma describes quantitatively how a process is performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per million opportunities. A Six Sigma defect is defined as anything outside of customer specifications. A Six Sigma opportunity is then the total quantity of chances for a defect. Process sigma can easily be calculated using a Six-Sigma calculator.
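As an illustration of what such a calculator does, the following Python sketch converts an observed defect count and opportunity count into DPMO and then into a "process sigma" level, applying the conventional 1.5-sigma shift. The defect and opportunity numbers are hypothetical.

from statistics import NormalDist

def process_sigma(defects, opportunities, shift=1.5):
    # Convert long-term DPMO into a short-term "process sigma" level
    # using the conventional 1.5-sigma shift.
    dpmo = 1_000_000.0 * defects / opportunities
    yield_fraction = 1.0 - dpmo / 1_000_000.0
    return NormalDist().inv_cdf(yield_fraction) + shift

# Hypothetical example: 350 defects observed in 150,000 opportunities.
print(f"{process_sigma(350, 150_000):.2f} sigma")   # roughly 4.3 sigma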

The fundamental objective of the Six Sigma methodology is the implementation of a measurement-based strategy that focuses on process improvement and variation reduction through the application of Six Sigma improvement projects. This is accomplished through the use of two Six Sigma sub-methodologies: DMAIC and DMADV. The Six Sigma DMAIC process (define, measure, analyze, improve, control) is an improvement system for existing processes falling below specification and looking for incremental improvement. The Six Sigma DMADV process (define, measure, analyze, design, verify) is an improvement system used to develop new processes or products at Six Sigma quality levels. It can also be employed if a current process requires more than just incremental improvement. Both Six Sigma processes are executed by Six Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master Black Belts.

Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects. A defect is defined as nonconformity of a product or service to its specifications.

While the particulars of the methodology were originally formulated by Bill Smith at Motorola in 1986, Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects. Like its predecessors, Six Sigma asserts the following:

· Continuous efforts to reduce variation in process outputs are key to business success.
· Manufacturing and business processes can be measured, analyzed, improved and controlled.
· Succeeding at achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.

Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population (the lower-case Latin letter 's' denotes an estimate based on a sample). The term "six sigma process" comes from the notion that if one has six standard deviations between the mean of a process and the nearest specification limit, there will be practically no items that fail to meet the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool, rather than in simple process standard deviation, which is also measured in sigmas. Criticism of the tool itself, and of the way the term was derived from it, often sparks criticism of Six Sigma.
