


UNIT 6 ON-LINE AND OFF-LINE QUALITY CONTROL

Structure

6.1 Introduction

Objectives

6.2 Statistical Process Control

6.3 Assignable and Random Causes

6.4 Normal Distribution and Central Limit Theorem

6.5 Control Charts for Variables

6.5.1 The X-bar and R-charts

6.5.2 The X-bar and s-charts

6.5.3 Moving Mean/Range Charts

6.6 Control Charts for Attributes

6.6.1 The p-chart

6.6.2 The np-chart

6.6.3 The u-chart

6.6.4 The c-chart

6.7 CUSUM Chart

6.8 Pre-control for Quality Control

6.9 Off-line Quality Control

6.10 Taguchi Method

6.10.1 Loss Function

6.10.2 Parameter Design

6.10.3 Performance Measure

6.10.4 Taguchi's Tolerance Design

6.11 Comparison of Taguchi and Deming Approach

6.12 Summary

6.13 Key Words

6.14 Answers to SAQs

6.1 INTRODUCTION

Quality control is as old as industry itself. From the time man began to manufacture, there has been interest in the quality of output. The main objective in any production process is to control the quality of the finished product so that it conforms to specifications and a large number of defective items is avoided. Statistical Process Control (SPC) applies statistical concepts to these problems of quality control. Much of the credit for applying statistics to quality control goes to Walter Shewhart, who invented the control chart to reduce variability in the performance of telephones at Bell Laboratories. W. Edwards Deming then recognized the usefulness of these concepts in the manufacturing and non-manufacturing sectors. The Japanese industries, stimulated by Deming's teachings on quality, were the first to incorporate SPC in practice, using it as a technical tool for quality management. American industries soon adopted statistical methods for quality improvement.

SPC can be divided into two major categories: on-line quality control and off-line quality control. If quality improvement is attained during the process of production, the technique is called on-line quality control. Off-line quality control refers to the techniques used before the actual process of production starts. Quality is thus built into the product and the process as early as the design stage. This approach eliminates the need for mass inspection, which is a wasteful activity. In the manufacturing industry, off-line quality control is incorporated through experimental design methods. Professor Genichi Taguchi of Japan pioneered this method. Also called the Taguchi method, it ensures good performance of products or processes by identifying and controlling the factors that contribute to variation. In this unit, the techniques of SPC shall be discussed along with the principles of on-line as well as off-line quality control tools.

Objectives

After studying this unit, you should be able to

understand assignable and random causes,

explain the various types of control charts for variables and attributes,

describe the pre-control for quality control, and

know Taguchi's loss function and signal-to-noise ratio.

6.2 STATISTICAL PROCESS CONTROL

Statistical Process Control (SPC) is concerned with establishing standards, monitoring standards, making measurements and taking corrective action as a product is being produced. All processes are subject to variation, and the key to achieving quality is keeping variation under control. SPC involves using statistical techniques to measure and analyze the variation in processes. Most often used for manufacturing processes, SPC monitors product quality and shows progress in relation to aims and targets. SPC can ensure that the product is being manufactured as designed and intended. Thus, SPC will not improve a poorly designed product, but it can be used to maintain the conformance of the product to its specifications.

Often the term "statistical quality control" is used interchangeably with "statistical process control". However, statistical quality control includes acceptance sampling as well as statistical process control. Both SPC and sampling plans utilize the connection between the properties of a sample and those of the parent population. Acceptance sampling relies on accepting or rejecting a batch based on the number of defectives found in a sample, and is thus chance-based. SPC, on the other hand, provides the opportunity of correcting or tuning the production process in time to avoid batches being rejected later.

The basis of SPC is the measurement of performance at fixed time intervals. Samples of process outputs are examined at fixed intervals, and real change in a process can be identified. If the samples are within acceptable limits, the process is permitted to continue; in such a situation, the process is said to be in statistical control. If the samples fall outside the limits, the process is stopped and the cause is located and removed.

The simplest way of describing the performance of a process is by means of control charts. The patterns of data in a control chart enable one to distinguish between common and special causes. In the subsequent sections, you shall learn how to construct control charts for various kinds of data, and the rules for recognizing the existence of special causes of variation. Before that, it is essential to know about assignable and random causes.

6.3 ASSIGNABLE AND RANDOM CAUSES

Walter Shewhart recognized in the 1920s that all processes are subject to a certain degree of variability. Some processes display controlled variation while others display uncontrolled variation. A controlled variation is one which displays a consistent pattern of variation. The causes of this variation are inherent in the process and occur by chance; they are called random or common causes. The variations due to such causes are natural and cannot be eliminated from the process. On the other hand, an uncontrolled variation displays patterns that change over time. The cause of such varying patterns can be assigned to special causes; hence, such causes are called assignable causes. A variation caused by an assignable cause can be traced to a definite source. Both assignable and random causes affect the process quality, which in turn affects the product or service. The concepts of assignable and random causes are illustrated through the example explained below.

Figure 6.1 shows the delivery time (in minutes) taken by a courier service agency operating in a city. The time between two consecutive trips varies throughout. However, certain trips require unusually little time (delivery by helicopter) or unusually much time (tyre puncture). Thus, events like an accident on the motorway, being stopped for speeding, express delivery, etc. are special or assignable causes. Otherwise, there is very little difference between the times of two trips. These smaller variations can be traced to random causes such as waiting in traffic or a delay in the starting of the delivery van. There are countless minor factors that cause random variation.

Figure 6.1 : Assignable and Random Causes (delivery time plotted against consecutive trips)

The purpose of SPC is to bring the process into a state of statistical control and to improve its performance by assessing the consequences of variation caused by assignable or random causes. There are two ways to improve a process:

(i) If a process displays controlled variation, the variation due to random causes is inherent, and unless the process itself is changed it will continue to operate this way in the future. Hence, to improve such a process, the process must be changed.

(ii) If a process is affected by factors outside the process, the variations introduce instability into the process and it becomes uncontrollable, i.e. the performance of the product or service will fluctuate and become unpredictable. Since the causes of these variations are assignable, these causes, once identified, should be eliminated if bad and incorporated into the process if good.

Thus, it is essential to monitor the variation of a process throughout. The control charts are the most widely used tools to monitor a process. The control charts are based on the principles of normal distribution. As such, it is important for every process engineer to have a firm grasp of what a normal distribution is. In the next section, you shall come to know of normal distribution in the context of SPC.

6.4 NORMAL DISTRIBUTION AND CENTRAL LIMIT THEOREM

The normal distribution is probably the most recognized and most widely used statistical distribution. The reason for this is that many physical, biological, and social parameters obey the normal distribution. Such parameters are then said to behave 'normally' or, more simply, are said to be 'normal'. Only two parameters are needed to describe a normal distribution, namely, the mean, or its center, and the standard deviation (also known as sigma), or its variability. The normal distribution is bell-shaped, i.e., it peaks at the center and tapers off outwardly while remaining symmetrical with respect to the center, as shown in Figure 6.2.

The shape of the normal curve depends on the value of the standard deviation (σ). The notable feature of normal distributions is that, regardless of the standard deviation value, the percentage of data falling within a given number of standard deviations is constant. For example, if the standard deviation of process 1 is 10 and the standard deviation of process 2 is 20, processes 1 and 2 will have differently shaped data distributions (process 1 being more stable because it has the smaller standard deviation). But for both processes, 68% of the data under the normal curve will fall within ±1σ of the mean of the distribution and 32% of the data will be outside it.

Figure 6.2 : The Normal Distribution Curve

The Central Limit Theorem

The Central Limit Theorem states that, irrespective of the distribution of the individual readings, the sample means are normally distributed with the same mean as, but a narrower spread than, the individual readings. To illustrate this concept, consider selecting a sample of n items at random from a population, and let the mean of this sample be x̄₁. Now, if we take a second random sample and find its mean (say x̄₂), it is highly likely that the two sample means will have different values. This may seem worrying, because as we take more samples there will be more means (x̄₃, x̄₄, etc.). But the Central Limit Theorem provides a helping hand: it says that if larger samples are taken, the sample means will be closer to the actual population mean (μ), i.e. any two sample means may be different but they should still be close to the true mean.

The theorem also states what distribution the sample means will have: the distribution of sample means from any population tends to a normal distribution as the sample size grows. If the variability of the population is given by the standard deviation (σ), the variability in the sampling distribution is given by the standard error (σ_x̄), which is related to σ by

σ_x̄ = σ / √n

where n is the sample size. This variability in a sampling distribution is best illustrated through a histogram, which contains vertical bars of the sample statistic (say, the mean). For example, consider a population of 10,000 items with σ = 2.897, as shown in Figure 6.3(a). Now we randomly choose samples of size 2 and calculate the means. The resulting sampling distribution is shown in Figure 6.3(b); the standard error becomes 2.897/√2 = 2.048. If we further increase the size of the individual samples to 10 and 50, the standard error reduces to 2.897/√10 = 0.917 and 2.897/√50 = 0.41 respectively. Thus, the variance of the sampling distribution will be smaller than that of the population (σ²). Figures 6.3(c) and (d) show the sampling distributions of means for sample sizes 10 and 50 respectively. It is clear that as the sample size increases, the sampling distribution approximates the normal distribution curve and the variability decreases. Moreover, the reduction in variability is independent of the population; it depends only on the sample size.

Figure 6.3 : Distribution of Population and Sampling Distributions with Various Sample Sizes: (a) Histogram of Population; (b) n = 2; (c) n = 10; (d) n = 50

The unique features of this theorem are as follows :

(i) The immediate advantage of the central limit theorem is that there is no requirement to develop a separate statistical model for every non-normally distributed data set.

(ii) The distribution of sample means will approximate the normal distribution even when the parent population from which the samples are drawn is not normal.

(iii) Instead of measuring the entire population, which is a cumbersome process, one can observe the means of samples of size n and estimate the mean of the entire population.
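To see the theorem at work numerically, the following minimal Python sketch draws repeated samples from a population and compares the observed spread of the sample means with the predicted standard error σ/√n. The population and seed here are illustrative assumptions, not data from this unit.

    import random
    import statistics

    random.seed(1)

    # Illustrative population: 10,000 values spread uniformly on [0, 10].
    # Its standard deviation is 10/sqrt(12), approximately 2.89.
    population = [random.uniform(0, 10) for _ in range(10000)]
    sigma = statistics.pstdev(population)  # population standard deviation

    for n in (2, 10, 50):
        # Draw 1,000 random samples of size n and record each sample mean.
        means = [statistics.mean(random.sample(population, n)) for _ in range(1000)]
        observed = statistics.stdev(means)   # spread of the sample means
        predicted = sigma / n ** 0.5         # standard error sigma / sqrt(n)
        print(f"n={n:3d}  observed={observed:.3f}  predicted={predicted:.3f}")

As n grows, the observed spread of the means tracks σ/√n closely, mirroring Figure 6.3.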

SAQ 1

(a) The mean Grade Point Average (GPA) at a particular school is 2.89 with a standard deviation σ = 0.63. A random sample of 25 students is collected. Find the probability that the mean GPA for this sample is greater than 3.0.

(b) The weights of 1000 iron rods are found to have a standard deviation of 1.024. Calculate the standard error in the sample means if the sample size is 10, 50 or 100.

6.5 CONTROL CHARTS FOR VARIABLES

The development of control charts by Shewhart enables one to control and stabilize a process at the desired performance level and thus to bring the process under statistical control. A control chart consists of a central line corresponding to the desired standard, the level at which the process is to perform, and lines corresponding to the lower control limit (LCL) and upper control limit (UCL). These limits are chosen such that values falling between them can be attributed to chance variation, while values falling beyond them are attributed to a lack of statistical control. Thus, points outside the limits signal that something is wrong: an assignable cause. The assumption underlying control charts is that the measurement function (e.g. the mean) used to monitor the process parameter is distributed according to a normal distribution. This is evident from the central limit theorem, and the properties of a normal distribution hold for the sample statistic also. In particular, 99.73% of sample means fall within ±3 standard deviations measured outwards from the central value. These ±3σ lines define the control limits in the control chart. In other words, the mean of a random sample of size n will lie between the limits

μ ± 3σ/√n

The reason for adopting the 3σ limits as control limits is that they represent the natural variation in the process. It is clear that as the sample size n increases, the UCL and LCL move closer to the centerline, making the control chart more sensitive to shifts in the mean.

There are two types of control charts corresponding to the kind of measurement. In one, the observations are measured quantities; in the other, they are counts of the number of defectives in a sample. The former are called control charts for variables, because they deal with variables that can be measured physically. The latter are termed control charts for attributes. In this section, we shall see how control charts for variables are used for statistical process control. Two quantities, viz. the average quality of a process and its variability, should be controlled while dealing with measurements. To control the average quality, the means of periodic samples are plotted on the control chart for means, denoted the X̄ (X-bar) chart. The variability is controlled by plotting the sample range on an R-chart or the standard deviation on an s-chart.

6.5.1 The X-bar and R-charts

The following procedure should be applied for generating control charts for variable data.

(i) Record the measurement of k samples of size n. Typically n is 5 and k is at least 20.

(ii) For each of the k samples, record the mean x̄_i.

(iii) Find the range of each sample: R_i = largest − smallest value of the sample, i = 1, . . ., k.

(iv) Calculate the overall average X̿ and the average range R̄ as:

X̿ = (x̄₁ + x̄₂ + . . . + x̄_k)/k . . . (6.4)

R̄ = (R₁ + R₂ + . . . + R_k)/k . . . (6.5)

(v) Calculate the lower and upper control limits (LCL and UCL) for X̄ as:

LCL = X̿ − A2 R̄ . . . (6.6)

UCL = X̿ + A2 R̄ . . . (6.7)

The values of A2 depend on the sample size n and are given in Table 6.1.

(vi) Plot the X̄-chart with the LCL, UCL, centerline and subgroup means (x̄_i). The centerline has the magnitude of the overall average X̿ and represents the desired average.

(vii) Calculate the lower and upper control limits (LCL and UCL) for R as:

LCL = D3 R̄ . . . (6.8)

UCL = D4 R̄ . . . (6.9)

The values of D3 and D4 depend on the sample size n and are given in Table 6.1.

(viii) Plot the R-chart with the LCL, UCL, centerline and subgroup ranges (R_i). The centerline has the magnitude of the average range R̄ and represents a reliable estimate of the range.

(ix) The process is said to be in statistical control if all subgroup means (x̄_i) and ranges (R_i) fall within the respective control limits. Finally, the process standard deviation can be estimated using:

σ̂ = R̄ / d2 (d2 obtained from Table 6.1) . . . (6.10)

Table 6.1 : Coefficients for Variables Charts (X-bar and R-charts)

n | A2    | D3 | D4    | d2
2 | 1.880 | 0  | 3.267 | 1.128
3 | 1.023 | 0  | 2.575 | 1.693
4 | 0.729 | 0  | 2.282 | 2.059
5 | 0.577 | 0  | 2.114 | 2.326

Suppose we want to construct the X-bar and R-charts for 20 subgroups of measurements of the diameter of a piston head, each subgroup containing three observations. The ranges of the 20 subgroups are obtained as:

0.0004, 0.0005, 0.0007, 0.0007, 0.0001, 0.0004, 0, 0.0001, 0.0006, 0.0006, 0.0003, 0.0007, 0.0003, 0.0004, 0.0007, 0.0010, 0.0006, 0.0001, 0.0004, 0.0008.

From Eqs. (6.4) and (6.5), X̿ = 2.0000 and R̄ = 0.0005. From Table 6.1, corresponding to n = 3, we have A2 = 1.023, D3 = 0 and D4 = 2.575. Hence, the X-bar chart limits are obtained from Eqs. (6.6) and (6.7) as:

LCL = X̿ − A2 R̄ = 2.0000 − 1.023 × 0.0005 = 1.99949

UCL = X̿ + A2 R̄ = 2.0000 + 1.023 × 0.0005 = 2.00051

The X-bar chart is constructed with centerline X̿ = 2.0000, as shown in Figure 6.4. The R-chart limits are obtained from Eqs. (6.8) and (6.9) as:

LCL = D3 R̄ = 0 × 0.0005 = 0

UCL = D4 R̄ = 2.575 × 0.0005 = 0.0012875

The R-chart is constructed with centerline R̄ = 0.0005, as shown in Figure 6.5.
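The procedure above is mechanical and easy to script. The following minimal Python sketch computes the limits of Eqs. (6.4) to (6.9) for subgroups of size n = 3; the three subgroups shown are illustrative, not the piston-head data.

    # Control limits for X-bar and R-charts (Eqs. (6.4)-(6.9)), n = 3.
    subgroups = [
        [2.0001, 1.9998, 2.0002],
        [1.9996, 2.0003, 2.0001],
        [2.0004, 1.9997, 2.0000],
    ]
    A2, D3, D4 = 1.023, 0.0, 2.575  # Table 6.1 constants for n = 3

    xbars = [sum(s) / len(s) for s in subgroups]    # subgroup means
    ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)               # overall average, Eq. (6.4)
    rbar = sum(ranges) / len(ranges)                # average range, Eq. (6.5)

    print("X-bar chart:", xbarbar - A2 * rbar, xbarbar + A2 * rbar)  # Eqs. (6.6)-(6.7)
    print("R-chart:   ", D3 * rbar, D4 * rbar)                       # Eqs. (6.8)-(6.9)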

Figure 6.4 : X-bar Chart


Figure 6.5 : R-chart

6.5.2 The X-bar and s-charts

The s-chart is a plot of the standard deviation of a process taken at regular intervals. The standard deviation is a measure of the variability of a process, so the plot indicates whether there is any systematic change in the process variability. The sample standard deviation is a more efficient indicator of process variability, especially with larger sample sizes. The standard deviation of a sample is calculated as

s = √( Σ (x_i − x̄)² / (n − 1) ) . . . (6.11)

The lower and upper control limits (LCL and UCL) for the x̄-chart are calculated as:

LCL = X̿ − A3 s̄ . . . (6.12)

UCL = X̿ + A3 s̄ . . . (6.13)

where,

s̄ = (s₁ + s₂ + . . . + s_k)/k . . . (6.14)

The control limits for the s-chart are given by:

LCL = B3 s̄ . . . (6.15)

UCL = B4 s̄ . . . (6.16)

The values of A3, B3 and B4 depend on the sample size n and are given in Table 6.2.

Table 6.2 : Coefficients for Variables Charts (X-bar and s-charts)

n | A3    | B3 | B4
2 | 2.659 | 0  | 3.267
3 | 1.954 | 0  | 2.568
4 | 1.628 | 0  | 2.266
5 | 1.427 | 0  | 2.089
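A brief Python sketch of the corresponding x̄- and s-chart limit calculation, again with illustrative subgroups (here of size n = 5, using the Table 6.2 constants):

    import statistics

    # Illustrative subgroups of size n = 5.
    subgroups = [[9.8, 10.1, 10.0, 9.9, 10.2], [10.0, 10.3, 9.7, 10.1, 9.9]]
    A3, B3, B4 = 1.427, 0.0, 2.089  # Table 6.2 constants for n = 5

    sbar = statistics.mean(statistics.stdev(s) for s in subgroups)     # Eq. (6.14)
    xbarbar = statistics.mean(statistics.mean(s) for s in subgroups)

    print("x-bar limits:", xbarbar - A3 * sbar, xbarbar + A3 * sbar)   # Eqs. (6.12)-(6.13)
    print("s limits:    ", B3 * sbar, B4 * sbar)                       # Eqs. (6.15)-(6.16)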

6.5.3 Moving Mean/Range Charts

A moving mean chart is formed by calculating the means of a characteristic over a specified number of periods. A pseudo-sample is created that contains some new points and some old points; for example, a pseudo-sample of size 3 consists of one new and two old values. After a cycle is completed, the oldest value in the pseudo-sample is discarded and a new value is picked up to calculate the mean. If the cyclical nature of the process is upset, the new points added will be substantially different, causing out-of-control points. The sample size is determined by the time elapsed between two readings: if the time is small, the size should be large, and vice-versa.

The calculation of the moving range chart is similar. Once the moving means and ranges are obtained, the upper and lower control limits for the mean and range can be calculated as usual. The moving mean and range charts are generally used for detecting small shifts in the process mean. They will detect shifts of 0.5σ to 2σ faster than Shewhart charts with the same sample size. They are, however, slower in detecting large shifts in the process mean. Moving mean charts may also be preferred when the subgroups are of size n = 1. The moving mean chart is best used for continuous processes where only one value of the quality characteristic is available at a time. Many chemical processes, such as the production of oils, paints, etc., fit into this categorization.

Consider the closing prices of XYZ company's stock as shown in Table 6.3. The second column in the table shows the closing prices recorded at the end of each day. If one wants to calculate a 10-day simple moving mean, day 10 is the first day for which such a value can be calculated. Thus, the first 10-day mean is the average of the closing prices of days 1 through 10.

Table 6.3 : Daily Prices and Moving Mean

Day | Daily close | 10-day moving mean

Figure 6.6 : Plotting the Individual Points and Moving Average

The averaging process then moves on to the next day: the 10-day moving mean for day 11 is calculated by adding the prices of day 2 through day 11 and dividing by 10. The third column in Table 6.3 tabulates the moving mean from day 10 to day 20. Once this tabulation is complete, these points are plotted versus time as shown in Figure 6.6. Generally, the moving mean plot is smoother than the plot of individual points. In Figure 6.6 the upper smooth line corresponds to the moving mean, whereas the lower line corresponds to the individual points. The figure shows that the moving mean is a lagging indicator and will always be behind the price. The price of XYZ company is trending down, but the moving mean, which is based on the previous 10 days of data, remains above the price. If the price were rising, the moving mean would be below it.
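Since the Table 6.3 prices are not reproduced here, the Python sketch below uses an assumed downward-trending price series to show how the 10-day moving means of Figure 6.6 would be computed.

    def moving_means(values, window=10):
        """Return simple moving means; the first mean is available
        only once `window` values have been observed."""
        return [sum(values[i - window:i]) / window
                for i in range(window, len(values) + 1)]

    # Illustrative closing prices for 20 days (trending down).
    closes = [67, 66, 66, 65, 64, 65, 63, 62, 62, 61,
              60, 59, 59, 58, 57, 56, 56, 55, 54, 54]
    print(moving_means(closes))  # first entry is the day-10 mean

Because each mean averages the previous 10 days, every printed value sits above the current falling price, which is exactly the lag seen in Figure 6.6.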

6.6 CONTROL CHARTS FOR ATTRIBUTES

The control charts for attributes are essential for those quality characteristics that cannot be measured physically. Such characteristics can, however, be classified into one of two classes, usually defective or non-defective. For example, the presence of holes in an aluminum sheet classifies the item as defective. When the quality characteristic of an item is measured by attribute, a control chart based on the fraction defective can be used. For example, in a process running with a 2% defect rate, a mean of 2 defects per sample requires a sample of at least 100 for recording purposes. The calculation of control limits for attributes depends on two factors.

(a) Whether numbers (for constant size) or proportions (for varying sample sizes) are being plotted.

(b) Whether defective items or defects are being considered.

An item is considered to be defective if it fails to meet a required standard due to the presence of defects. Thus, there are two types of control charts for defective items, based on whether the sample size varies (p-chart) or is constant (np-chart). Similarly, based on the presence of defects, charts are constructed for varying sample size (u-chart) and constant sample size (c-chart). In this section, the construction of the above-mentioned charts and their use in SPC shall be discussed.

6.6.1 The p-chart

The p-chart is used for process control of defectives when it is not possible to take samples of constant size because the sample sizes vary with time. Typical examples include the arrival of mail in a post office without the pin code number, scratches in a glass sheet, holes in a casting, etc. In such situations, the control limits are drawn for the average fraction defective (p̄). Before establishing the control limits, it is essential to have data for at least 25 subgroups. The first step in setting up this control chart is the calculation of the fraction defective for each subgroup. This is given as:

p = Number of defectives in subgroup / Number inspected in subgroup . . . (6.17)

This enables one to calculate the average fraction defective

p̄ = Total number of defectives during period / Total number of items inspected during period . . . (6.18)

The lower and upper control limits for the p-chart are given as

LCL = p̄ − 3√( p̄(1 − p̄)/n̄ ) . . . (6.19)

UCL = p̄ + 3√( p̄(1 − p̄)/n̄ ) . . . (6.20)

Here, n̄ is the average of all sample sizes inspected during the given period. If the calculation of the lower control limit yields a negative quantity, the LCL is treated as zero, because a negative limit on fraction defective does not make sense. The p-chart is now constructed with central line p̄, and each subgroup fraction defective (p) is plotted to see whether it lies within the LCL and UCL. Those fraction defectives lying outside the control limits are eliminated, and the value of p̄ and the control limits are recalculated. For example, consider the number of defectives found in 20 samples as shown in Table 6.4. The sample size is 50.

Table 6.4 : Defectives (d) Found in 20 Samples

First, the fraction defective in each sample or subgroup is found from Eq. (6.17). The first fraction defective is 2/50 = 0.04, and the remaining values follow in the same way.

The total number of defectives in these 20 samples is 40, i.e. the sum of all values in second row of Table 6.4. Hence,

p̄ = Total number of defectives during period / Total number of items inspected during period = 40/(20 × 50) = 0.04

From Eqs. (6.19) and (6.20),

LCL = 0.04 − 3√(0.04 × (1 − 0.04)/50) = −0.043, treated as 0, and UCL = 0.123.

It is observed that the fraction defective of the 17th point is 0.14, which is greater than the UCL. Hence this subgroup, having 7 defectives, is discarded, and

p̄ = (40 − 7)/(19 × 50) = 0.0347

Thus the new limits are

LCL = 0.0347 − 3√(0.0347 × (1 − 0.0347)/50) = −0.043, treated as 0, and UCL = 0.112.
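The p-chart calculation can be sketched in Python as follows. The list of per-sample defectives is illustrative, since Table 6.4 is not reproduced here; it is chosen to be consistent with the totals quoted above (40 defectives overall, 2 in the first sample, 7 in the 17th).

    n = 50  # constant sample size in this example
    defectives = [2, 1, 3, 1, 2, 2, 1, 3, 2, 1,
                  2, 3, 1, 2, 2, 2, 7, 1, 1, 1]  # illustrative, sums to 40

    pbar = sum(defectives) / (len(defectives) * n)             # Eq. (6.18): 0.04
    half_width = 3 * (pbar * (1 - pbar) / n) ** 0.5
    lcl, ucl = max(0.0, pbar - half_width), pbar + half_width  # Eqs. (6.19)-(6.20)

    out = [i + 1 for i, d in enumerate(defectives) if not lcl <= d / n <= ucl]
    print(f"p-bar={pbar:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  out of control: {out}")

Running this flags sample 17 (fraction defective 0.14), after which p̄ and the limits would be recalculated without it, as done above.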

6.6.2 The np-chart

The np-chart is similar to the p-chart, the main difference being that the sample size is constant. The data values plotted are the actual numbers of defectives per sample or subgroup, denoted by np. The control limits are given by

LCL = np̄ − 3√( np̄ (1 − p̄) ) . . . (6.21)

UCL = np̄ + 3√( np̄ (1 − p̄) ) . . . (6.22)

where np̄ is the grand average number of defectives per sample

np̄ = Total number of defectives / Number of samples inspected . . . (6.23)

and p̄ = np̄ / n.

6.6.3 The u-chart

The u-chart is used for process control of defects when it is not possible to take samples of constant size. This chart puts a check on the proportion of defects per item. Hence, unlike the p-chart, it deals with the number of nonconformities per item within the sample, for example, the number of holes in a casting. If there are more than a specified number of holes in the cast product, the product is termed unfit. The control limits are given by

LCL = ū − 3√( ū / n̄ ) . . . (6.24)

UCL = ū + 3√( ū / n̄ ) . . . (6.25)

where n̄ is the average of all sample sizes and ū is given by

ū = Total number of defects / Total number of items inspected . . . (6.26)

6.6.4 The c-chart

The c-chart is used for process control of defects when samples of constant size can be taken. The data plotted on the chart are the number of defects c in each sample. The control limits are given by

LCL = c̄ − 3√c̄ . . . (6.27)

UCL = c̄ + 3√c̄ . . . (6.28)

The average number of defects c̄ is calculated by

c̄ = Total number of defects / Total number of samples inspected . . . (6.29)
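A minimal Python sketch of the c-chart limits of Eqs. (6.27) to (6.29), with an assumed series of per-sample defect counts:

    # c-chart: defect counts per constant-size sample (illustrative data).
    defects = [3, 5, 2, 4, 6, 3, 2, 5, 4, 3]

    cbar = sum(defects) / len(defects)        # Eq. (6.29)
    lcl = max(0.0, cbar - 3 * cbar ** 0.5)    # Eq. (6.27), floored at zero
    ucl = cbar + 3 * cbar ** 0.5              # Eq. (6.28)
    print(f"c-bar={cbar:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")

The u-chart calculation differs only in dividing the defect counts by the items inspected and using √(ū/n̄) for the limit half-width.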

6.7 CUSUM CHART

A CUSUM chart is a control chart for variables data that plots the cumulative sum of the deviations from a target over a definite interval. A CUSUM chart requires only a pre-specified target value and a sequence of readings. These readings could be individual values or sample statistics (mean, range). The deviations of these readings from the target are added cumulatively and the sums are plotted sequentially. When these sums are plotted against time, a CUSUM line is generated which gives information about the trend in the process parameter. If the process is running on target, the line is horizontal, whereas the presence of a slope indicates a change in the mean level of performance. These charts are ideal for detecting small shifts (0.5 sigma to 2 sigma) in the process mean. They detect such shifts in about half the time of Shewhart charts with the same sample size.
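The construction of the CUSUM line is just a running sum; a minimal Python sketch with an assumed target and readings:

    # Cumulative sums of deviations from a pre-specified target.
    target = 10.0
    readings = [10.1, 9.9, 10.0, 10.2, 10.4, 10.5, 10.6, 10.4, 10.7]

    cusum, running = [], 0.0
    for x in readings:
        running += x - target   # accumulate the deviation from target
        cusum.append(running)

    print(cusum)  # a sustained upward slope signals a shift in the process mean

For an on-target process the accumulated values wander around zero; here the later readings pull the line steadily upwards.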

The CUSUM chart enables one to identify the point at which a shift occurs by determining the slope. The slope is the ratio of the vertical distance on the CUSUM axis to the horizontal distance between the sample intervals. Thus, in Figure 6.7(a) the slope at point A is 0.2/4 = 0.05 and at point B it is −0.18/2 = −0.09. The interval is chosen such that any genuine shift in the process level can be detected easily. If the measured parameter is the process mean then, by the central limit theorem, any deviation in the process level is given by

σ_x̄ = R̄ / (d2 √n)

where R̄ and d2 are obtained from Table 6.1 and n is the sample size. Twice this value is taken as the interval in measuring slope in the CUSUM chart. A slope guide is another tool for comparison of the slope at any point in the CUSUM chart. It uses a series of predetermined slopes as shown in Figure 6.7(b).

Figure 6.7 : (a) Measuring the Slope in a CUSUM Chart; (b) Slope Guide

The V-mask is a template in the shape of a truncated V, as shown in Figure 6.8, that enables determination of significant slopes. The arms of the V-mask are called the decision lines; their steepness can vary and is denoted by F. The width of the nose can also vary; H denotes half of this width. The values of F and H are determined on the basis of the number of points required to detect a change in the process level. This number is called the average run length (ARL) and represents the degree of sensitivity required. For example, ARL = 7 means that the sensitivity of the mask is high enough to detect a change in the process level within 7 sample points on average. The mask is placed on the CUSUM chart horizontally, with the notch of the mask on the last reading of a time sequence on the CUSUM line. If any of the points lie outside the arms of the V-mask, then a significant shift in the process level has occurred.

Figure 6.8 : The V-mask

6.8 PRE-CONTROL FOR QUALITY CONTROL

The concept of pre-control was conceived by the US consulting group Rath & Strong. It is a heavily tolerance-based system and is extremely simple. It is generally used for comparing a product made against tolerance limits. The basic assumption in the use of pre-control charting in SPC is that the process is capable of meeting the specifications demanded by the product. Thus, the specification serves as the control limits. As shown in Figure 6.9, the middle half (zone 3) of the pre-control chart is defined as the green zone. The two areas outside the green zone but within the specification limits (zones 2 and 4) are called the yellow zones. The two areas (zones 1 and 5) beyond the specification limits are called the red zones.

Figure 6.9 : Pre-control Chart

To make use of a pre-control chart, a sample of 5 consecutive units is taken from the process. If all 5 fall within the green zone, the process is in control, and full production can commence. If even one of the units falls outside the green zone, some variability is present and needs to be investigated and eliminated before full production starts. Once full production starts, a sample of 2 consecutive units is taken from the process periodically. If even one of the units falls in the red zone, or if both units fall in the yellow zones, production must stop. The cause of this variation must be eliminated and the process capability determined again.
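A minimal Python sketch of the zone classification and the set-up rule; the function name, specification limits and data are illustrative assumptions:

    def zone(x, lsl, usl):
        """Classify a measurement into pre-control zones (Figure 6.9)."""
        mid, quarter = (lsl + usl) / 2, (usl - lsl) / 4
        if x < lsl or x > usl:
            return "red"                        # zones 1 and 5
        if mid - quarter <= x <= mid + quarter:
            return "green"                      # middle half of the tolerance band
        return "yellow"                         # zones 2 and 4

    setup = [0.71, 0.68, 0.72, 0.70, 0.69]  # 5 consecutive set-up units (assumed)
    ok = all(zone(x, 0.5, 0.9) == "green" for x in setup)
    print("start full production" if ok else "investigate variability")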

The objective of preparing a pre-control chart is to identify the process capability. Process capability is needed to assess whether the customer's expectations can be met. The capability is expressed in the form of a relationship between the specified tolerance and the process variability. The specified tolerance is the deviation allowed from the target. The process capability can be defined as the ratio of the specification width to the process width. If the process mean varies within ±3σ limits, the process has a width of 6σ. The specification width is the difference between the upper and lower specification limits.

Cp = (USL − LSL) / 6σ . . . (6.30)

Cpk = (USL − X̿) / 3σ, when X̿ is above nominal . . . (6.31)

Cpk = (X̿ − LSL) / 3σ, when X̿ is below nominal . . . (6.32)

For example, if the mean of a process is 0.738 with a standard deviation σ = 0.725, and USL and LSL are respectively equal to 0.9 and 0.5, then Cp = (0.9 − 0.5)/(6 × 0.725) = 0.092.
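A short Python sketch of Eqs. (6.30) to (6.32), using the example figures above; the min() form of Cpk shown is the usual shorthand for the two one-sided expressions:

    # Process capability per Eqs. (6.30)-(6.32).
    mean, sigma, usl, lsl = 0.738, 0.725, 0.9, 0.5

    cp = (usl - lsl) / (6 * sigma)                   # Eq. (6.30): 0.092
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # worst of Eqs. (6.31)-(6.32)
    print(f"Cp={cp:.3f}  Cpk={cpk:.3f}")

Both indices come out far below 1 here, signalling a process whose natural 6σ width greatly exceeds the specification width.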

6.9 OFF-LINE QUALITY CONTROL

Off-line quality control refers to the techniques used before the actual process of production starts. The uniqueness of this method is that it builds quality into the product or process as early as the design stage. Applying quality control methods in the early stages of product development has a greater influence on improvement, and hence reduces cost and time. This approach eliminates the need for mass inspection, a wasteful activity that is a feature of on-line quality control methods. Off-line quality control works in tandem with on-line quality control. In the manufacturing industry, on-line quality control methods involve techniques used on the shop floor, such as real-time methods for monitoring and maintaining quality in production, whereas off-line quality control uses small-scale experiments to reduce variability and find cost-effective, robust designs for large-scale production and the marketplace. Hence this method is also called Robust Engineering.

When off-line quality control methods are incorporated in areas such as Research and Development, the benefits apply to a family of present and future products and processes that make use of the research.

6.10 TAGUCHI METHOD

Taguchi methods were developed by Dr. Genichi Taguchi after World War II. In sharp contrast with statistical quality methods, in which process quality is monitored on-line, i.e. during production, Taguchi's off-line methods focus on design and attempt to determine the best combination of design parameters that results in superior performance of the product. Taguchi calls his design approach a parametric design approach. It involves choosing the combination of design parameters that maximizes the performance of the product. This stage is often neglected in industrial design practice.

6.10.1 Loss Function

Taguchi's concept of the loss function is focused on the loss faced by the customer. According to Taguchi, the customer suffers a loss not only when a product characteristic is outside specification but whenever it deviates from its target value. Out-of-specification is the common measure of quality loss: it implies that products that meet specifications are good and all others are unacceptable. However, this does not reflect the customer's experience: for the customer, a product that barely meets specification is scarcely better than one that is barely out of specification. For example, if one has to choose 1000 steel tubes of 20 mm, it is important that the sizes of the tubes are normally distributed with the target centered on 20 mm, even though some sizes may cross the specified limits.

The loss function combines cost, target and variation into one entity. The losses include failure to meet customer requirements. Taguchi developed many loss functions, among which the quadratic function is the most common. This loss function is also called nominal-the-best. The loss (L) when the performance (Y) of a product deviates from the target (τ) is

L = k (Y − τ)² . . . (6.33)

where

k = A / Δ² . . . (6.34)

Here, Δ is the specified tolerance and A is the loss to the customer at the tolerance limit. As illustrated in Figure 6.10, the loss to the customer is proportional to the square of the deviation of the parameter from its target value, and the least value of the loss is zero, attained when the performance characteristic matches the target. Hence, this loss function is called nominal-the-best.

Figure 6.10 : Taguchi's Loss Function

There are two other loss functions that are quite common: one is smaller-the-better and the other larger-the-better. The smaller-the-better loss function is given by

L = k Y² . . . (6.35) where k = A / Δ²

As seen in Figure 6.11(a), the target value for smaller-the-better is zero and there are no negative values for the performance characteristic. Examples of such performance characteristics are pollution from an automobile, out-of-roundness of a hole, etc. The third loss function is larger-the-better. This function becomes applicable in cases such as mileage, strength of a weld, etc. Figure 6.11(b) shows that zero loss is obtained for a target value of infinity. The loss function is given by

L = k / Y² . . . (6.36) where k = A Δ²

Figure 6.11 : (a) Smaller-the-better; (b) Larger-the-better (loss plotted against the performance characteristic Y)
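The three loss functions can be sketched in Python as follows; the values of A, Δ and the target are assumed purely for illustration:

    A, delta, target = 50.0, 0.5, 20.0  # assumed: loss at tolerance limit, tolerance, target

    def loss_nominal(y):
        # Nominal-the-best, Eqs. (6.33)-(6.34): L = (A / delta^2) (y - target)^2
        return (A / delta ** 2) * (y - target) ** 2

    def loss_smaller(y):
        # Smaller-the-better, Eq. (6.35)
        return (A / delta ** 2) * y ** 2

    def loss_larger(y):
        # Larger-the-better, Eq. (6.36)
        return (A * delta ** 2) / y ** 2

    print(loss_nominal(20.2))  # a 0.2 deviation from target costs 8.0

Note that even a product within tolerance (deviation 0.2 < Δ = 0.5) carries a non-zero loss, which is precisely Taguchi's point.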


6.10.2 Parameter Design

Parameter design by the Taguchi method is an off-line quality control method for making a process robust against sources of variation and hence improving performance. Taguchi recognized that the factors affecting a product's functional characteristics can be grouped into control factors and noise (or uncontrollable) factors. Control factors are known factors (or variables) that can be easily controlled by the designer. For example, in a plant, the process engineer can easily adjust variables like pressure and temperature to produce a chemical.

On the other hand, noise factors are variables such as environmental variables, deterioration, and manufacturing variations that are difficult or expensive to control. If noise factors were eliminated, the process could always be maintained at its nominal value. However, Taguchi proposed that instead of finding and eliminating noise factors, the impact of the noise factors should be reduced. The designer therefore has to identify the components of a design that most influence the desired outcome of the design process. This is a very cost-effective technique for improving product quality, and it is realized by selecting the optimal combination of factor levels that makes the process least sensitive to the effects of the noise factors.

Taguchi's method for identifying settings of design parameters that maximize performance characteristics (e.g. yield or productivity etc.) is summarized below.

(i) Identify initial settings of design factors (parameters), and identify important noise factors and their ranges.

(ii) Construct design and noise matrices, and plan the parameter design experiments. Two levels of each design and noise factor are used to construct the matrices. The usual rule is to set one level as low, coded as 1, and the other as high, coded as 2. The procedure is to alter each design parameter and noise factor to its highest and lowest selected value and obtain the result of each combination. The set of combinations of factors that gives the best result is selected as the design parameters of the final product. For example, suppose that 3 factors in a process can be controlled. This gives rise to 8 possible combinations. For the purpose of experimentation, suppose that the noise factors can also be controlled and changed. With two noise factors, a total of 4 combinations is constructed.

Thus a total of 8 × 4 = 32 different experiments can be conducted. These 32 experiments allow a designer to evaluate the effect of the noise factors on each controllable factor setting and determine the setting that minimizes the variability. A sketch of this enumeration is given after the procedure below.

(iii) Conduct the parameter design experiments and obtain an observation at each setting. The next step is to evaluate the observations (the performance of the system) against some performance measure.
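The full crossing of the design and noise matrices mentioned in step (ii) can be enumerated directly; a minimal Python sketch:

    from itertools import product

    # Full factorial crossing of a 3-factor design matrix (2 levels each)
    # with a 2-factor noise matrix: 8 x 4 = 32 experimental runs.
    design_runs = list(product((1, 2), repeat=3))  # 8 control-factor settings
    noise_runs = list(product((1, 2), repeat=2))   # 4 noise conditions

    experiments = [(d, n) for d in design_runs for n in noise_runs]
    print(len(experiments))  # 32

Each design setting is thus exposed to every noise condition, which is what allows the variability at each setting to be assessed.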

6.10.3 Performance Measure

The signal-to-noise (S/N) ratio is an ideal metric for the purpose of performance measurement. The signal represents the amount of energy used for the intended function and the noise represents the amount of energy wasted. The signal factors (such as the mean ȳ of a characteristic) are set by the designer to obtain the intended value of the response variable. Noise factors are not under control or are expensive to control. The variance (s²) is used to measure noise. For a loss function of the nominal-the-best type the signal-to-noise ratio is given by

S/N = 10 log₁₀ ( ȳ²/s² − 1/n ) . . . (6.37)

where n is the number of observations. The unit of measurement of the S/N ratio is the decibel (dB).

The S/N ratio for a loss function of the smaller-the-better type is

S/N = −10 log₁₀ ( (1/n) Σ yᵢ² ) . . . (6.38)

The S/N ratio for a loss function of the larger-the-better type is

S/N = −10 log₁₀ ( (1/n) Σ (1/yᵢ²) ) . . . (6.39)

Consider the following example. A food-packing producer is comparing the calorie content of the original process with a new process. Which process has the lower content and what is the difference? The results are:

Original 130 135 128 127

Light 115 112 120 113

The S/N ratio of the smaller-the-better type should be used to compare the calorie content. For the original process

S/N = −10 log₁₀ [ (130² + 135² + 128² + 127²)/4 ] = −42.28 dB

and for the new process

S/N = −10 log₁₀ [ (115² + 112² + 120² + 113²)/4 ] = −41.22 dB

The difference is | −41.22 − (−42.28) | = 1.06 dB. This indicates that the new process packs fewer calories.
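A Python sketch of this smaller-the-better comparison (Eq. (6.38)), using the data of the worked example:

    import math

    def sn_smaller_the_better(ys):
        """S/N = -10 log10( (1/n) * sum(y^2) )  -- Eq. (6.38)."""
        return -10 * math.log10(sum(y * y for y in ys) / len(ys))

    original = [130, 135, 128, 127]
    light = [115, 112, 120, 113]

    sn_orig = sn_smaller_the_better(original)
    sn_new = sn_smaller_the_better(light)
    print(round(sn_orig, 2), round(sn_new, 2), round(abs(sn_new - sn_orig), 2))
    # -42.28  -41.22  1.06  -> the higher S/N (new process) packs fewer calories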

6.10.4 Taguchi's Tolerance Design

Tolerance design is the selective tightening of tolerances to eliminate excessive variation. It uses the analysis of variance (ANOVA) to determine which factors contribute to the total variability, and the loss function to trade off quality against cost. The method of determining the percent contribution of factors to the total variability is illustrated via an example.

ABC Industries produces iron tubes by a casting process. However, the castings require extra grinding to machine off excess material. Seven factors were found to influence the casting.

(i) Sand compactness

(ii) Iron temperature

(iii) Clay addition


(iv) Mold hardness

(v) Mulling time

(vi) Seacoal addition

(vii) Sand addition

In order to determine which factors contributed most, two levels of each factor were considered for experimentation. 8 treatment conditions (TC) were taken, as shown in Table 6.5, and the percentage of castings that required finish grinding is shown in the last column; i.e., for TC 1, with all seven factors at level 1, 89% of castings required grinding.

Table 6.5 : Treatment Condition for Iron Castings

Step 1

The sum of squares for each factor is calculated by

SS = (Sum of first-level responses)²/(Number of first-level responses) + (Sum of second-level responses)²/(Number of second-level responses) − (Sum of all responses)²/(Total number of responses)

For factor B, whose level sums are 243 and 203,

SS_B = (89 + 55 + 83 + 16)²/4 + (38 + 44 + 66 + 55)²/4 − (446)²/8 = 200

The same calculation for factor A gives SS_A = 4.5. Similarly, the sums of squares for the other factors are obtained as SS_C = 882, SS_D = 1404.5, SS_E = 312.5, SS_F = 1152 and SS_G = 32.

Step 2

Calculate the mean square for each factor as

MS = SS / df

where df = degrees of freedom = number of levels − 1 = 2 − 1 = 1.

Step 3

Tabulate the SS and MS in the ANOVA table as shown in Table 6.6.

Step 4

Calculate F using the pooling technique. The pooling-up procedure is to test the factor with the smallest SS against the next largest. If it is not significant, the SS is pooled and tested against the next largest, and the process is continued until a factor is found significant or one half of the total degrees of freedom has been used. The calculation at the first step is done with A and G:

F = MS_G / MS_A = 32 / 4.5 = 7.11

This value is compared against the critical F value obtained from Table 6.7, corresponding to the degrees of freedom of the numerator and denominator. The critical value of F for A and G (1 and 1 degrees of freedom) is 161. Hence, factor A is not significant and is pooled with G: MS_(A+G) = (4.5 + 32)/2 = 18.25. The next largest factor is B:

F = MS_B / MS_(A+G) = 200 / 18.25 = 10.96

From the F-table the critical value for 1 and 2 degrees of freedom is 18.5, which is greater than the calculated value, so factor B is also not significant. Factor B, when pooled with A and G, gives

MS_(A+G+B) = (4.5 + 32 + 200)/3 = 78.8

Since half the total degrees of freedom (the total is 7) has now been used, the pooling process stops. The F values for the remaining factors C, D, E and F are obtained by dividing the MS of each factor by the last pooled MS (of factors A, G and B). Hence,

F_C = 882 / 78.8 = 11.2

and similarly F_D = 17.8, F_E = 4.0 and F_F = 14.6. Since the critical value corresponding to 1 and 3 degrees of freedom is 10.1, factors C, D and F are significant at 95% confidence and E is not.

Step 5

Calculate SS′, the pure sum of squares, by subtracting the pooled error from the SS. For factor C,

SS′_C = SS_C − (MS_(A+G+B) × df_C) = 882 − 78.8 × 1 = 803.2


Step 6

Calculate the percent contribution by dividing the SS′ of each factor by the total SS. From Table 6.6, it is observed that factor D contributes the most. The contribution of the pooled factors is 13.8%, which is satisfactory. If the pooled error had been around 40%, it would indicate the omission of important factors or inadequate measurement.

Table 6.6 : ANOVA Table

Factor             | SS     | df | MS     | F    | SS′    | % contribution
C                  | 882    | 1  | 882    | 11.2 | 803.2  | 20.1
D                  | 1404.5 | 1  | 1404.5 | 17.8 | 1325.7 | 33.2
E                  | 312.5  | 1  | 312.5  | 4.0  | 233.7  | 5.9
F                  | 1152   | 1  | 1152   | 14.6 | 1073.2 | 26.9
Pooled (A + G + B) | 236.5  | 3  | 78.8   |      | 551.7  | 13.8
Total              | 3987.5 | 7  |        |      | 3987.5 | 99.9

Table 6.7 : F-table for Significance = 0.05 (Confidence = 95%) for Small Samples

df of denominator | Critical F (numerator df = 1)
1                 | 161
2                 | 18.5
3                 | 10.1
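The F values and percent contributions of Table 6.6 can be reproduced with a short Python script; the pooled figures are those computed in Steps 1 to 4 above:

    # Percent contribution per Steps 1-6, using the sums of squares above.
    ss = {"C": 882.0, "D": 1404.5, "E": 312.5, "F": 1152.0}
    ss_pooled, df_pooled = 236.5, 3       # factors A, G and B pooled
    ms_pooled = ss_pooled / df_pooled     # 78.8 (to three figures)

    total = sum(ss.values()) + ss_pooled  # 3987.5
    for factor, s in ss.items():
        pure = s - ms_pooled * 1          # SS' = SS - MS_pooled * df (df = 1)
        print(f"{factor}: F={s / ms_pooled:5.1f}  contribution={100 * pure / total:5.1f}%")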

6.11 COMPARISON OF TAGUCHI AND DEMING APPROACH

The basic difference between Taguchi's and Deming's philosophies of quality improvement lies in the way of representing loss. Deming acknowledged that variation in a process exists and cannot be eliminated completely due to the presence of unknown factors. Taguchi wanted to find a useful way of representing the loss caused by these unknown factors with statistics. Hence, he proposed the loss function and showed that any item manufactured away from nominal results in some loss to the customer or the wider community. These losses could come through early wear-out, difficulties in interfacing with other parts, etc. In other words, Taguchi stressed the need to produce an outcome on target (for example, to machine a hole to a specified diameter).

The approach of Deming towards quality was focused on deriving maximum benefit from the existing resources. The 14 points of Deming set the basis for management for the improvement of quality, productivity and competitive position. He advocated a uniform policy on quality that must be practiced by all associated with an organization. Deming's teaching asks for coordinated efforts among the various layers of an organization to fulfill one common objective, i.e. customer satisfaction. Taguchi, on the other hand, argued that quality engineering should start with the elimination of variation from the design stage onwards. Hence, he considered the prospect of using design of experiments to understand the influence that parameters have on variation.

SAQ 2

(a) Determine the S/N ratio (nominal-the-best) for a process that has an average temperature of 28°C and a sample standard deviation of 3°C for 5 observations.

(b) What are the differences between Deming's and Taguchi's approaches to quality improvement?

6.12 SUMMARY

In this unit, the concepts of Statistical Process Control were introduced. The techniques of on-line and off-line quality control were discussed in detail. The control chart is a very valuable tool for monitoring quality during production. Pre-control charts are useful for determining process capability. Off-line quality control finds its applicability in the design stage. Taguchi's experimental design method helps to study the effects of combinations of factors and to reduce the variability due to noise or uncontrollable factors. Any deviation of a quality characteristic from the target always generates a loss, regardless of whether that characteristic is within specifications. Finally, the philosophies of Deming and Taguchi were compared to highlight the innovations in quality improvement tools.

6.13 KEY WORDS

Assignable Causes : Causes in the production process that can be pinpointed.

Attribute : A quality or characteristic.

ANOVA : Analysis of variance (ANOVA) is used to determine which factors contribute significantly to the total variability.

Central Limit Theorem : Regardless of the distribution of the population, sample means tend to follow normal distribution as the sample size grows.

c-chart : A quality control chart used to control quality in terms of the number of defects in one unit of product.

Control Charts : A graphic presentation of process data over time along with control limits.

Design of Experiments : A series of techniques involving the identification and control of parameters which have a potential effect on the performance and reliability of a product design and/or the output of a process, with the objective of optimizing product design, process design and process operation, and limiting the effect of noise (uncontrollable causes).

Loss Function : The loss function converts deviation from the target into a cost measure.

np-chart : A quality control chart used to control attributes with a constant sample size.

Off-line Quality Control : Quality control techniques used before the actual process of production starts.

On-line Quality Control : Techniques for quality improvement during the course of production.

p-chart : A quality control chart used to control attributes with varying sample sizes.

R-chart : A quality control chart used to track the variation in the process range.

6.14 ANSWERS TO SAQs


SAQ 1

(a) With a sample of 25 students, the distribution of the sample mean GPA can be taken as normal. The probability that the mean GPA is greater than 3.0 is evaluated using

Z = (3.0 − 2.89) / (0.63/√25) = 0.11/0.126 = 0.87

From the statistical table, the area to the left of Z = 0.87 is 0.8078. Hence the required probability is 1 − 0.8078 = 0.1922.

(b) The standard error of the sample means is given by σ_x̄ = σ/√n.

When n = 10, σ_x̄ = 1.024/√10 = 0.324

When n = 50, σ_x̄ = 1.024/√50 = 0.145

When n = 100, σ_x̄ = 1.024/√100 = 0.1024

SAQ 2

(a) S/N = 10 log₁₀ ( ȳ²/s² − 1/n ) = 10 log₁₀ ( 28²/3² − 1/5 ) = 19.39 dB

(b) Answers to this question may be found in the text.