CHAPTER III - Shodhganga (shodhganga.inflibnet.ac.in/bitstream/10603/11278/12/12_chapter 3.p…)
CHAPTER III
EXPERIMENTAL DESIGN AND ANALYSIS
Conducting experiments and drawing correct inferences from experimental
observations are the two most important prerequisites of any scientific study aimed
at product or process development. Clarity of objectives and proper planning of
experiments are crucial for drawing accurate and meaningful conclusions from the
experimental results. Design of experiments is a powerful tool for accomplishing
these objectives. A robust design of experiments adds value and reliability to the
experimental results and offers other advantages, such as cost reduction through
fewer experimental runs and trials, and the determination and reduction of
experimental errors. It is therefore imperative that researchers properly plan and
conduct experiments so as to obtain adequate, relevant data from which maximum
knowledge can be extracted, thereby developing better insight into the process or
subject. The most widely used experimental approaches are briefly discussed below.
(1) Trial-and-Error/One-Factor-at-a-Time Approach:
In this approach a series of experiments is performed arbitrarily, or one factor is
varied at a time, with each experiment giving some understanding of the basic
phenomena and of the effect of the individual parameters. This approach usually
requires a very large number of experiments, is labour-intensive and time-consuming,
may be expensive, at times does not depict the correct behaviour of the process
parameters, and does not reveal the interaction effects of the parameters.
(2) Full Factorial Experiments:
A well planned set of experiments, in which all parameters of interest are varied over
a specified range, is a much better approach to obtain systematic data.
Mathematically speaking, such a complete set of experiments ought to give the
desired results. Usually, however, the number of experiments and resources
(materials and time) required is prohibitively large. The analysis is at times tedious,
so the effects of the various parameters on the observed data may not be readily
apparent. In many cases, particularly those in which some optimization is required,
the method does not directly point to the best settings of the parameters.
(3) TAGUCHI Method:
Genichi Taguchi of the Nippon Telegraph and Telephone Company, Japan, developed
a method based on "ORTHOGONAL ARRAY" experiments which gives much reduced
"variance" for the experiment with "optimum settings" of the control parameters. Thus
the marriage of design of experiments with optimization of control parameters to
obtain the best results is achieved in the Taguchi method. Orthogonal arrays (OA)
provide a set of well-balanced (minimum) experiments, and Taguchi's signal-to-noise
(S/N) ratios, which are logarithmic functions of the desired output, serve as objective
functions for optimization, help in data analysis, and allow prediction of the optimum
results.
The Taguchi design of experiments, together with the statistical techniques applied
for analysis and validation of the results, viz. Monte Carlo simulation, the Analytic
Hierarchy Process, the Technique for Order Preference by Similarity to Ideal Solution,
the utility concept, grey relational analysis and response surface methodology
modelling, is discussed in the following sections.
3.1 TAGUCHI EXPERIMENTAL DESIGN AND ANALYSIS
Taguchi’s comprehensive system of quality engineering is one of the greatest
milestones of the 20th century. His methods focus on the effective application of
engineering strategies rather than advanced statistical techniques. It includes both
upstream and shop-floor quality engineering. Upstream methods efficiently use
small-scale experiments to reduce variability and to arrive at cost-effective, robust
designs for large-scale production and the marketplace. Shop-floor techniques provide
cost-based, real-time methods for monitoring and maintaining quality in production.
The farther upstream a quality method is applied, the greater the leverage it produces
on the improvement, and the more it reduces cost and time. Taguchi's
philosophy is founded on the following three very simple and fundamental concepts
[200,201]:
Quality should be designed into the product and not inspected into it.
Quality is best achieved by minimizing the deviations from the target. The
product or process should be so designed that it is immune to uncontrollable
environmental variables.
The cost of quality should be measured as a function of deviation from the
standard and the losses should be measured system-wide.
Taguchi proposes an “off-line” strategy for quality improvement as an alternative to
an attempt to inspect quality into a product on the production line. He observed that
poor quality cannot be improved by the process of inspection, screening and
salvaging. No amount of inspection can put quality back into the product. Taguchi
recommended a three-stage process: system design, parameter design and
tolerance design [200,201]. In the present work Taguchi’s parameter design
approach is used to study the effect of process parameters on the quality
characteristics.
Taguchi recommended orthogonal arrays (OA) for laying out the design of
experiments. These OAs are generalized Graeco-Latin squares. Designing an
experiment requires selection of the most suitable OA and assignment of parameters
and interactions of interest to the appropriate columns. The use of linear graphs and
triangular tables suggested by Taguchi makes the assignment of parameters simple
[201].
In the Taguchi method the results of the experiments are analyzed to achieve one or
more of the following objectives [200]:
To establish the best or the optimum condition for a product or process
To estimate the contribution of individual parameters and interactions
To estimate the response under the optimum condition
The optimum conditions are identified by studying the main effects of each of the
parameters. The main effects indicate the general trend of influence of each
parameter. The knowledge of contribution of individual parameter plays a key role in
deciding the nature of control to be established on a production process. The
analysis of variance (ANOVA) is the most commonly used statistical treatment
applied to the results obtained from the experiments in determining the significance
and percent contribution of each parameter against a stated level of confidence.
Study of ANOVA table for a given analysis helps to determine which of the
parameters need control [200].
Taguchi suggested two different routes to carry out the complete analysis, as
reported by Roy [201]. In the first approach, the results of a single run or the average
of the repetitive runs are processed through main-effect and ANOVA analyses (raw
data analysis). The second approach, used for multiple runs, employs the
signal-to-noise (S/N) ratio. The S/N ratio is a concurrent quality metric linked to the loss function as
suggested by Barker [202]. By maximizing the S/N ratio, the loss associated with a
product or process can be minimized. The S/N ratio determines the most robust set
of operating conditions from variation within the results. The S/N ratio is treated as a
response parameter (a transform of the raw data). Taguchi recommended the use of
an outer OA to force the noise variation into the experiment, i.e., the noise is
intentionally introduced into the experiment, as reported by Ross [200]. Generally, processes are
subjected to many noise factors that in combination strongly influence the variation
of the response. Most often, it is sufficient to generate repetitions at each
experimental condition of the controllable parameters and analyze them using an
appropriate S/N ratio, as reported by Byrne and Taguchi [203].
3.1.1 TAGUCHI LOSS FUNCTION
The heart of the Taguchi method is his definition of the nebulous and elusive term
"quality" as the characteristic that avoids loss to the society from the time the
product is shipped, Barker [204]. Loss is measured in monetary units and is
related to quantifiable product characteristics. Financial loss is linked to the
functional specifications through a quadratic relationship that comes from a Taylor
series expansion, as reported by Roy [201]:
L = k (y − m)²   (3.1)
where
L = Loss in monetary units
m = value at which the characteristic should be set
y = actual value of the characteristic
k = constant depending on the magnitude of the characteristic and the monetary
unit involved
The following two observations can be made from figure 3.1 which represents the
difference between the traditional and the Taguchi loss function concept.
The farther the product’s characteristic from the target value, the greater is
the loss. The loss must be zero when the quality characteristic of a product
meets its target value.
The loss is a continuous function and not a sudden step as in the case of
traditional approach. This consequence of the continuous loss function
illustrates the point that merely making a product within the specification
limits does not necessarily mean that product is of good quality.
In a mass production process, the average loss per unit can be expressed by eqn.3.2
as given by Roy[239]:
L = k [(y1 − m)² + (y2 − m)² + … + (yn − m)²] / n   (3.2)
where
y1, y2,….…,yn = Actual value of the characteristic for unit 1, 2,…n respectively
n = Number of units in a given sample
k = Constant depending on the magnitude of the characteristic and the monetary
unit involved
m = Target value at which the characteristic should be set
Eq. 3.2 can also be expressed as:

L = k (MSDNB)   (3.3)

where MSDNB = mean squared deviation, i.e., the average of the squares of all
deviations from the target or nominal value
NB = “Nominal is Best”
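As a minimal sketch, the single-unit and average loss of Eqs. 3.1 to 3.3 can be computed in a few lines of Python; the target m, the constant k and the sample values below are invented for illustration.

```python
# Hypothetical illustration of the Taguchi quadratic loss (Eqs. 3.1 to 3.3).
# The target m, cost constant k and sample values are invented.

def loss(y, m, k):
    """Loss in monetary units for a single unit: L = k*(y - m)**2 (Eq. 3.1)."""
    return k * (y - m) ** 2

def average_loss(ys, m, k):
    """Average loss per unit for a sample: L = k * MSD_NB (Eqs. 3.2, 3.3)."""
    msd_nb = sum((y - m) ** 2 for y in ys) / len(ys)  # mean squared deviation
    return k * msd_nb

m, k = 10.0, 2.0                    # target value and cost constant (invented)
sample = [9.8, 10.1, 10.4, 9.7]    # measured characteristic of four units
print(loss(10.4, m, k))            # loss for the unit farthest from target
print(average_loss(sample, m, k))  # average loss per unit
```

Note that a unit exactly on target (y = m) incurs zero loss, and the loss grows continuously with the deviation, in contrast to the step of the traditional approach.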
Figure 3.1(a, b): The Taguchi Loss-Function and The Traditional Approach [205]
Figure 3.2(a, b): The Taguchi Loss-Function for LB and HB Characteristics, Barker[202]
The loss function can also be applied to product characteristics other than the
situation where the nominal value is the best value (m). The loss function for a
"smaller is better" type of product characteristic (LB) is shown in Figure 3.2a. It is
identical to the "nominal is best" type of situation when m = 0, which is the best
value for a "smaller is better" characteristic (no negative values). The loss function
for a "larger is better" type of product characteristic (HB) is shown in Figure 3.2b,
where also m = 0.
3.1.2 SIGNAL TO NOISE RATIO
Taguchi transformed the loss function into a concurrent statistic called S/N ratio,
which combines both the mean level of the quality characteristic and variance
around this mean into a single metric [202,205]. The S/N ratio consolidates several
repetitions (at least two data points are required) into one value. A high value of S/N
ratio indicates optimum value of quality with minimum variation.
The equations for calculating the S/N ratios for "smaller is better" (LB), "larger is
better" (HB), and "nominal is best" (NB) types of characteristics are as follows [203]:
1. Higher the Better:

S/N (HB) = −10 log10 [(1/n) Σj (1/yj²)] = −10 log10 (MSDHB)   (3.4)

2. Lower the Better:

S/N (LB) = −10 log10 [(1/n) Σj yj²] = −10 log10 (MSDLB)   (3.5)
3. Nominal the Best:

S/N (NB) = −10 log10 [(1/n) Σj (yj − m)²] = −10 log10 (MSDNB)   (3.6)

where n is the number of repetitions at a trial condition and yj is the j-th observed value.
The mean squared deviation (MSD) is a statistical quantity that reflects the deviation
from the target value. The expressions for MSD differ for the different quality
characteristics. For the "nominal is best" type of characteristic, the standard
definition of MSD has been used. For the other two characteristics the definition is
slightly modified. For the "lower the better" type, the target value is zero. For the
"higher the better" type, the inverse of each large value becomes a small value and
again the target value is zero. Thus for all three expressions, the smallest magnitude
of MSD is sought. The constant 10 is used to magnify the S/N number for easy
analysis, and the negative sign sets the S/N ratio of "higher the better" relative to the
squared deviation of "lower the better".
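As a sketch, the three S/N ratios of Eqs. 3.4 to 3.6 can be computed directly from the MSD of repeated observations; the repetition data below are invented for illustration.

```python
import math

def sn_higher_is_better(ys):
    """Eq. 3.4: MSD uses 1/y**2, since the implicit target is infinity."""
    msd = sum(1.0 / y ** 2 for y in ys) / len(ys)
    return -10.0 * math.log10(msd)

def sn_lower_is_better(ys):
    """Eq. 3.5: MSD uses y**2, since the target is zero."""
    msd = sum(y ** 2 for y in ys) / len(ys)
    return -10.0 * math.log10(msd)

def sn_nominal_is_best(ys, m):
    """Eq. 3.6: standard MSD, the squared deviation from the target m."""
    msd = sum((y - m) ** 2 for y in ys) / len(ys)
    return -10.0 * math.log10(msd)

reps = [2.1, 1.9, 2.0]   # invented repetitions at one trial condition
print(sn_higher_is_better(reps))
print(sn_lower_is_better(reps))
print(sn_nominal_is_best(reps, m=2.0))
```

In each case a smaller MSD gives a larger S/N ratio, so the optimum levels are those that maximize the S/N ratio regardless of the type of characteristic.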
3.1.3 TAGUCHI PROCEDURE FOR EXPERIMENTAL DESIGN AND ANALYSIS
The stepwise procedure for Taguchi experimental design and analysis, as illustrated
in the flow diagram shown in Figure 3.3, has been described in the following
sections.
3.1.3.1 SELECTION OF ORTHOGONAL ARRAY (OA)
In selecting an appropriate OA, the following prerequisites, as reported by Ross [200]
and Roy [201], are to be taken care of:
Selection of process parameters and/or interactions to be evaluated
Selection of number of levels for the selected parameters
The determination of the parameters to be investigated hinges upon the product or
process performance characteristics or responses of interest. Several methods are
Fig. 3.3 Taguchi Experimental Design and Analysis Flow Diagram
suggested by Taguchi for determining which parameters to include in an experiment.
These include [200]:
a) Brainstorming
b) Flow charting
c) Cause-Effect diagrams
The total Degrees of Freedom (DOF) of an experiment is a direct function of total
number of trials. If the number of levels of a parameter increases, the DOF of the
parameter also increases because the DOF of a parameter is the number of levels
minus one. Thus, increasing the number of levels for a parameter increases the total
degrees of freedom in the experiment which in turn increases the total number of
trials. Thus, two levels for each parameter are recommended to minimize the size of
the experiment. However, if curved or higher order polynomial relationship between
the parameters under study and the response is expected, at least three levels for
each parameter should be considered [202]. The standard two level and three level
arrays reported by Taguchi and Wu[206] are:
Two Level Arrays: L4, L8, L12, L16, L32
Three Level Arrays: L9, L18, L27
The number as subscript in the array designation indicates the number of trials in
that array. The total degrees of freedom (DOF) available in an OA are equal to the
number of trials minus one:

fLN = N − 1   (3.7)
where
fLN = Total degrees of freedom of an OA
LN = OA designation
N = Number of trials
When a particular OA is selected for an experiment, the following inequality must be
satisfied, as reported by Ross [200]:

fLN ≥ total degrees of freedom required for the parameters and interactions
Depending on the number of levels of the parameters and total DOF required for the
experiment, a suitable OA is selected.
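The selection rule above can be sketched as code. The standard-array lists follow the text, but the example parameter set and the simplifying assumption that all parameters share one number of levels are invented for illustration; a real selection must also check column availability using the linear graphs.

```python
# Minimal sketch of OA selection by total degrees of freedom (Eq. 3.7 and
# the inequality above). Example parameters are invented.
TWO_LEVEL_OAS = {"L4": 4, "L8": 8, "L12": 12, "L16": 16, "L32": 32}
THREE_LEVEL_OAS = {"L9": 9, "L18": 18, "L27": 27}

def required_dof(levels, interactions=()):
    """DOF of a parameter = levels - 1; a two-factor interaction A*B
    requires (levels_A - 1) * (levels_B - 1) additional DOF."""
    dof = sum(l - 1 for l in levels.values())
    for a, b in interactions:
        dof += (levels[a] - 1) * (levels[b] - 1)
    return dof

def select_oa(levels, interactions=()):
    """Smallest standard OA satisfying f_LN = N - 1 >= required DOF."""
    need = required_dof(levels, interactions)
    arrays = TWO_LEVEL_OAS if set(levels.values()) == {2} else THREE_LEVEL_OAS
    for name, n in sorted(arrays.items(), key=lambda kv: kv[1]):
        if n - 1 >= need:
            return name
    raise ValueError("no standard OA is large enough")

# Four parameters at three levels, no interactions: 4 x (3 - 1) = 8 DOF,
# so the L9 array (8 DOF) is the smallest that satisfies the inequality.
print(select_oa({"A": 3, "B": 3, "C": 3, "D": 3}))
```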
3.1.3.2 ASSIGNMENT OF PARAMETERS AND INTERACTIONS TO OA
An OA has several columns to which various parameters and their interactions are
assigned. Taguchi has provided two tools to aid in the assignment of parameters and
their interactions in the columns of OA, viz. linear graphs and triangular tables.
Each OA has a particular set of linear graphs and a triangular table associated with it.
The linear graphs indicate the various columns to which parameters may be assigned
and the columns that subsequently evaluate the interactions of these parameters. The
triangular tables contain all the possible interactions between parameters (columns).
Using the linear graphs and /or the triangular table of the selected OA, the
parameters and interactions are assigned to the columns of the OA. The linear graph
of L27 OA is given in Figure C.1 (Appendix C).
3.1.3.3 SELECTION OF OUTER ARRAY
Taguchi separates factors (parameters) into two main groups: controllable factors
and uncontrollable factors (noise factors). Controllable factors are factors that can
easily be controlled. Noise factors, on the other hand, are nuisance variables that are
difficult, impossible, or expensive to control. The noise factors are responsible for
the performance variation of a process. Taguchi recommends the use of outer array
for the noise factors and inner arrays for controllable factors. If an outer array is
used, the noise variation is forced into the experiment. Alternatively, the experiments
against the trial conditions of the inner array (the OA used for the controllable
factors) may simply be repeated, in which case the noise variation is not forced into
the experiment [203]. The outer array, if used, has the same assignment
considerations. However, the outer array need not be as complex as the inner array,
because it contains only noise factors, which are controlled only during the experiment
[200]. An example of an inner and outer array combination is shown in Table C.1
(Appendix C).
3.1.3.4 EXPERIMENTATION AND DATA COLLECTION
The experiment is performed against each of the trial conditions of the inner array.
Each experiment at a trial condition is simply repeated (if an outer array is not used)
or repeated according to the outer array (if one is used). Randomization should be
carried out to reduce bias in the experiment. The data (raw data) are recorded against each trial condition
and S/N ratios of the repeated data points are calculated and recorded against each
trial condition.
3.1.3.5 ANALYZING EXPERIMENTAL DATA
A number of methods have been suggested by Taguchi for analyzing the data viz.
observation method, ranking method, column effect method, ANOVA, S/N ANOVA,
plot of average response curves, interaction graphs etc. as reported by Ross[200].
However, in the present investigation the following methods have been used:
Plot of average response curves
Plot of S/N response graphs
ANOVA for S/N data
The plot of average responses at each level of a parameter indicates the trend. It is a
pictorial representation of the effect of parameter on the response. The change in
the response characteristic with the change in levels of a parameter can easily be
visualized from these curves. Typically, ANOVA for OAs is conducted in the same
manner as for other structured experiments.
The S/N ratio is treated as a response of the experiment, which is a measure of the
variation within a trial when noise factors are present. A standard ANOVA can be
conducted on S/N ratio which will identify the significant parameters (mean and
variation)[207].
3.1.3.6 PARAMETER CLASSIFICATION AND SELECTION OF OPTIMAL LEVELS
The average response curves and the ANOVA of the S/N ratio identify the control
factors that affect the average response and the variation in the response, respectively.
The control factors are classified into four groups, Ross[200]:
Group I: Parameters which affect both average and variation
Group II: Parameters which affect variation only
Group III: Parameters which affect average only
Group IV: Parameters which affect nothing
The parameter design strategy is to select the suitable levels of group I and group II
parameters to reduce variation and group III parameters to adjust the average values
to the target value. The group IV parameters may be set at the most economical
levels.
3.1.3.7 PREDICTION OF MEANS
After determination of the optimum condition, the mean of the response (μ) at the
optimum condition is predicted. The mean is estimated only from the significant
parameters as identified by ANOVA. Suppose parameters A and B are significant and
A2B2 (second level of A = A2, second level of B = B2) is the optimal treatment
condition. Then the mean at the optimal condition (the optimal value of the response
characteristic) is estimated as, Ross [205]:
μ(A2B2) = T̄ + (Ā2 − T̄) + (B̄2 − T̄)   (3.8)

where T̄ is the overall mean of the response and Ā2, B̄2 are the average responses of
all trials at the second levels of A and B respectively.
It may so happen that the predicted combination of parameter levels (optimal
treatment condition) is identical to one of those in the experiment. If this situation
exists, then the most direct way to estimate the mean for that treatment condition is
to average out all the results for the trials which are set at those particular
levels[205].
3.1.3.8 DETERMINATION OF CONFIDENCE INTERVAL
The estimate of the mean (μ) is only a point estimate based on the average of results
obtained from the experiment. Statistically this provides a 50% chance of the true
average being greater than μ. It is therefore customary to represent the values of a
statistical parameter as a range within which it is likely to fall, for a given level of
confidence as reported by Ross[200]. This range is termed as the confidence interval
(CI). In other words, the confidence interval gives the maximum and minimum values
between which the true average should fall at some stated level of confidence [200].
The following two types of confidence interval are suggested by Taguchi for
estimated mean of the optimal treatment condition.
1. Around the estimated average of a treatment condition predicted from the
experiment. This type of confidence interval is designated as CIPOP (confidence
interval for the population).
2. Around the estimated average of a treatment condition used in a confirmation
experiment to verify predictions. This type of confidence interval is designated as
CICE (confidence interval for a sample group).
The difference between CIPOP and CICE is that CIPOP is for the entire population i.e., all
parts ever made under the specified conditions, and CICE is for only a sample group
made under the specified conditions. Because of the smaller sample size of the
confirmation experiments relative to the entire population, CICE must be slightly
wider. Ross [200] and Roy [208] have given the following expressions for computing
the confidence intervals:
CIPOP = √[ Fα(1, fe) Ve / neff ]   (3.9)

CICE = √[ Fα(1, fe) Ve (1/neff + 1/R) ]   (3.10)

where Fα(1, fe) = the F ratio at a confidence level of (1 − α) for DOF 1 and error DOF
fe; Ve = error variance; neff = effective number of replications; and R = sample size of
the confirmation experiment.
In Eq. 3.10, as R approaches infinity, i.e., the entire population, the value 1/R
approaches zero and CICE = CIPOP. As R approaches 1, the CICE becomes wider.
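A short sketch of Eqs. 3.9 and 3.10; every quantity below (error DOF, error variance, effective replications, confirmation sample size) is invented, and the F value is taken from standard F tables rather than computed.

```python
# Illustration of the two confidence intervals (Eqs. 3.9 and 3.10).
# All ANOVA quantities are invented for the example.
F_ratio = 5.32   # F(0.05; 1, 8): 95% confidence, 1 and 8 DOF, from F tables
V_e     = 0.12   # error variance (mean square of the pooled error)
n_eff   = 4.5    # effective number of replications
R       = 3      # number of confirmation-run repetitions

ci_pop = (F_ratio * V_e / n_eff) ** 0.5                # Eq. 3.9
ci_ce  = (F_ratio * V_e * (1 / n_eff + 1 / R)) ** 0.5  # Eq. 3.10
print(round(ci_pop, 3), round(ci_ce, 3))
```

As the text notes, letting R grow makes the 1/R term vanish, so ci_ce shrinks toward ci_pop; for a small confirmation sample, ci_ce is the wider interval.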
3.1.3.9 CONFIRMATION EXPERIMENTS
The confirmation experiment is a crucial step and is highly recommended to verify
the experimental conclusion. The suggested optimum levels are set for significant
parameters while the economic levels are selected for the insignificant parameters
and a selected number of test runs are conducted. The average values of the
responses obtained from confirmation experiments are compared with the predicted
values. The average values of the response characteristics obtained through the
confirmation experiments should lie within the 95% confidence interval, CICE.
However, these may or may not lie within 95% confidence interval, CIPOP [205].
3.2 MONTE CARLO SIMULATION BASICS
The Monte Carlo method is a technique that uses random numbers and probability to
solve problems. The term "Monte Carlo method" was coined by S. Ulam and Nicholas
Metropolis [209] in reference to games of chance, a popular attraction in Monte
Carlo, Monaco, as reported by Hoffman [210].
Computer simulation uses computer models to imitate real life or make predictions.
A model created with a spreadsheet such as Excel has a certain number of input
parameters and a few equations that use those inputs to give a set of outputs (or
response variables) [211]. This type of model is usually deterministic, meaning that
the same results are obtained no matter how many times it is re-calculated. Fig. 3.4
shows a deterministic model mapping a set of input variables to a set of output
variables.
Figure 3.4 A parametric deterministic model.
Monte Carlo simulation is a method for iteratively evaluating a deterministic model
using sets of random numbers as inputs. Weisstein [212] suggested employing this
method when the model is complex, nonlinear, or involves more than just a couple
of uncertain parameters. A simulation can typically involve more than 10,000
evaluations of the model, a task which in the past was practical only on supercomputers.
Coddington[213] elucidates that by using random inputs, we are essentially turning
the deterministic model into a stochastic model. The Monte Carlo method is just one
of many methods for analyzing uncertainty propagation, where the goal is to
determine how random variation, lack of knowledge, or error affects the sensitivity,
performance, or reliability of the system that is being modeled. Monte Carlo
simulation is categorized as a sampling method because the inputs are randomly
generated from probability distributions to simulate the process of sampling from an
actual population. So, we try to choose a distribution for the inputs that most closely
matches data we already have, or best represents our current state of knowledge.
The data generated from the simulation can be represented as probability
distributions (or histograms) or converted to error bars, reliability predictions,
tolerance zones, and confidence intervals. Wittwer [214] demonstrated the basic
principle of stochastic uncertainty propagation behind Monte Carlo simulation as
shown in Fig. 3.5.
The steps in a Monte Carlo simulation corresponding to the uncertainty propagation
shown in Figure 3.5 are fairly simple, and can easily be implemented in Excel for
simple models. All we need to do is follow the five steps listed below:
Step 1: Create a parametric model, y = f(x1, x2, ..., xq).
Step 2: Generate a set of random inputs, xi1, xi2, ..., xiq.
Step 3: Evaluate the model and store the results as yi.
Step 4: Repeat steps 2 and 3 for i = 1 to n.
Step 5: Analyze the results using histograms, summary statistics, confidence
intervals, etc.
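The five steps can be sketched in plain Python; the model y = x1² + 3·x2 and the normal distributions of the two inputs are invented for illustration.

```python
import random
import statistics

random.seed(42)                  # fixed seed so the run is reproducible

def model(x1, x2):               # Step 1: parametric deterministic model
    return x1 ** 2 + 3.0 * x2

ys = []
for _ in range(10_000):          # Step 4: repeat steps 2 and 3 n times
    x1 = random.gauss(5.0, 0.1)  # Step 2: random inputs drawn from
    x2 = random.gauss(2.0, 0.2)  #         assumed probability distributions
    ys.append(model(x1, x2))     # Step 3: evaluate the model, store y_i

# Step 5: summary statistics of the propagated output distribution
print(round(statistics.mean(ys), 2), round(statistics.stdev(ys), 2))
```

The spread of ys is the propagated uncertainty; a histogram of ys would give the output distribution shown schematically in Fig. 3.5.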
Figure 3.5 Schematic showing the basic principle of stochastic uncertainty
propagation
3.3 ANALYTICAL HIERARCHY PROCESS
The Analytic Hierarchy Process (AHP) is a structured technique for dealing
with complex decisions. Rather than prescribing a "correct" decision, the AHP helps
decision makers find one that best suits their goal and their understanding of the
problem—it is a process of organizing decisions that people are already dealing with,
but trying to do in their heads.
Based on mathematics and psychology, the AHP was developed by Thomas L.
Saaty[215] in the 1970s and has been extensively studied and refined since then. It
provides a comprehensive and rational framework for structuring a decision
problem, for representing and quantifying its elements, for relating those elements
to overall goals, and for evaluating alternative solutions.
Users of the AHP first decompose their decision problem into a hierarchy of more
easily comprehended sub-problems, each of which can be analysed independently.
The elements of the hierarchy can relate to any aspect of the decision problem—
tangible or intangible, carefully measured or roughly estimated, well- or poorly-
understood—anything at all that applies to the decision at hand.
Once the hierarchy is built, the decision makers systematically evaluate its various
elements by comparing them to one another two at a time, with respect to their
impact on an element above them in the hierarchy. In making the comparisons, the
decision makers can use concrete data about the elements, or they can use their
judgments about the elements' relative meaning and importance. It is the essence of
the AHP that human judgments, and not just the underlying information, can be
used in performing the evaluations[216].
The AHP converts these evaluations to numerical values that can be processed and
compared over the entire range of the problem. A numerical weight or priority is
derived for each element of the hierarchy, allowing diverse and often
incommensurable elements to be compared to one another in a rational and
consistent way. This capability distinguishes the AHP from other decision making
techniques.
In the final step of the process, numerical priorities are calculated for each of the
decision alternatives. These numbers represent the alternatives' relative ability to
achieve the decision goal, so they allow a straightforward consideration of the
various courses of action.
3.3.1 PROCEDURE FOR USING THE AHP
1. Model the problem as a hierarchy containing the decision goal, the
alternatives for reaching it, and the criteria for evaluating the alternatives.
2. Establish priorities among the elements of the hierarchy by making a series of
judgments based on pairwise comparisons of the elements. For example,
when comparing potential real-estate purchases, the investors might say
they prefer location over price and price over timing.
3. Synthesize these judgments to yield a set of overall priorities for the
hierarchy. This would combine the investors' judgments about location, price
and timing for properties A, B, C, and D into overall priorities for each
property.
4. Check the consistency of the judgments.
5. Come to a final decision based on the results of this process.
These steps are more fully described below.
3.3.2 MODEL THE PROBLEM AS A HIERARCHY
The first step in the Analytic Hierarchy Process is to model the problem as
a hierarchy. In doing this, participants explore the aspects of the problem at levels
from general to detailed, then express it in the multileveled way that the AHP
requires. As they work to build the hierarchy, they increase their understanding of
the problem, of its context, and of each other's thoughts and feelings about both
[217].
An AHP hierarchy is a structured means of modeling the decision at hand. It consists
of an overall goal, a group of options or alternatives for reaching the goal, and a
group of factors or criteria that relate the alternatives to the goal. The criteria can be
further broken down into subcriteria, sub-subcriteria, and so on, in as many levels as
the problem requires. A criterion may not apply uniformly but may have graded
intensities: a little sweetness is enjoyable, for example, but too much can be
harmful. In that case the criterion is divided into subcriteria indicating different
intensities of the criterion, such as little, medium and high, and these intensities are
prioritized through comparisons under the parent criterion, sweetness.
The design of any AHP hierarchy will depend not only on the nature of the problem
at hand, but also on the knowledge, judgments, values, opinions, needs, wants, etc.
of the participants in the decision making process. Constructing a hierarchy typically
involves significant discussion, research, and discovery by those involved. Even after
its initial construction, it can be changed to accommodate newly-thought-of criteria
or criteria not originally considered to be important; alternatives can also be added,
deleted, or changed[218].
To better understand AHP hierarchies, consider a decision problem with a goal to be
reached, three alternative ways of reaching the goal, and four criteria against which
the alternatives need to be measured.
Such a hierarchy can be visualized as a diagram as shown in fig.3.6 below, with the
goal at the top, the three alternatives at the bottom, and the four criteria in
between. There are useful terms for describing the parts of such diagrams: Each box
is called a node. A node that is connected to one or more nodes in a level below it is
called a parent node. The nodes to which it is so connected are called its children.
Fig. 3.6 A simple AHP hierarchy.
3.3.3 EVALUATE THE HIERARCHY
Once the hierarchy has been constructed, we analyze it through a series of pairwise
comparisons that derive numerical scales of measurement for the nodes. The criteria
are pairwise compared against the goal for importance. The alternatives are pairwise
compared against each of the criteria for preference. The comparisons are processed
mathematically, and priorities are derived for each node.
3.3.4 ESTABLISH PRIORITIES
Priorities are numbers associated with the nodes of an AHP hierarchy. They
represent the relative weights of the nodes in any group.
Like probabilities, priorities are absolute numbers between zero and one, without
units or dimensions. Depending on the problem at hand, "weight" can refer to
importance, or preference, or likelihood, or whatever factor is being considered by
the decision makers.
Priorities are distributed over a hierarchy according to its architecture, and their
values depend on the information entered by users of the process. Priorities of the
Goal, the Criteria, and the Alternatives are intimately related, but need to be
considered separately.
By definition, the priority of the Goal is 1.000. The priorities of the Alternatives
always add up to 1.000. Things can become complicated with multiple levels of
Criteria, but if there is only one level, their priorities also add to 1.000. All this is
illustrated by the priorities in the Fig. 3.7 below.
Fig.3.7 Simple AHP hierarchy with associated default priorities.
The priorities shown are those that exist before any information has been entered
about weights of the criteria or alternatives, so the priorities within each level are all
equal. They are called the hierarchy’s default priorities. If a fifth Criterion were
added to this hierarchy, the default priority for each Criterion would be .200. If there
were only two Alternatives, each would have a default priority of .500.
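The default-priority rule described above can be sketched in a few lines of Python; the criterion and alternative names below are hypothetical illustrations:

```python
# Hypothetical hierarchy: one goal, four criteria, three alternatives
criteria = ["cost", "safety", "style", "capacity"]
alternatives = ["A", "B", "C"]

# Before any judgments are entered, priorities within a level are equal
default_criteria = {c: 1.0 / len(criteria) for c in criteria}
default_alternatives = {a: 1.0 / len(alternatives) for a in alternatives}

print(default_criteria)      # each criterion starts at 0.250
print(default_alternatives)  # each alternative starts at about 0.333
```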
Two additional concepts apply when a hierarchy has more than one level of
criteria: local priorities and global priorities. Consider the hierarchy shown below in
fig.3.8, which has several Sub-criteria under each Criterion.
Fig.3.8 A more complex AHP hierarchy, with local and global default priorities.
3.3.5 MATHEMATICAL EXPRESSIONS USED FOR THE AHP PROCEDURE
Once our objectives are clearly defined, we construct a pair-wise comparison matrix
using Saaty’s scale of relative importance, comparing factor i with factor j. This
yields a square matrix A1 (5×5), where rij denotes the comparative importance of
factor i with respect to factor j. In the matrix, rij = 1 where i = j, and rji = 1/rij.
Based on our experience and the reviewed literature, we allocate intensities to the
compared factors and obtain the square comparison matrix A1 (5×5).
Table 3.2 Saaty’s intensities of importance [219]

Intensity of importance   Definition                       Explanation
1                         Equal importance                 Two activities contribute equally to the objective
3                         Weak importance                  The judgment favors one activity over another, but it is not conclusive
5                         Essential or strong importance   The judgment is strongly in favor of one activity over another
7                         Demonstrated importance          The judgment conclusively favors one activity over another
9                         Absolute importance              The judgment in favor of one activity over another is of the highest possible order of affirmation
2, 4, 6, 8                Intermediate values between the two adjacent judgments   When compromise is needed
We now calculate the geometric mean of the ith row and normalize the geometric
means of the rows of the comparison matrix to obtain the normalized weight (Wi) of
each factor, using eqns. 3.11 and 3.12. The normalized weights are expressed in the
form of a 5×1 matrix, A2.
GMi = [ ∏(j=1 to N) aij ]^(1/N)    (3.11)
Wi = GMi / ∑(i=1 to N) GMi    (3.12)
Matrix A3 (5×1) is calculated as A3 = A1 × A2. λmax is then worked out as the
average of the elements of matrix A4 (5×1), where A4 = A3 / A2 (element-wise
division); it can be expressed by eqn. 3.13 as
λmax = (1/N) ∑(i=1 to N) (A3i / Wi)    (3.13)
The closer the value of λmax is to the number of attributes N, the more consistent
the result. The deviation from consistency is represented by the Consistency Index
(CI), obtained from eqn. 3.14.
CI = (λmax − N) / (N − 1)    (3.14)
The Random Index (RI), or correction for random error, takes the values given by
Saaty for different numbers of attributes (N), as shown in table 3.3.
Table 3.3 RI Values of different values of n
N 1 2 3 4 5 6 7 8 9
RI 0 0 0.58 0.90 1.12 1.24 1.32 1.41 1.45
Finally we calculate the Consistency Ratio (CR), which is the ratio of the Consistency
Index to the Random Index:
CR = CI/RI
A CR value of less than 0.1 indicates minimal deviation from consistency, which
validates our choice of comparison matrix and confirms good consistency in the
relative importance values assigned to the process parameters.
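As an illustration, eqns. 3.11–3.14 can be sketched in Python with NumPy. The 3×3 pairwise comparison matrix below is hypothetical (smaller than the 5×5 matrix used in this work, for brevity):

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale;
# entry A[i, j] is the importance of factor i relative to factor j,
# with A[i, i] = 1 and A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])
N = A.shape[0]

gm = np.prod(A, axis=1) ** (1.0 / N)   # eqn 3.11: row geometric means
W = gm / gm.sum()                      # eqn 3.12: normalized weights

lam_max = np.mean((A @ W) / W)         # eqn 3.13: average of (A1 x A2) / A2

CI = (lam_max - N) / (N - 1)           # eqn 3.14: Consistency Index
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[N]  # Saaty's RI (table 3.3)
CR = CI / RI                           # Consistency Ratio

print("weights:", W.round(4), "CR:", round(CR, 4))
```

For this matrix the weights come out near [0.64, 0.26, 0.10] and CR is about 0.03, below the 0.1 threshold, so the judgments would be accepted as consistent.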
3.4 TECHNIQUE OF ORDER PREFERENCE BY SIMILARITY TO IDEAL SOLUTION
(TOPSIS)
Multi-attribute decision-making (MADM) techniques are employed to help decision-
makers to identify the best alternative from a finite set. MADM techniques have
been successfully applied in the selection of work materials [220, 221], rapid
prototyping processes [222], thermal power plants [223], industrial robots [224],
evaluation of projects [225], mobile phones [226], product design [227], flexible
manufacturing systems[228], performance measurement models for manufacturing
organizations [229], plant layout design [230], and so on. Hwang and Yoon [231]
developed TOPSIS to assess alternatives by simultaneously considering each
alternative's distance to the ideal solution and to the negative-ideal solution, and
selecting as the best alternative the one closest to the ideal solution.
The selection of an alternative from amongst a list of possible alternatives on the
basis of several attributes is clearly a multiple attribute decision making problem for
which the Technique of Order Preference by Similarity to Ideal Solution
(TOPSIS) provides a simple yet powerful decision-making tool. The TOPSIS method is
based on the concept that the chosen alternative should not only have the shortest
Euclidean distance from the ideal solution but also have the farthest Euclidean
distance from the negative ideal solution. TOPSIS thus provides a solution that is not
only closest to the hypothetical best, but is also the farthest from the hypothetically
worst. Combined multi-attribute decision-making is aimed at integrating different
measures into a single global index which facilitates ranking alternatives on the basis
of their suitability.
If each attribute has a monotone increasing (or decreasing) function, the ideal
solution, which is composed of the best attribute values, and the negative ideal
solution, which is composed of the worst, are calculated. From the viewpoint of
geometry, an alternative with the shortest Euclidean distance from the ideal solution
is chosen, i.e. the best alternative is the nearest one to the ideal solution and the
farthest one from the negative ideal solution [232, 233]. The AHP can efficiently deal
with tangible and non-tangible attributes in the light of subjective judgements of
different individuals in the process of decision-making [234]. However, in some
cases, an unmanageable number of pair-wise comparisons of attributes and
alternatives with respect to each of the attributes may result. TOPSIS is more
efficient in dealing with the tangible attributes and the number of alternatives to be
assessed. However, the TOPSIS method needs a powerful procedure to determine
the relative importance of different attributes with respect to the objective; AHP
provides such a procedure. Hence, to take advantage of both the methods, a
combined MADM (using TOPSIS and AHP) approach is adopted to select the most
suitable alternative from amongst the available alternatives. The procedures for
implementing this combined TOPSIS–AHP method are described below.
Step 1:
Model the problem as a hierarchy containing the decision goal, the alternatives for
reaching it, and the criteria for evaluating the alternatives. The goal represents the
desired outcome of the decision problem, such as selection of the best alternative
from among many feasible alternatives; all the alternatives can also be ranked by
obtaining their priorities. Criteria (attributes) are the quantitative or qualitative data
(judgments) used for evaluating the alternatives.
Fig.3.9 TOPSIS model with “n” criteria and “m” alternatives
Step 2:
Construct a decision matrix such that each row of the matrix is allocated to one
alternative, and each column to one attribute. An element dij of the decision
matrix D thus gives the value of the jth attribute, in original real values and units,
for the ith alternative. If the number of alternatives is M and the number of
attributes is N, the decision matrix is an M×N matrix, represented as follows:
          d11 d12 … d1N
          d21 d22 … d2N
[D]MxN =   .   .  …  .     (3.15)
          dM1 dM2 … dMN
Step 3:
Obtain the normalized decision matrix R, whose elements Rij are given by eqn. 3.16.

Rij = dij / [ ∑(i=1 to M) dij² ]^(1/2)    (3.16)
Step 4:
Determine the relative importance of different attributes with respect to the
objective for assignment of weightage to different attributes for logical decision-
making by application of Analytical Hierarchy Process (AHP), as already described in
the last section 3.3.
Step 5:
The weighted normalized matrix Vij is obtained by the multiplication of each element
of the column of the matrix Rij with its associated weight wj , obtained from AHP
procedure, using Eqn. 3.17.
Vij = wj Rij (3.17)
Step 6:
Obtain the ideal (best) and negative-ideal (worst) solutions in this step. They can be
expressed as:

V+ = { (max_i Vij | j ∈ J), (min_i Vij | j ∈ J′) | i = 1, 2, …, M }
   = { V1+, V2+, V3+, …, VN+ }    (3.18)

V− = { (min_i Vij | j ∈ J), (max_i Vij | j ∈ J′) | i = 1, 2, …, M }
   = { V1−, V2−, V3−, …, VN− }    (3.19)
Where J = { j = 1, 2, …, N | j is associated with the beneficial attributes } and
J′ = { j = 1, 2, …, N | j is associated with the non-beneficial attributes }. Vj+
indicates the ideal (best) value of the jth attribute over the alternatives: for
beneficial attributes (i.e. those whose higher values are desirable for the given
application), Vj+ is the highest value of the attribute, while for non-beneficial
attributes (i.e. those whose lower values are desired), Vj+ is the lowest value. Vj−
indicates the negative-ideal (worst) value of the jth attribute: for beneficial
attributes it is the lowest value of the attribute, and for non-beneficial attributes it
is the highest value.
Step 7:
Obtain the separation measures, which indicate the separation of each alternative
from the ideal solution as given by the Euclidean distance, using eqns. 3.20 and 3.21.

Si+ = [ ∑(j=1 to N) (Vij − Vj+)² ]^0.5 ,  i = 1, 2, …, M    (3.20)
Si− = [ ∑(j=1 to N) (Vij − Vj−)² ]^0.5 ,  i = 1, 2, …, M    (3.21)
Step 8:
The relative closeness of a particular alternative to the ideal solution, expressed as Pi
is calculated using eqn.3.22.
Pi = Si− / (Si+ + Si−)    (3.22)
The preferred feasible solution Pi may also be called the overall or composite
performance score of the alternative. This relative closeness to the ideal solution can
be considered as the “Global Index (GI)”.
Step 9:
The set of alternatives is arranged in descending order of the composite
performance score Pi, the highest Pi value indicating the most preferred solution
and the lowest Pi value the least preferred one.
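The TOPSIS steps above can be sketched end-to-end as follows; the 4×3 decision matrix, the weights and the beneficial/non-beneficial split are hypothetical illustrations, not data from this work:

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives (rows) x 3 attributes (columns)
D = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0,  8.0],
    [300.0, 32.0, 16.0],
    [275.0, 32.0,  8.0],
])
# Assumed AHP-derived weights (sum to 1) and a flag per attribute:
# True = beneficial (higher is better), False = non-beneficial
w = np.array([0.5, 0.3, 0.2])
beneficial = np.array([False, True, True])  # e.g. cost, capacity, speed

# Step 3 (eqn 3.16): vector-normalize each column
R = D / np.sqrt((D ** 2).sum(axis=0))

# Step 5 (eqn 3.17): weighted normalized matrix
V = w * R

# Step 6 (eqns 3.18-3.19): ideal and negative-ideal solutions
v_pos = np.where(beneficial, V.max(axis=0), V.min(axis=0))
v_neg = np.where(beneficial, V.min(axis=0), V.max(axis=0))

# Step 7 (eqns 3.20-3.21): Euclidean separation measures
s_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))
s_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))

# Step 8 (eqn 3.22): relative closeness (composite performance score)
P = s_neg / (s_pos + s_neg)

# Step 9: rank alternatives in descending order of P
ranking = np.argsort(-P)
print(P.round(4), ranking)
```

With these made-up numbers the third alternative ranks first; changing the weights or the beneficial flags shifts the ranking accordingly.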
3.5 MULTI-PERFORMANCE OPTIMIZATION TECHNIQUES
Evaluating any product or service rationally requires the consideration of diverse
quality characteristics. The evaluations of these diverse quality characteristics should
be combined to arrive at a composite index which suitably represents the overall
utility of the product or service. The overall utility of a product measures the
usefulness of that product from the evaluator's perspective, whereas the utility of a
product based on a particular characteristic measures the usefulness of that
characteristic only. The utility concept proposes that the overall utility of a product
is the sum of the utilities of each of its quality characteristics.
3.5.1 UTILITY CONCEPT
Kumar et al. [235] suggested that, in accordance with utility theory, if Xi is the
measure of effectiveness of an attribute (quality characteristic) i and there are n
attributes evaluating the outcome space, then the overall utility function is given by:
U (X1, X2, X3…, Xn) = f (U1 (X1), U2 (X2)… Un (Xn)) (3.23)
where, Ui (Xi) is the utility of the ith attribute.
The overall utility function is the sum of the individual utilities if the attributes are
independent, and is given by:

U(X1, X2, …, Xn) = ∑(i=1 to n) Ui(Xi)    (3.24)

The attributes may be assigned weights depending upon the relative importance or
priorities of the characteristics.
The overall utility function after assigning weights to the attributes can be written as:

U(X1, X2, …, Xn) = ∑(i=1 to n) Wi Ui(Xi)    (3.25)
where Wi is the weight assigned to attribute i and the sum of the weights over all
attributes is equal to 1.
To determine the utility value for a number of quality characteristics, a preference
scale for each quality characteristic is constructed, and these scales are later
weighted to obtain a composite number (the overall utility). The preference scale
may be linear, exponential or logarithmic. The minimum acceptable quality level of
each quality characteristic is set at a preference number of 0 and the best available
quality is assigned a preference number of 9 (the choice of preference numbers for
the minimum and best values of a characteristic is arbitrary). Gupta and
Murthy [236] suggested that if a log scale is chosen, the preference number Pi is
given by eqn. 3.26.
Pi = A log(Xi / Xi′)    (3.26)

where Xi is the value of the quality characteristic or attribute i, Xi′ is the minimum
acceptable value of the quality characteristic or attribute i, and A is a constant.
Arbitrarily, we may choose A such that Pi = 9 at Xi = X*, where X* is the optimum
value of Xi, assuming that such a value exists; this gives A = 9 / log(X*/Xi′).
The next step is to assign weights, or relative importance, to the quality
characteristics. Bosser [237] suggested a number of methods for the assignment of
weights (AHP, conjoint analysis, etc.). The weights should be assigned such that the
following condition holds:

∑(i=1 to n) Wi = 1    (3.27)
The overall utility can be calculated as:

U = ∑(i=1 to n) Wi Pi    (3.28)
PROCEDURE FOR MULTI-CHARACTERISTIC OPTIMIZATION USING TAGUCHI’S
PHILOSOPHY AND UTILITY CONCEPT
1. Find optimal values of the selected quality characteristics separately using
Taguchi’s experimental design and analysis (parameter design).
2. Using the optimal values and the minimum quality levels, construct
preference scales for each quality characteristic using eqn.3.26.
3. Assign weights Wi, i = 1,2,…,n, to various quality characteristics based on
experience and the end use of the product such that the sum of weights is
equal to 1.
4. Find the utility value of each product against each trial condition of the
experiment using eqn. 3.28.
5. Use these values as a response of the trial conditions of the selected
experimental plan.
6. Analyse the results using the procedure suggested by Taguchi [238].
7. Find the optimal settings of the process parameters for optimum utility
(mean and minimum deviation around the mean).
8. Predict the individual characteristic values considering the optimal significant
parameters determined in step 7.
9. Conduct a confirmation experiment at the optimal setting and compare the
predicted optimal values of the quality characteristics with the actual ones.
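Steps 2–4 of the procedure above can be sketched as below; the characteristic values, minimum acceptable values, optima and weights are hypothetical:

```python
import math

def preference_number(x, x_min, x_opt):
    # Eqn 3.26 with the constant A chosen so that P = 0 at the minimum
    # acceptable value x_min and P = 9 at the optimum value x_opt
    A = 9.0 / math.log10(x_opt / x_min)
    return A * math.log10(x / x_min)

# Two hypothetical quality characteristics measured in one trial
weights = [0.6, 0.4]                               # must sum to 1 (eqn 3.27)
prefs = [
    preference_number(0.8, x_min=0.2, x_opt=1.0),  # e.g. improvement ratio
    preference_number(4.0, x_min=1.0, x_opt=8.0),  # e.g. material removal
]

# Eqn 3.28: overall utility as the weighted sum of preference numbers
U = sum(w * p for w, p in zip(weights, prefs))
print(round(U, 3))
```

The utility value computed this way for each trial condition then serves as the single response analysed by Taguchi's procedure.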
3.5.2 GREY RELATIONAL ANALYSIS
Optimization of multiple response characteristics is more complex than optimization
of a single performance characteristic. In recent years, the theories of
grey relational analysis have attracted the interest of researchers. Deng [239]
proposed application of the principles of grey relational analysis as a method of
measuring degree of approximation among sequences according to the grey
relational grade. In the grey relational analysis, the measured values of the
experimental results are first normalized in the range between zero and one, which
is also called grey relational generation. Next, the grey relational coefficients are
calculated from the normalized experimental results to express the relationship
between the desired and the actual experimental results. The next step involves
assignment of weighting factors to each quality characteristic. Then, the grey
relational grades are computed by averaging the grey relational coefficient
corresponding to each performance characteristic. The overall equation of the multi-
performance characteristic is based on the grey relational grade. As a result,
optimization of the complicated multi-performance characteristics can be converted
into optimization of a single grey relational grade. The optimal level of the process
parameters is the level with the highest grey relational grade. Further, a statistical
Student's t-test was performed to identify the statistically significant parameters. In
addition, an empirical model was developed for the grey relational grade. Hence, an
empirical model for the multi-objective optimization is available. This response
surface model for the grey relational grade was further used for the optimization of
the process parameters. Thus, grey relational analysis coupled with RSM has been
employed to identify the optimum parameter settings of the significant factors.
Finally, a confirmation experiment was conducted to confirm the optimum levels of
the process parameters identified by the optimization method.
Based on the above discussion, the use of the grey relational analysis with Taguchi
design of experiment to optimize the process parameters considering multiple
performance characteristics includes the following steps as suggested by Siddiqui et
al.[240] and Lue et al.[241].
1. Normalize the experimental results by data pre-processing which is basically
a means of transferring the original sequence to a comparable sequence.
2. Perform the grey relational generating and calculate the corresponding grey
relational coefficient.
3. Assign or calculate weighting factors for each quality characteristic.
4. Calculate the grey relational grade by averaging the grey relational
coefficient.
5. Plot the average responses at each level of parameter.
6. Select the optimal levels of process parameters.
7. Conduct confirmation experiments.
Data pre-processing
Data pre-processing is normally required since the range and unit of one data
sequence may differ from the others. It is also necessary when the scatter range of a
sequence is too large, or when the directions of the targets in the sequences differ.
Data pre-processing is a means of transferring the original sequence to a
comparable sequence. Depending on the characteristics of a data sequence, various
methodologies of data pre-processing are available for grey relational analysis.
If the target value of the original sequence is infinite, then it has a “higher is better”
characteristic, and the original sequence can be normalized by using eqn. 3.29.

xi*(k) = (xi^o(k) − min xi^o(k)) / (max xi^o(k) − min xi^o(k))    (3.29)
When “lower is better” is the characteristic of the original sequence, the original
sequence should be normalized using eqn. 3.30.

xi*(k) = (max xi^o(k) − xi^o(k)) / (max xi^o(k) − min xi^o(k))    (3.30)
However, if there is a definite target value (desired value) to be achieved, the
original sequence may be normalized using eqn. 3.31.

xi*(k) = 1 − |xi^o(k) − x^o| / (max xi^o(k) − x^o)    (3.31)
Or, the original sequence can simply be normalized by the most basic methodology,
i.e. dividing the values of the original sequence by its first value, using eqn. 3.32.

xi*(k) = xi^o(k) / xi^o(1)    (3.32)
Where i = 1, …, m and k = 1, …, n; m is the number of experimental data items and
n is the number of parameters. xi^o(k) denotes the original sequence, xi*(k) the
sequence after data pre-processing, max xi^o(k) the largest value of xi^o(k),
min xi^o(k) the smallest value of xi^o(k), and x^o the desired value.
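The generation formulas can be sketched as follows; the measured response values are hypothetical, and the target-value branch follows a common nominal-is-best form of eqn. 3.31:

```python
import numpy as np

def grey_normalize(x, criterion="higher", target=None):
    # Grey relational generation: map an original sequence into [0, 1]
    x = np.asarray(x, dtype=float)
    if criterion == "higher":   # eqn 3.29, higher-is-better
        return (x - x.min()) / (x.max() - x.min())
    if criterion == "lower":    # eqn 3.30, lower-is-better
        return (x.max() - x) / (x.max() - x.min())
    # eqn 3.31, definite target (desired) value to be achieved
    return 1.0 - np.abs(x - target) / (x.max() - target)

# Hypothetical responses over four experimental runs
mrr = grey_normalize([2.1, 3.4, 4.0, 2.8], "higher")  # maximize
ra = grey_normalize([0.8, 0.6, 1.1, 0.9], "lower")    # minimize
print(mrr.round(3), ra.round(3))
```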
Grey relational coefficient and grey relational grade
In grey relational analysis, the measure of the relevancy between two systems or
two sequences is defined as the grey relational grade. When only one sequence,
xo(k), is available as the reference sequence and all other sequences serve as
comparison sequences, it is called a local grey relation measurement. After data
pre-processing is carried out, the grey relational coefficient ξi(k) for the kth
performance characteristic in the ith experiment can be expressed as

ξi(k) = (Δmin + ζ Δmax) / (Δoi(k) + ζ Δmax)    (3.33)
Where Δoi(k) = |xo*(k) − xi*(k)| is the deviation between the reference sequence
xo*(k) and the comparability sequence xi*(k); Δmin and Δmax are the smallest and
largest of these deviations over all sequences; and ζ is the distinguishing or
identification coefficient, which is defined in the range 0 ≤ ζ ≤ 1.
A weighting method is used to integrate the grey relational coefficients of each
experimental run into the grey relational grade, which is a weighted sum of the grey
relational coefficients; we have presumed equal weightage for both performance
characteristics. Deng [239] reported that it is usual to take the average value of the
grey relational coefficients as the grey relational grade, calculated using eqn. 3.34.
The overall evaluation of the multiple performance characteristics is based on the
grey relational grade.
γi = (1/n) ∑(k=1 to n) ξi(k)    (3.34)
However, in a real engineering system the importance of the various factors to the
system varies. For the real condition of unequal weights being carried by the various
factors, the grey relational grade of eqn. 3.34 is extended and defined as eqn. 3.35.

γi = ∑(k=1 to n) wk ξi(k)    (3.35)
Where, wk denotes the normalized weight of factor k for the performance
characteristic and n is the number of performance characteristics. The grey relational
grade γi represents the level of correlation between the reference sequence and the
comparability sequence. If the two sequences are identical, then the value of grey
relational grade is equal to 1. The grey relational grade also indicates the degree of
influence that the comparability sequence could exert over the reference sequence:
if a particular comparability sequence is more important to the reference sequence
than the other comparability sequences, its grey relational grade will be higher than
the other grey relational grades.
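Eqns. 3.33–3.35 can be sketched as below, using hypothetical normalized sequences (reference sequence of ones) and a distinguishing coefficient ζ = 0.5; equal weights reduce eqn. 3.35 to eqn. 3.34:

```python
import numpy as np

# Hypothetical grey-relational-generated data: 4 runs x 2 characteristics,
# already normalized to [0, 1]; the reference sequence is all ones.
X = np.array([
    [0.00, 1.00],
    [0.68, 0.90],
    [1.00, 0.00],
    [0.37, 0.62],
])
delta = np.abs(1.0 - X)   # deviation sequences, delta_oi(k)
zeta = 0.5                # distinguishing coefficient, 0 <= zeta <= 1

# Eqn 3.33: grey relational coefficient xi_i(k)
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Eqn 3.35 with equal normalized weights (equivalent to eqn 3.34)
w = np.array([0.5, 0.5])
grade = (xi * w).sum(axis=1)

best_run = int(np.argmax(grade))  # run closest to the reference sequence
print(grade.round(4), best_run)
```

The run with the highest grade is the one whose responses lie closest to the ideal (all-ones) reference sequence.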
To understand the relationship between the process parameters and the multi-
characteristic grey relation grades, a model has been developed using response
surface methodology as explained in section 3.6. The general expression for the
developed model is given in eqn.3.36.
YGrG = b0 + ∑(i=1 to k) bi xiu + ∑(i=1 to k) bii xiu² + ∑(i<j) bij xiu xju    (3.36)
Where YGrG is the grey relational grade considered as the multiple-performance
response. The coefficients and constants are denoted by their usual notations, as
mentioned and explained in section 3.6.
3.6 RESPONSE SURFACE METHODOLOGY
Response surface methodology (RSM) is a collection of mathematical and statistical
techniques useful for analyzing problems in which several independent variables
influence a dependent variable, or response, and the goal is to optimize this
response (Cochran and Cox [242]). In statistics, response surface methodology
(RSM) explores the relationships between several explanatory variables and one or
more response variables. The method was introduced by Box and Wilson [243] in
1951. The main idea of RSM is to use a sequence of designed experiments to obtain
an optimal response.
Some extensions of response surface methodology deal with the multiple response
problem [244]. Multiple response variables create difficulty because what is optimal
for one response may not be optimal for the others. Other extensions are
used to reduce variability in a single response while targeting a specific value, or
attaining a near maximum or minimum while preventing variability in that response
from getting too large.
In many experimental conditions, it is possible to represent the independent factors
in quantitative form. These factors can then be regarded as having a functional
relationship with the response, as given in Equation 3.37:
Y = Φ(x1, x2, …, xk) + er (3.37)
This represents the relation between the response Y and the k quantitative factors
x1, x2, …, xk. The function Φ is called the response surface or response function, and
the residual er measures the experimental error [242]. For a given set of independent
variables, the response traces a characteristic surface. When the mathematical form of Φ is not known,
it can be approximated satisfactorily within the experimental region by a polynomial.
The higher the degree of the polynomial, the better the fit, but the cost of
experimentation also rises accordingly.
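This trade-off can be seen in a small sketch (the data are synthetic; in practice Φ is unknown): a second-degree polynomial fits the sampled response more closely than a first-degree one, but estimating its extra coefficients demands more experimental runs.

```python
import numpy as np

# Approximate an unknown response by polynomials of increasing degree.
x = np.linspace(1.0, 3.0, 9)
y = np.exp(0.5 * x)  # stands in for the unknown response function
sse = {}
for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)        # least-squares polynomial fit
    resid = np.polyval(coeffs, x) - y
    sse[degree] = float(np.sum(resid ** 2))  # sum of squared errors
```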
For the present work, RSM has been applied for developing the mathematical
models in the form of multiple regression equations for the quality characteristic of
machined parts produced by AFM process. In applying the response surface
methodology, the dependent variable is viewed as a surface to which a
mathematical model is fitted. For the development of regression equations related
to various quality characteristics of AFM process, the second order response surface
has been assumed as:
Y = b0 + Σ(i=1 to k) bi xi + Σ(i=1 to k) bii xi² + Σ(i<j) bij xi xj + er (3.38)
This assumed surface Y contains linear, squared and cross product terms of variables
xi’s. In order to estimate the regression coefficients, a number of experimental
design techniques are available.
Regression analysis rests on the assumptions that the experimental errors (residuals)
are independent and normally distributed with constant variance. A limitation of
regression analysis is that it cannot be used for
extrapolation; the values of the independent variables must lie within the upper and
lower limits that were set at the time of testing. These models give the predicted
value with some error, which cannot be eliminated entirely. The error in the
prediction is estimated by the coefficient of determination (R²), an important
criterion for judging the validity of a regression model (Montgomery, 2001). If this
value is 0.8 or more, the relationship established by the regression model is
considered acceptable. The adjusted R² value indicates how well the model, fitted on
a sample, generalizes to the population. The standard error gives the error in the
predicted value of Y.
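These quantities can be sketched in a few lines (the data below are illustrative, not experimental values):

```python
# Coefficient of determination R^2 and adjusted R^2 (sketch).
def r_squared(y, y_pred):
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - yp) ** 2 for yi, yp in zip(y, y_pred))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n, p):
    # n = number of observations, p = number of predictors (excluding intercept)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

y      = [2.1, 3.9, 6.2, 7.8, 10.1]   # measured responses (illustrative)
y_pred = [2.0, 4.0, 6.0, 8.0, 10.0]   # model predictions (illustrative)
r2 = r_squared(y, y_pred)
adj = adjusted_r_squared(r2, n=5, p=1)
```

Adjusted R² is always at most R²; it penalizes the model for each extra predictor, which is why it is the better guide when generalizing from the sample to the population.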
There is a difference between the predicted and actual values of the response for the
same set of independent variables. Part of this difference is attributable to the
independent variables themselves, and part is due to random or experimental errors.
For analysis of the experimental data, checking of goodness of fit of model is very
much required. Model adequacy checking includes test for significance of regression
model and on model coefficients as suggested by Montgomery [245]. Analysis of
variance (ANOVA) is performed for this purpose. The statistical software package
MINITAB-15 [246] and Microsoft Excel (MS Office 2007) were used for developing the
models.
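The core of the significance test is the F-ratio of the regression and error mean squares; a minimal sketch with illustrative sums of squares (MINITAB reports the same quantities in its ANOVA table):

```python
# F-statistic for significance of regression:
# F = MS_regression / MS_error = (SSR / p) / (SSE / (n - p - 1))
def f_statistic(ss_reg, ss_err, n, p):
    ms_reg = ss_reg / p              # regression mean square
    ms_err = ss_err / (n - p - 1)    # error (residual) mean square
    return ms_reg / ms_err

# Illustrative values: 20 experimental runs, 4 predictors.
F = f_statistic(ss_reg=120.0, ss_err=10.0, n=20, p=4)
# The model is significant if F exceeds the critical F(p, n-p-1) value
# at the chosen significance level.
```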
3.7 MODELLING OF THE PROCESS CORRELATING QUALITY CHARACTERISTICS WITH
VARIABLE PARAMETERS
The functional relationship between the output response and the input process
parameters can be represented in a general form by the following expression.
MR = c P^a A^b M^c N^d L^e ε1 (3.39)
ΔRa = c1 P^a1 A^b1 M^c1 N^d1 L^e1 ε1 (3.40)
The model can be transformed into a logarithmic equation as shown below:
y1 = y − ε = ln c1 + a ln P + b ln A + c ln M + d ln N + e ln L + ln ε1 (3.41)
which represents the following linear mathematical equation:
η = b0x0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 (3.42)
where η is the true response of the input parameters on a logarithmic scale, x0 = 1
(a dummy variable), and x1, x2, x3, x4 and x5 are the logarithmic transformations of
the input parameters.
The linear model of eqn. 3.42 in terms of the estimated response can be represented
as
Ŷ = y − ε = b0x0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 (3.43)
where Ŷ is the estimated response based on the first order equation and y is the
measured response based on the experimental results on a logarithmic scale; ε is the
experimental error and the b values are the estimates of the corresponding
parameters. The constant c and the exponents a, b, c, d and e can be determined by
the method of least squares. The basic formula is given by eqn. 3.44:
b = (XT X)−1 XT Y (3.44)
where X is the calculation (design) matrix and (XT X)−1 is the variance-covariance
matrix; hence the b values can be determined using eqn. 3.44 (Montgomery, 2005).
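Eqn. 3.44 can be verified on a tiny synthetic data set (the numbers below are illustrative, not experimental values):

```python
import numpy as np

# Least-squares estimate b = (X^T X)^-1 X^T Y (eqn. 3.44).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])            # first column is the dummy variable x0 = 1
Y = np.array([1.0, 3.0, 5.0, 7.0])    # generated exactly as Y = 1 + 2*x1
b = np.linalg.inv(X.T @ X) @ X.T @ Y  # normal equations
```

In practice `np.linalg.lstsq` is preferred over forming the inverse explicitly, as it is numerically more stable; the normal-equations form is shown here only because it mirrors eqn. 3.44.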
If this model is not sufficient to represent the process, then the second order model
will be developed. The general equation for the response model has been
represented as:
Y = b0 + Σ(i=1 to k) bi xi + Σ(i=1 to k) bii xi² + Σ(i<j) bij xi xj (3.45)
The general second order model can be represented as:
Y2 = y − ε = b0x0 + b1x1 + b2x2 + b3x3 + b4x4 + b12x1x2 + b23x2x3 + b14x1x4
+ b24x2x4 + b13x1x3 + b34x3x4 + b11x1² + b22x2² + b33x3² + b44x4² (3.46)
where Y2 is the estimated response based on the second order equation. The
regression coefficients b0, b1, b2, b3, b4, b12, b23, b14, …, b44 are estimated by the
method of least squares, eqn. 3.44. The terms x1², x2², x3² and x4² are the quadratic
effects of the variables, and x1x2, x1x3, x2x3, x1x4, x2x4 and x3x4 represent the
interactions between them. In order to
understand the process, the experimental values are used to develop the
mathematical models using response surface method. In this work, commercially
available mathematical software package MINITAB-15 was used for the computation
of the regression constants and exponents.
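As a sketch of the same computation MINITAB performs, the second order model can be fitted by least squares on synthetic data for two coded factors (the true coefficients below are chosen arbitrarily for illustration):

```python
import numpy as np

# Fit a second-order model (cf. eqn. 3.46) for two factors by least squares.
rng = np.random.default_rng(seed=1)
x1 = rng.uniform(-1.0, 1.0, 20)
x2 = rng.uniform(-1.0, 1.0, 20)
# Noise-free synthetic response with known coefficients:
y = 5.0 + 2.0*x1 - 3.0*x2 + 1.5*x1*x2 + 0.5*x1**2 - 0.25*x2**2

# Design matrix: intercept, linear, interaction and quadratic columns.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the synthetic response is noise-free, the fit recovers the true coefficients exactly; with real experimental data the estimates carry the error discussed above.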
The variables are coded by taking into account the capacity and the limiting
condition of the process. The variables are transformed according to natural
logarithmic equation as follows:
x = (ln xn − ln xn0) / (ln xn1 − ln xn0) (3.47)
where x is the coded value of any factor corresponding to its natural value xn, xn1 is
the natural value of the factor at the +1 level and xn0 is the natural value of the
factor at the middle level.
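A minimal sketch of the coding transformation of eqn. 3.47 (the factor levels 5, 10 and 20 are hypothetical):

```python
import math

# Code a factor on the natural-log scale (eqn. 3.47):
# x = (ln xn - ln xn0) / (ln xn1 - ln xn0)
def code_factor(xn, xn0, xn1):
    return (math.log(xn) - math.log(xn0)) / (math.log(xn1) - math.log(xn0))

# With a middle level of 10 and a +1 level of 20, the levels 5, 10 and 20
# map to -1, 0 and +1 because they form a geometric progression.
codes = [code_factor(v, 10.0, 20.0) for v in (5.0, 10.0, 20.0)]
```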
The null hypothesis is assumed in the form
H0 : b0 = b1 = b2 = b3 = b4 = b5 = 0 (3.48)
that is, none of the parameters has a significant influence on the output response.
X =