

ANALYSIS OF THE PARTS/MACHINES GROUPING

PROBLEM IN GROUP TECHNOLOGY

MANUFACTURING SYSTEMS

by

OLIVER EKEPRE CHARLES, B.S., M.S. in I.E.

A DISSERTATION

IN

INDUSTRIAL ENGINEERING

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for

the Degree of

DOCTOR OF PHILOSOPHY

IN

INDUSTRIAL ENGINEERING

Approved

May, 1981


ACKNOWLEDGMENTS

I wish to express my appreciation to my advisor, Dr. Brian K. Lambert, for his guidance and help throughout the preparation of this dissertation. I am also grateful to the other members of my committee, Drs. Lee Alley, James R. Burns, Richard A. Dudek and Milton L. Smith, for their valuable suggestions. My thanks also go to the other members of the faculty of the Industrial Engineering Department whose technical assistance has made this research possible.

I am indebted to my parents, Chief Charles Owaba and Mrs. Animi Owaba, and my uncle, Mr. Claudius Abere, who have provided me with financial support throughout this research. I am also grateful to Mrs. Sue Haynes, who did excellent work in typing this material.

Finally, this work is dedicated to the memory of my late brother, Olaud A. Charles, who passed away in the year I was engaged in this research.

TABLE OF CONTENTS

ACKNOWLEDGMENTS ii
LIST OF TABLES vi
LIST OF FIGURES vii
NOTATION xi

I. INTRODUCTION 1
   1.1 The Batchshop Problem and Group Technology 2
   1.2 The Grouping Problem 3
       1.2.1 Cluster Analysis and the GT Problem 4
       1.2.2 Group Technology Types 5
       1.2.3 Summary of Research Problem 8
   1.3 Review of Past Studies 9
   1.4 Critique of Previous Work and Study Objectives 19
   1.5 Outline of Succeeding Chapters 22

II. GROUPING CRITERIA 23
   2.1 Introduction 23
   2.2 Production Cost of GT 24
   2.3 Objective Function in the PCA 25
       2.3.1 Set-Up Time-Similarity of Parts Model 26
   2.4 Grouping Objective in the PFA 49

III. CHARACTERIZATION OF GT PROBLEMS AND OPTIMALITY CONDITIONS 57
   3.1 Introduction 57
   3.2 Relation Between Similarity of Parts in the Same Groups, H, and Similarity of Parts in Different Groups, L 58
   3.3 Characterization of GT Problem Data 67
   3.4 Optimality Conditions 73

IV. GROUPING ALGORITHM 76
   4.1 Introduction 76
   4.2 Heuristic for Initial Solution 77
   4.3 Theoretical Background of Algorithm 88
       4.3.1 Grouping Procedure 95
       4.3.2 Evaluation of H, L 103
       4.3.3 Details of Grouping Algorithm: The Gradient Technique 106
       4.3.4 Computational Experience 115

V. NUMBER OF GROUPS AND PRODUCTION COST 122
   5.1 Introduction 122
   5.2 Study Procedure 122
   5.3 Test Problems 123
   5.4 Discussion and Presentation of Results 127
   5.5 Additional Machine Cost and Number of Groups 142
   5.6 Set-Up Cost and Number of Groups 143
   5.7 Production Cost and Number of Groups 145
   5.8 General Observations 147
   5.9 Possible Applications 148

VI. CONCLUSIONS 150
   6.1 Summary of Research 150
   6.2 Conclusions 150
   6.3 Recommendations for Further Study 154

LIST OF REFERENCES 157

APPENDIX A: LISTING OF FORTRAN IV PROGRAM OF THE GRADIENT ALGORITHM 161

APPENDIX B: OPITZ PART CLASSIFICATION SYSTEM 175

APPENDIX C: TABLES OF GROUP HOMOGENEITY (H), NUMBER OF ADDITIONAL MACHINES (A) AGAINST NUMBER OF GROUPS 179

LIST OF TABLES

Table Page
4.1 FLOWCHART DETAIL 106
4.2 SOLUTION VALUES AND ALGORITHM TIME OF THE GRADIENT TECHNIQUE 117
4.3 ALGORITHM TIME VERSUS NUMBER OF PARTS AND GROUPS 119
5.1 SUMMARY OF TEST PROBLEMS 126

LIST OF FIGURES

Figure Page
1 Disk-Like Part 12
1.1 Parts Characteristics Matrix (PCM) showing only design and processing features. Opitz coding system is used 14
1.2 Parts Characteristics Matrix (PCM) showing only machines indicated in process route 14
2.1 Set-up task-characteristic Binary Interaction Matrix 32
2.2 Grouping solution resulting from minimizing A 54
2.3 Grouping solution resulting from minimizing L 55
3.1(a) Parts for Example 4.1 60
3.1(b) The parts in Figure 3.1(a) shown in Opitz' code number system 61
3.1(c) Similarity coefficients of the parts shown in Figure 3.1(a) 61
3.2 Multiple Population Problem 68
3.3 Universal Population Problem 70
3.4 Natural Population Problem: 0 < H < 1, 0 < L < 1 70
3.5 Null-Relation Population Problem: H = 0, L = 0 70
4.1 Flowchart of the Preference Index Heuristic 81
4.2 Illustration of step operation 91
4.3 Flowchart of the Gradient Technique 107
4.4 Solution time versus number of parts 120
4.5 Algorithm time versus number of parts and groups 121
5.1 Group homogeneity versus number of groups; n = 60, Q = .6 128
5.2 Group homogeneity versus number of groups; n = 60, Q = .3 128
5.3 Group homogeneity versus number of groups; n = 80, Q = .6 129
5.4 Group homogeneity versus number of groups; n = 80, Q = .3 129
5.5 Group homogeneity versus number of groups; n = 100, Q = .6 130
5.6 Group homogeneity versus number of groups; n = 100, Q = .3 130
5.7 Group homogeneity versus number of groups; n = 120, Q = .6 131
5.8 Group homogeneity versus number of groups; n = 120, Q = .3 131
5.9 Group homogeneity versus number of groups; n = 86, Q = .53 132
5.10 Number of additional machines versus number of groups; n = 60, number of each machine type = 1, PFA 134
5.11 Number of additional machines versus number of groups; n = 80, number of each machine type = 1, PFA 134
5.12 Number of additional machines versus number of groups; n = 100, number of each machine type = 1, PFA 135
5.13 Number of additional machines versus number of groups; n = 120, number of each machine type = 1, PFA 135
5.14 Number of additional machines versus number of groups (Burbidge's); n = 43, Q = 1.0, number of each machine type = 1, PFA 136
5.15 Number of additional machines versus number of groups (Purcheck's); n = 82, Q = .45, number of each machine type = 1 136
5.16 Number of additional machines versus number of groups; n = 60, Q = .3, number of each machine type uniformly distributed (1, 3) 137
5.17 Number of additional machines versus number of groups; n = 60, Q = .6, number of each machine type uniformly distributed (1, 3) 137
5.18 Number of additional machines versus number of groups; n = 80, Q = .3, number of each machine type uniformly distributed (1, 3) 138
5.19 Number of additional machines versus number of groups; n = 80, Q = .6, number of each machine type uniformly distributed (1, 3) 138
5.20 Number of additional machines versus number of groups; n = 100, Q = .3, number of each machine type uniformly distributed (1, 3) 139
5.21 Number of additional machines versus number of groups; n = 100, Q = .6, number of each machine type uniformly distributed (1, 3) 139
5.22 Number of additional machines versus number of groups; n = 120, Q = .3, number of each machine type uniformly distributed (1, 3) 140
5.23 Number of additional machines versus number of groups; n = 120, Q = .6, number of each machine type uniformly distributed (1, 3) 140
5.24 An example of possible relation between set-up cost, additional machine cost, production cost and number of groups; n = 100, Q = .3, number of each machine type = 1, GT approach = PCA 144

NOTATION

GT - Group Technology
G - Grouping, Combination of Parts or Partition
n - Total Number of Parts
N - Number of Groups
M - Number of Machine Types
n_k - Number of Parts in Group k
PCA - Parts Classification Approach
PFA - Production Flow Analysis Approach
V(G) - Objective Functions Defined in Terms of G
R_ij - Similarity Coefficient Between Parts i and j
B_i - Set of Attributes Possessed by Part i
λ - Number of Objects in a Set
d_ij - Dissimilarity Coefficient
r_i - Process Route for Part i
Production Cost
Set-Up Cost
Cost for Additional Machines
A - Number of Additional Machines
H - Group Homogeneity
L - Link Between Groups
Set-Up Time for Task k
T - Set of Set-Up Time Elements
E - Set of Set-Up Task States
State of Set-Up Task k
Binary State Expressing the Possible Effect of Characteristic a on Set-Up Task k
Set-Up Task - Characteristic Interaction Matrix
State of Similarity
State of Dissimilarity
D - Dissimilarity State Vector
Number of Set-Up Tasks
Number of Characteristics of a Part in the PCA
W - Set of Weights of Characteristics of Parts
Relative Importance (Weight) of the kth Characteristic
Difference Between Number of Additional Machines of Type m and Those in the Conventional System
State Specifying Whether Machine Type m Occurs in Group k
Step Size in the Classical Gradient Method
T(n,N) - Number of Possible Partitions in a Grouping Problem of n Parts and N Groups
Gradient Vector of the Function V(G)
Set of All Possible V(G)
Grouping Procedure
Set of Partitions in the Neighborhood of a Partition Using the Transfer Step to Create Partitions
Same as Above, Except That the Interchange Step Is Used
Preference Index

CHAPTER I

INTRODUCTION

Group Technology (GT) is a problem-solving philosophy based on the premise "that many problems are similar and that by grouping similar problems, a single solution can be found to a set of the problems, thus saving time and effort" [46]. In an attempt to find an efficient solution to the numerous machine set-up problems that characterize batch production, researchers have applied Group Technology principles to organized batch production systems. These are known as Group Technology Systems. One of the basic GT design tasks is the formation of production groups such that parts with similar machine set-up requirements are processed in one group.

The first formal proposal of Group Technology Systems was presented by a Russian, Mitrofanov [33], in the 1940s. Mitrofanov demonstrated that Group Technology can provide a suitable framework to reduce machine set-up times. Several researchers since Mitrofanov have considered other problems associated with GT. Opitz of West Germany [35, 36] and others in Europe and America [2, 14, 19], for instance, dealt with methods of coding data concerning parts to be grouped. Other researchers have attempted to develop computer-based grouping procedures [8, 38, 39].

In this research one of the basic problems of Group Technology, viz., the optimal grouping of parts, will be examined. In particular, criteria of optimization, grouping procedures, and the effect of the number of groups on production cost will be analyzed.

1.1 The Batchshop Problem and Group Technology

Consider a conventional static batchshop problem characterized by a set of machines and a known number of parts. The machines are grouped into subsets: those that perform identical functions (drills, for example), laid out in departments, constitute a subset. Each part is described by an m-dimensional row vector of attributes (the term attributes will be used interchangeably with characteristics). These attributes may be parts design data: shape, size, material type, accuracy requirement, features to be processed, etc.; and/or planning data: machines in the process route, processing and set-up times, number of units to be produced, etc. For n parts, the row vectors of characteristics combine to form the Parts-Characteristics Matrix (PCM).

The differences in the characteristics of parts introduce a high degree of complexity to the batchshop problem. There is a complex flow pattern created by the differences in process routes; the functional layout can cause parts to cover long distances between departments. At each machine center, parts in queues may have different machine set-up problems. The net effect of this complexity is the difficulty of attaining the level of productivity often achieved in mass production situations. Previous studies indicate that the average part in a conventional batchshop spends only a relatively small percentage of its time in actual metal removal; a larger percentage of time is spent waiting for machines to be set up, waiting for other parts to be processed, or traveling between departments and machine centers [32]. Thus, the machine set-up and transportation problems appear to be potential areas where productivity can be improved. These problems are the primary focus of the Group Technology approach. The grouping of similar parts and the machines that process them may allow parts in a group to use the same machine set-up. Thus, to design a GT system, a parts/machines grouping problem has to be solved. This problem, which is the subject of this study, will now be formally stated.

1.2 The Grouping Problem

Let G be a Group Technology system and g_k be a production group comprising a set of similar parts. Let V(G) be a performance measure, which may be production cost or other cost-related functions. Now, consider a conventional batchshop with n parts presented in the form of a Parts Characteristics Matrix (PCM). The set of characteristics of each part is known and remains unchanged (including a fixed process route). The grouping problem is that of defining similarity of parts and assigning the n parts to production groups {g_k} on the basis of similarity, such that the performance measure V(G) is optimized.

The problem of grouping objects on the basis of similarity has been identified in several disciplines: plant taxonomy, information storage and retrieval, assignment of patients in hospitals, and counseling, to mention just a few. Problems of this type are known as Cluster Analysis problems [42]. To properly categorize the GT grouping problem, therefore, a brief outline of the classes of the clustering problem will be useful.

1.2.1 Cluster Analysis and the GT Problem

Four main classes of the clustering problem have been reported [42]: Hierarchical, Non-Hierarchical, Statistical and Non-Statistical. The Hierarchical problem concerns the grouping of objects at different levels of similarity so that the resulting cluster is a tree-like structure. The Non-Hierarchical problem considers all levels of similarity simultaneously, such that the resulting partition, G, does not have a hierarchical structure.

Classification of the clustering problem may also be in terms of whether the values of attributes are random variables or not. Thus, there is a Statistical grouping problem where some or all of the attributes (or their values) are not known with certainty. The grouping of the objects is, therefore, based on a random sample, and the values of attributes are approximated by a known distribution. In the Non-Statistical problem all of the attributes and their respective values are known. No statistical distributions are involved in the grouping process since actual values are available for use.

In the GT grouping problem all the attributes of parts and their respective values are known with certainty. A solution to the Hierarchical grouping usually results from solving the Non-Hierarchical problem at several levels of similarity. Thus, the Hierarchical problem is a much larger problem to solve. Besides, the reports on production group formation tend to suggest no tree-like structure [6, 35, 37]. In order to facilitate a (general) solution approach, the GT problem in this study will be formulated as a Non-Statistical and Non-Hierarchical grouping problem. Some GT grouping alternatives will now be described.

1.2.2 Group Technology Types

GT problems may be classified in three ways: (1) type of group membership, (2) type of attributes used in grouping parts, and (3) nature of the problem. Within the group membership classification there are three categories. The Exclusive Membership type concerns the grouping of parts such that an individual part is processed in one and only one group. The Non-Exclusive Membership type differs from the Exclusive Membership type in the sense that parts can be processed in more than one group. The Hybrid type is a combination of the conventional arrangement and either the Exclusive Membership or the Non-Exclusive Membership type.

In the attribute-based classification three types can be identified. There is the Parts Classification Analysis (PCA), which concerns the use of shapes, sizes, materials, processing features and accuracy requirements as the basic attributes for finding production groups. It is based on the rationale that parts which are similar in the listed characteristics may have similar set-up problems. The grouping criterion in the PCA approach is the minimization of machine set-up times.

Another attribute-based classification is known as Production Flow Analysis (PFA). It uses information in process routes to find groups. Parts that use identical machines form a production family; the grouping objective is the minimization of additional machines. There is also a Combined Approach, which combines all the characteristics used in both the PCA and PFA to find production groups.

There are two subclasses of GT problems whose classification is based on the nature of the problem. They may occur with any of the classes described above. The Unconstrained problem occurs when parts are to be grouped with no prior restrictions imposed on the number of groups. The GT system designer has the freedom of solving for the number of groups. The Constrained problem occurs when the number of groups is restricted to a known value before grouping takes place.

Whether the Constrained or Unconstrained problem occurs in a particular case is a management choice. In the Unconstrained problem the system designer has a higher degree of freedom in terms of an optimal solution. This may be the case where product design, process planning, plant location, etc., are variables. In changing over to GT from a conventional system, the mentioned factors remain relatively fixed. Since a large number of groups may mean a substantial increase in labor, equipment, workspace requirements and energy consumption, and these requirements may involve considerable investment, management is likely to exercise its influence by imposing a restriction on the number of groups. Consequently, the more practical problem to solve in industry is likely to be the Constrained problem. Besides, a solution to the Constrained problem may provide an insight into the Unconstrained one.

Besides the Constrained problem, the Hybrid, Non-Exclusive Membership and the Combined types appear equally interesting. However, there is very little background information available in the literature on these problems. The Hybrid and Non-Exclusive Membership types were suggested by Purcheck [37] and the Combined type by Gallagher and Knight [16], but were not pursued. Most of the reports of application of GT make mention of the Exclusive Membership, PCA and PFA types.

Because of resource limitations, this research will be restricted to the Constrained, Exclusive Membership, PCA and PFA GT types. The research problem is given a formal definition in the section that follows.

1.2.3 Summary of Research Problem

In compact form, the grouping problem is as follows:

    Optimize V(G)

subject to

     N
     Σ n_k = n                                (1.1)
    k=1

    g_k ≠ φ, for all k                        (1.2)

    g_i ∩ g_k = φ, for all i and k, i ≠ k     (1.3)

    G = {g_k}                                 (1.4)

    k = 1, 2, 3, ..., N                       (1.5)

where

    φ = an empty set
    n_k = number of parts in group k
    n = total number of parts in the grouping problem
    g_k = set of parts in production group k
    G = the set of production groups or physical representation of the GT system
    N = fixed number of groups.

Observe that Equations (1.1), (1.2) and (1.3) specify the Exclusive Membership problem, while Equation (1.5) states the Constrained problem. The objective "optimize V(G)" may imply minimizing total machine set-up cost in the PCA, or minimizing the number of additional machines in the PFA.
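The feasibility conditions (1.1) through (1.5) can be checked mechanically for any candidate partition. A minimal sketch in Python (illustrative only; the dissertation's own implementation is the FORTRAN IV program of Appendix A, and the function name here is invented):

```python
def is_feasible_grouping(groups, parts, N):
    """Check (1.1)-(1.5): groups must partition the part set into
    exactly N non-empty, mutually exclusive production groups."""
    if len(groups) != N:                     # (1.5): fixed number of groups
        return False
    if any(len(g) == 0 for g in groups):     # (1.2): no empty group
        return False
    union, total = set(), 0
    for g in groups:
        if union & g:                        # (1.3): groups must be disjoint
            return False
        union |= g
        total += len(g)
    # (1.1): every one of the n parts is assigned exactly once
    return union == set(parts) and total == len(parts)

parts = range(1, 7)                          # six parts, labeled 1..6
G = [{1, 4}, {2, 3, 6}, {5}]                 # candidate partition, N = 3
print(is_feasible_grouping(G, parts, N=3))   # True
```

Any transfer of a part between groups preserves (1.1) and (1.3) automatically, which is why neighborhood search over such partitions is natural for this problem.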

1.3 Review of Past Studies

The selection of attributes for the purpose of defining similarity of parts, methods of coding grouping data, and grouping procedures for the Unconstrained problem have received major emphasis in the GT literature. These and other related subjects will be reviewed in the following paragraphs.

(i) Choice of Attributes (of Parts for Grouping Purposes). The use of design features as the basis for grouping similar parts was introduced by Mitrofanov [33]. His original idea of Group Technology was to reduce set-up times by setting up one machine to process parts with similar features. It did not include the subdivision of parts and machines into mutually exclusive production cells. It was Opitz of West Germany [34, 35] who first introduced the concept of production cells--a set of similar parts and the facilities that process them. To facilitate the process of sorting similar parts into groups on the basis of design features, Opitz developed a comprehensive system for coding and classifying parts [36]. It was the use of Opitz' system for coding and classifying parts into groups that was earlier referred to as the Parts Classification Analysis (PCA) approach.

In contrast to Opitz's, Burbidge's definition of similarity was based on information from process routes [6, 7]. In this approach, parts which use identical machines or have similar flow patterns are sorted into one production group. This is the Production Flow Analysis (PFA) mentioned earlier. A similar approach was suggested by El-Essawy [12].

Associated with the approaches of Opitz and Burbidge, respectively, are two formats for coding and presenting grouping data: the Parts Coding and Classification System and the Binary Matrix.

(ii) Parts Coding and Classification Systems. A coding and classification system is a scheme that specifies important design characteristics of parts by means of symbols. With such a scheme a part can be described by a code number whose individual digits represent particular characteristics. A level of variation of each characteristic is indicated by the value that appears at the corresponding digit position. The first coding and classification system was developed by Opitz [34, 35, 36]. In Opitz' system, high values in a digit position indicate complex features while low values represent simple ones. Thus, the simple part in Figure 1 may be represented by the code number 00100, where, starting from the left-hand side,

    0 indicates a rotational part with length/diameter ≤ 0.5
    0 indicates machined constant diameter
    1 indicates smooth bore
    0 indicates plane surface machining
    0 indicates no other holes.
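Digit-position coding of this kind can be read mechanically, one position at a time. A small Python illustration (the lookup table below contains only the digit meanings quoted for this example part, not the full Opitz table):

```python
# Partial digit-meaning table: position -> {digit value: meaning}.
# Only the entries needed for code 00100 are filled in here.
OPITZ_DIGITS = {
    0: {"0": "rotational part, length/diameter <= 0.5"},
    1: {"0": "machined constant diameter"},
    2: {"1": "smooth bore"},
    3: {"0": "plane surface machining"},
    4: {"0": "no other holes"},
}

def decode(code):
    """Read each digit position independently of the others."""
    return [OPITZ_DIGITS[pos].get(digit, "not tabulated")
            for pos, digit in enumerate(code)]

print(decode("00100")[2])   # smooth bore
```

That each position can be decoded without consulting its neighbors is precisely the polycode property discussed below; in a monocode the lookup for a digit would also depend on the preceding digits.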

Since Opitz' system, other researchers have developed several coding and classification systems [12, 4, 14, 18, 23, 28]. Two types of coding systems can be identified: the monocode and the polycode. In a monocode, the meaning of a digit in the code number depends on that of the previous digit (much like the nodes of a decision tree). For the polycode, the meaning of a digit is independent of the other digits. Polycode systems are suitable for Group Technology manufacturing purposes, while monocodes are best for design purposes [4, 11]. The Opitz' system is a polycode utilizing nine digits. Other polycodes are MICLASS [23], CODE [11] and TEKLA [11]. An example of a monocode is the Brisch system [18].

Figure 1. Disk-Like Part

A comparative study of the polycodes by Eckert [11] showed numerous commonalities in the basic characteristics chosen to describe a part. Differences are evident only in minute details. However, Eckert reported that the MICLASS and Opitz' systems code the most relevant and greatest content of information. Another polycode system not mentioned by Eckert, but which appears suitable for production planning, is the SAGT coding system [14]. Researchers have used coding and classification systems in various GT-related studies. Coding systems have been used in product design [18], grouping of parts [22, 35], determination of machining parameters [14], and analysis of component statistics [17, 28].

In this study, the Opitz system will be used to code parts in the PCA approach because it has been used by others [22, 34], it is well-documented, and the author is familiar with the system. The Opitz system is described in greater detail in Appendix B. An example of a Parts Characteristics Matrix (PCM) coded with Opitz' system is presented in Figure 1.1.

Figure 1.1. Parts Characteristics Matrix (PCM) showing only design and processing features. Opitz coding system is used.

Figure 1.2. Parts Characteristics Matrix (PCM) showing only machines indicated in process route.

(iii) Binary Matrix for the PFA. The system for coding the PFA is relatively simple. Most of the data required to form production groups have been presented in the form of a Binary Matrix [6, 7, 37]. An example is shown in Figure 1.2. This is another form of the PCM matrix, where the columns indicate the machines, the rows represent parts, and the elements indicate whether a part uses a particular machine or not. Thus,

    (PCM)_ij = 1, if part i is processed by machine j
               0, otherwise.
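A short Python sketch of building such a Binary Matrix from process routes (the routes and dimensions here are illustrative, not the exact entries of Figure 1.2):

```python
routes = {                   # part -> set of machines in its process route
    1: {1, 2, 5},
    2: {1, 3, 4},
    3: {3, 5},
    4: {1},
}
M = 5                        # number of machine types

def build_pcm(routes, M):
    """(PCM)_ij = 1 if part i is processed by machine j, 0 otherwise."""
    return [[1 if m in routes[p] else 0 for m in range(1, M + 1)]
            for p in sorted(routes)]

for row in build_pcm(routes, M):
    print(row)
```

In the PFA, two rows of this matrix that agree in every column correspond to parts that can share a production group with no additional machines.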

(iv) Grouping Procedures. The choice of attributes and a system for coding them provide a basis for the identification of similar parts. The assignment of parts into groups is accomplished with a grouping procedure. One approach by Mitrofanov [33] makes use of a Composite Part Concept. A composite part is one whose attributes are carefully chosen to represent those required in a production group. All parts with the set or a subset of the representative attributes are assigned to a group.
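The assignment rule of the Composite Part Concept reduces to a subset test. A sketch in Python, with invented part names and attributes: a part enters the group when its attribute set is contained in the composite part's representative set.

```python
# Representative attribute set of a hypothetical composite part.
composite = {"rotational", "smooth bore", "plane surface", "external thread"}

parts = {
    "P1": {"rotational", "smooth bore"},
    "P2": {"rotational", "plane surface", "external thread"},
    "P3": {"prismatic", "pocket milling"},      # does not fit this composite
}

# A part joins the group when its attributes are a subset of the composite's.
group = [name for name, attrs in parts.items() if attrs <= composite]
print(group)   # ['P1', 'P2']
```

Parts left unassigned (P3 here) would be matched against the composite part of some other group, which is why the procedure is repeated until every part belongs to a group.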

A similar technique, called the Code Number Field Approach, was utilized by Opitz [35]. This approach is similar to the Composite Part Concept except that the chosen attributes are represented in the form of an Opitz' code number. A complete grouping of parts may involve the formation of several Code Number Fields and the assignment of parts in an iterative manner until all parts belong to groups. It is a manual procedure requiring production know-how and intuition. It applies only to the PCA approach, and the technique implicitly assumes the Unconstrained problem.

In the PFA, some computer grouping procedures were reported by Purcheck [37], Carrie [8], and Rajagopalan and Batra [38], respectively. Purcheck's is an optimization approach using process routes as variables and linear programming techniques to find production groups. The objective is the minimization of additional machines. Purcheck reported that his approach can solve only small problems [37].

The procedure reported by Carrie was developed by Ross in plant taxonomy [39, 40]. This approach requires the computation of similarity indices or coefficients. Similarity indices indicate the strength of similarity between any pair of parts. Based on such indices, a "nearest neighbor" method in Cluster Analysis was used to assign parts to groups. Rajagopalan's approach, which also utilizes similarity coefficients, differs from Carrie's in that a graph-theoretic technique replaces the nearest neighbor grouping method. Like the Code Number Field Approach of Opitz, these PFA approaches are designed for the Unconstrained problem. Because the use of similarity coefficients is important in grouping problems, it will be further discussed.

(v) Similarity Coefficients. Most techniques for solving Clustering problems make use of similarity coefficients. A similarity coefficient is a quantitative representation of the similarity between a pair of parts. There are two types: association and distance coefficients. Distance coefficients measure how far apart two objects may be in terms of the values of corresponding attributes. They are applicable to attributes whose values can be measured. On the other hand, association coefficients measure the degree of similarity between entities with binary or ranked attributes [3, 44]. Thus, in addition to the PCM matrix, a grouping problem may be presented in the form of a similarity coefficient matrix. It was in the form of a similarity coefficient matrix that Carrie [8] and Rajagopalan [38] represented their respective GT grouping problems. Both researchers used association coefficients, which may be regarded as a transformation of the PCM matrix. In transforming the PCM matrix to a similarity coefficient matrix, a function is often used. Thus, let R_ij be the similarity coefficient between parts i and j. In Cluster Analysis, R_ij has been defined in several ways. One most commonly used for multi-attribute problems is defined as follows:

    R_ij = λ(B_i ∩ B_j) / λ(B_i ∪ B_j)        (1.6)

where

    B_i = set of attributes possessed by object (part) i
    B_j = set of attributes possessed by object (part) j

and λ denotes the number of objects in a set.

Equation (1.6) expresses R_ij as the ratio of the number of attributes common to parts i and j to the total number of distinct attributes of both parts. This definition was the form used by Carrie in GT and has been reported as being used in several disciplines of Cluster Analysis [5, 25, 41].

The properties of R_ij are as follows:

    (a) 0 ≤ R_ij ≤ 1
    (b) R_ij = 1 means maximum similarity
    (c) R_ij = 0 means minimum similarity
    (d) R_ij = R_ji; symmetric property
    (e) R_ii = 0, for convenience.

The complement of similarity is dissimilarity. Thus, if d_ij is the dissimilarity between parts i and j, then

    d_ij = 1 - R_ij        (1.7)

In some Clustering problems, the dissimilarity coefficients, instead of similarity coefficients, are used [25].
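Equations (1.6) and (1.7) translate directly into code. A brief Python sketch with made-up attribute sets:

```python
def similarity(B_i, B_j):
    """Equation (1.6): R_ij = λ(B_i ∩ B_j) / λ(B_i ∪ B_j),
    shared attributes over distinct attributes of the pair."""
    return len(B_i & B_j) / len(B_i | B_j)

B1 = {"rotational", "smooth bore", "plane surface"}
B2 = {"rotational", "smooth bore", "cross hole"}

R = similarity(B1, B2)   # 2 shared attributes / 4 distinct = 0.5
d = 1 - R                # Equation (1.7): dissimilarity coefficient
print(R, d)              # 0.5 0.5
```

Computed over every pair of parts, these values fill the similarity coefficient matrix used by Carrie and Rajagopalan; the symmetric property (d) means only the upper triangle needs to be stored.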

1.4 Critique of Previous Work and Study Objectives

It appears that the two sets of characteristics for

defining similarity in the PCA and PFA, respectively, are

well received by researchers [17, 22, 27]. This seems to

be the case with the methods of coding the attributes.

However, the literature is sparse on experimental or analy­

tical studies concerning the behavior of GT systems. Know­

ledge of the behavior of different GT alternatives in vari­

ous conditions may be valuable for GT system designers.

Though extensive information concerning the benefits of

Group Technology is reported in the literature, it appears

to be accoijnts of empirical examples [2, 32]. The problems

created by this lack of experimental approach was stimmarized

by one GT expert as follows: "Attention has been drawn to

the possible dangers of developing general solutions from

basis of particular examples. In some GT cases, assump­

tions have been made empirically or on narrowly-based data,

and have led to fallacious conclusions. Much which has

been written about set-up costs falls into that category...

Each of the hoped for benefits should be tested carefully


for validity in light of particular circumstances of the

manufacturing concerned before a decision to implement

cell production is taken" (Craven [9]).

Several factors may be responsible for this lack

of interest in experimental or analytical studies. One

possible reason is the absence of efficient grouping

techniques suitable for most GT alternatives and range

of problem parameters. For instance, there have been no

reports of industrial applications of the computer grou­

ping procedures previously discussed. One possible reason

may be that they are intractable for practical problems.

Besides, they cannot solve the Constrained problem. Re-

call that a problem is considered constrained when the

number of groups is specified. The importance of the

number of groups as a parameter in GT was referred to in

Section 1.2. Thus, an algorithm for the Constrained

problem may be capable of solving problems over a wide

range of system parameters. The development of a grouping

algorithm for the Constrained problem both in the PFA and

PCA will be pursued in this study.

The grouping problem stated in Section 1.2.4 is a

combinatorial optimization problem; a systematic solution

may require an explicitly-defined objective function. A

function which introduces evaluation difficulties may be

inefficient. On the other hand, one with large storage


requirements may be infeasible for computer procedures.

The use of an explicitly-defined grouping objective was

not reported in the GT literature. The discussed proce­

dures of Carrie [8] and Rajagopalan [38] did not state

any objective function. That of Purcheck [37] implied

the minimization of additional machines but was not ex­

plicitly defined.

In this study, explicitly-defined objective func-

tions for the PCA and PFA, respectively, will be formu-

lated.

A solution to the grouping problem only sets the

stage for GT implementation. The success or failure of

such systems may depend on control of operations and the

structure of the problem. The structure of some problems

may be such that similar parts exist in numbers enough to

form profitable production groups. In other cases, parts

may be dissimilar in all respects, therefore unsuitable

for GT. If this is true, then some characterization of

GT problems may be useful in a preliminary attempt to

decide if a conventional batchshop should be converted to

GT. Such a characterization will be one of the study ob­

jectives of this research. No such study was reported in

the literature.

A knowledge of any relation between number of groups

and production costs may be valuable in studying GT system

behavior. An increase in number of groups, for instance,


may necessitate extra machines, labor and space. Extra

machines may, in turn, affect machine loads, energy con-

sumption, in-process inventory, investment cost, etc.

The possible impact the number of groups may have on produc-

tion costs was ignored by past studies. One of the ob­

jectives of this research is to investigate if a useful

relationship exists between number of groups and produc-

tion costs.

In summary, then, the objectives of this study are:

(1) Formulation of grouping criteria

(2) Characterization of GT problem data

(3) Development of a grouping algorithm for the Constrained problem

(4) Investigation of the relationship between number of groups and production costs.

1.5 Outline of Succeeding Chapters

A grouping criterion is important to this study.

The characterization of GT problems and the optimality

conditions for grouping parts may depend on grouping cri­

terion. Hence, Chapter II will be devoted to the formula­

tion of grouping criteria. In Chapters III and IV, the

characterization of GT problems and the grouping algorithm

will be presented, respectively. In Chapter V, the rela­

tion between number of groups and production costs will be

discussed. Finally, the summary of research, conclusions

and recommendations for further study will be outlined in

Chapter VI.


CHAPTER II

GROUPING CRITERIA

2.1 Introduction

The efficiency and effectiveness of a solution

technique for practical grouping problems may depend on

the choice of a grouping criterion. A criterion which

requires a large amount of data may introduce storage

problems, thereby limiting the use of computer grouping

procedures. On the other hand, one which involves eval­

uation of many terms may be inefficient. One that appears

obvious is production cost, but the difficulties associ­

ated with the evaluation of components like in-process

inventory, penalty cost for late delivery, machine idle­

ness cost, etc., may render its use inefficient. The

purpose of this chapter is to discuss grouping criteria

in terms of production cost, storage requirements, ease

of evaluation and optimality conditions.

In the first section of this chapter, production

cost in GT will be discussed; in the sections that follow,

the formulation of other criteria which may be suitable

for the PCA and PFA, respectively, will be presented.


2.2 Production Cost in GT

In studying the behavior of production cost in

machine shops, researchers have considered some of the

following cost components:

1. Machining cost

2. Set-up cost

3. In-process inventory cost

4. Penalty cost due to tardiness

5. Machine idleness cost

6. Transportation cost

7. Investment cost for additional equipment.

Gupta and Dudek [20], for example, considered the

first five in the scheduling aspect while Iwata and Takano

[24] used the first six in studying the process planning

problem. In the GT grouping problem researchers have con­

sidered machine set-up and cost for additional machines

as the most critical [13, 34, 37, 38]. In the PCA approach,

for instance, the minimization of machine set-up is the

grouping criterion; in the PFA approach, it is the minimi­

zation of the cost of additional machines [7, 8]. In an

integer programming model to determine which parts to pro­

duce in a GT system, Dedich, Soyster and Ham [10] formu­

lated production cost as the sum of set-up cost and cost

for additional machines. Thus,


P_c = S_c + A_c (2.1)

where P_c is production cost, S_c is set-up cost and A_c is cost

for additional machines. These researchers argued that, for

the exclusive purpose of forming production groups, set-up

and additional machine costs are the most important compo­

nents of production cost.

The formulation of alternative grouping criteria

which may relate to machine set-up cost and cost of addi­

tional machines will now be discussed.

2.3 Objective Function in the PCA

The ideal information for grouping parts in the PCA

is machine set-up time data. However, time data often

require large amounts of storage which may preclude compu-

ter grouping procedure for large problems. Consider a

moderate-size problem of 400 parts and 70 machines, for

instance. Seventy matrices of 400 x 400 set-up time data

may be needed in order to group parts. In practice, much

larger problems have been reported [22, 35]. Besides, it

is doubtful if such set-up time data for large batchshops

are available in industry. One GT researcher observed that

"...sequential set-up times would be a useful criterion for

part family classification, but the data are seldom avail­

able in practice" [47].


To overcome these drawbacks, researchers have used

"similarity of parts" as an equivalent criterion. Using

this criterion, parts are grouped so that those processed

in one group are most similar with respect to shape, size,

material, finish requirement and surface to be machined.

Implicit in this approach is the assumption that maximi­

zation of group similarity is equivalent to the minimization

of machine set-up time. That this appears to receive uni­

versal acceptance may be judged from numerous statements

of the following type in the literature: "The similarity

of components within a family allows resetting times be­

tween batches to be minimized by the use of rationalized

tooling arrangements" [27]. A real-life experimental

verification was reported by White and Wilson [47]. These

researchers demonstrated the minimization of machine set­

up time by processing parts in subsets such that the ma-

chine is set up for each subset of similar parts. The

small size of the problem (6 parts) allowed similar parts

to be identified by inspection. However, a mathematical

model relating set-up time to similarity of parts has not

been reported. Formulation of such a model is now dis­

cussed.

2.3.1 Set-Up Time - Similarity of Parts Model

Let f(t, H, Q) be a function relating set-up time

to similarity of parts. Thus,


S_t = f(t, H, Q) (2.2)

where

t = time parameter

H = a measure of similarity of parts in a group

Q = parameter relating t to H

S_t = total set-up time.

To explain this model it is assumed that the total time

it takes to set-up a machine for a set of parts can be

separated into sequence dependent and sequence indepen­

dent components. The sequence independent component, which

may include time to adjust workpiece, tool, etc., is a

constant. The sequence dependent component may be a variable for a

given problem; examples may include time to change cutting

tool, jig, fixture, machining parameters, etc. We restrict

the discussion to the sequence dependent set-up time.

Now, consider the problem of setting up a machine

for a given operation. Depending on the particular machine

and the characteristics of the part concerned, some or all

of the following set-up tasks may be performed.

1. Change jig

2. Change fixture

3. Change attachment

4. Change cutting speed


5. Change feed rate

6. Change depth of cut

7. Change cutting tool

8. Others.

The total amount of set-up time may depend on the type and

number of set-up tasks performed (or not performed). Thus,

for a given problem, a set-up task may assume one of two

states: "performed" or "not performed". Let e_k represent

the state of task k and E the set of elements {e_k}.

e_k is defined as

e_k = 1, if task k is performed; 0, if task k is not performed

and

E = (e_1, e_2, ..., e_v)^T


where v is the total number of tasks. Let t_k represent the

time unit it takes to perform task k. Also, let T be a row

vector representing the set of time units corresponding to

the set of tasks. Thus,

T = (t_1, t_2, ..., t_k, ..., t_v) (2.3)

The set-up time, S_ijm, for the m-th operation of part i

after processing part j on the same machine can be expressed

as the vector product of T and E:

S_ijm = TE (2.4)

Expressed in terms of the elements of T and E,

S_ijm = t_1 e_1 + t_2 e_2 + ... + t_k e_k + ... + t_v e_v = Σ_{k=1}^{v} t_k e_k (2.5)

The above equation shows that the set-up time of any opera­

tion is the sum of the product of the time taken to perform

individual tasks and the corresponding status of the task.

The value of t_k will be independent of the particu-

lar part being processed. It depends on the machine used

and can be determined by motion and time study. For any

given machine t_k remains approximately constant. Hence,

sequence dependent set-up times for a fixed process route


problem depend on the vector E; it will be called the Task

Elimination Vector since the number of zero elements pre-

sent indicates the number of set-up tasks which do not have

to be performed. E can be expressed as a function of the

similarity of parts as discussed in the following section.

(i) Derivation of Task Elimination Vector, E.

Whether or not a set-up task must be performed, e_k, depends

on the similarity in one or more characteristics of two

parts processed in sequence. For example, the task "change

jig" may be eliminated if two parts processed in sequence

are similar in overall size and shape. But similarity in

material type may not cause "change jig" to be eliminated.

Thus, considering "yes" or "no" answers to questions of the

following type, "can similarity of parts i and j with re-

spect to the a-th characteristic cause the elimination of

set-up task k when both parts are processed in sequence?",

a binary relation between set-up tasks and characteristics

of parts can be defined. Let q_ka represent the binary

relation.

q_ka = 1, if task k can be eliminated by similarity in characteristic a;
       0, if task k cannot be eliminated by similarity in characteristic a


Let Q represent the set of elements {q_ka}. Hence, Q is a

v × u binary matrix where v is the number of tasks and u

the number of characteristics. Matrices that define bi-

nary relations between two sets of quantities are used

extensively in System Analysis [42]. An example of Q

involving seven set-up tasks and four characteristics of

parts is presented in Figure 2.1. As shown in the figure,

Q implies a potential state indicating which set-up task

can or cannot be eliminated due to similarity of parts

with respect to characteristics.

The phrases, "potential state" and "similarity of

parts" are important. They imply that the elimination of a

set-up task cannot take place without a state of similarity.

Let this state of similarity between parts i and j, to be

processed in sequence, be designated as D_ij. D_ij is a Dis-

similarity State Vector with u elements; each element δ_ijk

represents the state of dissimilarity of i and j with re-

spect to the k-th characteristic. Vector representation of

similarity (dissimilarity) is common in Cluster Analysis

[1, 3]. Thus,

δ_ijk = 1, if part i is dissimilar to part j with respect to the k-th characteristic;
        0, if part i is similar to part j with respect to the k-th characteristic


Machine Set-Up Tasks

Change jig

Change fixture

Change attachments

Change cutting tool

Change cutting speed

Change feed rate

Change dept of cut

Characteristics of

Parts

OV

ER

AL

L SH

AP

E

& S

IZE

I-l

1

0

0

0

0

0

)IM

EN

SIO

NS

&T

YPE

3F

FEA

TUR

ES

TO B

E

0

0

1

1

0

0

0

MA

TE

RIA

L T

YP

E

0

0

0

1

1

1

1 F

INIS

H A

ND

T

OL

ER

AN

CE

0

0

0

0

1

1

1

0 =

1

h d ^ 0

0

0

0

0

0

1

1

0

0

0

0

0

0

1

1

1

1

0 1

0

0

0

1

1

1

Figure 2.1. Set-up task-characteristic Binary Interaction Matrix.

and

D_ij = (δ_ij1, δ_ij2, ..., δ_ijk, ..., δ_iju)^T

If i and j are dissimilar in all characteristics, then all

the elements of D_ij will be 1; they will be zero if i and

j are identical.

E, the Task Elimination Vector, depends on the two

states: Q and D_ij; Q specifies the potential tasks that can

be eliminated while D_ij ensures (or fails to ensure) the

elimination. Thus,

E = Q × D_ij (2.6)

where the operator "×" is a boolean multiplication. With

this understood, E can now be expressed as

E = QD_ij (2.7)

Observe that the definition of Q does not depend on

any particular machine or part. It is a natural state and

Page 46: Annrovp - TDL

34

may remain constant with time. But the vector D_ij is a

variable that depends on the similarity of parts processed

in sequence. Thus, Equation (2.7) shows that the number

and type of sequence dependent set-up task to be performed

in a given problem may be controlled by the choice of parts

to be processed in sequence. The choice can be done on the

basis of similarity of characteristics.

To show how similarity of parts relates to set-up

time, replace E with (QD_ij) in Equation (2.4). Then,

S_ijm = T(QD_ij) (2.8)

Since Q and T are constants, the above expression shows that

sequence dependent set-up time, S_ijm, is directly propor-

tional to the state of similarity, D_ij, where i and j are

processed in sequence. It also follows that

S_ijm = S_jim (2.8a)

since D_ij = D_ji and assuming that the operation number (m)

is the same for both parts. Notice that for the above

equality to hold, S_ijm, S_jim must be strictly sequence-

dependent set-up times; all the sequence-independent time

elements (adjustment of cutting tool, machine tool and work-

piece; coolant related set-up tasks, set-up of machines at

the beginning of a schedule period, etc.) cannot be included

in the observations. The computation of S_ijm using Equation

(2.8) is illustrated in Example 2.1.

Page 47: Annrovp - TDL

'*, •/

Example 2.1

T = (4, 8, 6, 6, 4, 3, 5)

Q = | 1 0 0 0 |
    | 1 0 0 0 |
    | 0 1 0 0 |
    | 0 1 1 0 |    from Figure 2.1
    | 0 0 1 1 |
    | 0 0 1 1 |
    | 0 0 1 1 |

D_12 = (1, 0, 1, 0)^T    D_13 = (0, 1, 0, 0)^T    D_23 = (1, 0, 0, 1)^T

S_121 = T(QD_12) = (4, 8, 6, 6, 4, 3, 5)(1, 1, 0, 1, 1, 1, 1)^T = 30

Similarly, S_131 = 12 and S_231 = 24.

These values show that for the first operation of

part 1, sequence-dependent set-up time is 30 units when

processed after part 2 and 12 units when processed after

part 3; both parts are processed on the same machine.

For part 2, set-up time is 24 when processed after part 3.
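The computation of Example 2.1 can be sketched in code. The T vector and Q matrix are those of the example and Figure 2.1; the D vectors are as reconstructed above, and the boolean product follows Equation (2.8).

```python
# Sequence-dependent set-up time S_ijm = T(Q D_ij), Eq. (2.8).
# The boolean product Q D_ij yields the Task Elimination Vector E;
# the dot product of T with E gives the set-up time.

T = [4, 8, 6, 6, 4, 3, 5]   # time per set-up task (Example 2.1)

Q = [                        # task/characteristic matrix (Figure 2.1)
    [1, 0, 0, 0],            # change jig
    [1, 0, 0, 0],            # change fixture
    [0, 1, 0, 0],            # change attachments
    [0, 1, 1, 0],            # change cutting tool
    [0, 0, 1, 1],            # change cutting speed
    [0, 0, 1, 1],            # change feed rate
    [0, 0, 1, 1],            # change depth of cut
]

def setup_time(T, Q, D):
    """Boolean-multiply Q by dissimilarity state vector D, then dot with T."""
    E = [1 if any(q and d for q, d in zip(row, D)) else 0 for row in Q]
    return sum(t * e for t, e in zip(T, E))

D12 = [1, 0, 1, 0]           # parts 1 and 2 differ in characteristics 1 and 3
D13 = [0, 1, 0, 0]
D23 = [1, 0, 0, 1]

print(setup_time(T, Q, D12))  # 30, as in Example 2.1
print(setup_time(T, Q, D13))  # 12
print(setup_time(T, Q, D23))  # 24
```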

(ii) Set-up Times and Dissimilarity Coefficients.

The Dissimilarity State Vector, D, may be adequate to re­

present the degree of dissimilarity of parts in a group

provided the number of parts in the group is two. With

the number of parts greater than two, some other represen-

tation may be more appropriate. One way to do this is to

replace the vector representation with its magnitude. This

magnitude, as often defined in Cluster Analysis, is similar

to the similarity coefficient defined in Section 1.2 [25].


Thus, dissimilarity between any pair of parts (i, j) is

the ratio of the number of characteristics in which parts

are dissimilar to the total number of characteristics.

Hence,

d_ij = (W D_ij) / (W D_io) (2.9)

where

d_ij = dissimilarity coefficient (magnitude of dissimilarity)

W = set of weights of the characteristics;
    w_k ∈ W is the relative importance of the k-th characteristic.

By definition,

δ_iok ∈ D_io = 1 for all i and k.

In terms of the elements of the vectors D_ij, W, D_io,

d_ij = Σ_{k=1}^{u} w_k δ_ijk / Σ_{k=1}^{u} w_k δ_iok (2.10)

where u is the number of characteristics. Now, similarity

and dissimilarity are complements [39]. Hence, both quan­

tities share common properties. If s_ijk is the binary

variable defining the similarity of parts i and j with


respect to characteristic k, then

s_ijk = 1 - δ_ijk (2.11)

and

d_ij = 1 - R_ij (2.12)

where R_ij is the magnitude or coefficient of similarity de-

fined in terms of the same characteristics of parts as d_ij.

Like the properties of R_ij (see Section 1.2), those of d_ij

are as follows:

(1) 0 ≤ d_ij ≤ 1

(2) d_ij = 1 means maximum dissimilarity

(3) d_ij = 0 means minimum dissimilarity

(4) d_ij = d_ji.

Considering the proportional relation of set-up time

to dissimilarity, for two parts in a group, the sequence-

dependent time, S^. , may be proportional to d ^ provided

the weighting parameters, w^, are properly chosen. The de­

termination of the elements of the vector W will be dis­

cussed subsequently. Thus, S^j^ A d ^ (A means proportional);

or S.. A(l - R..). That is, ijm — ij

_il53L = a constant (2.13) 1-R,j


for

0 ≤ R_ij < 1.

Equation (2.13) shows that as similarity of parts increases,

set-up time decreases. Maximum similarity corresponds to

minimum set-up time; the converse also holds true. Hence,

Equation (2.13) provides a basis for defining a similarity-

based grouping performance measure.

To define such a performance measure, we rely on

the judgment of one expert in Group Technology. He stated:

"In the group or cell concept, setting up time may be re­

duced but this clearly depends on 'family homogeneity' and

its relationship to component ordering" [13, page 343].

In Cluster Analysis, homogeneity of family k is interchange­

ably used with similarity of parts in group k. It is often

denoted as H_k. H_k has been defined in several ways. One

most commonly used is the arithmetic average of all the

similarity coefficients in group k [3, 5, 44]. Thus,

H_k = Σ_i Σ_j R_ij / (n_k(n_k - 1)) (2.14)

i ∈ g_k, j ∈ g_k, i ≠ j, and n_k (the number of parts

in group k) greater than 1


For n_k = 1, H_k = 0.

Observe that

0 ≤ H_k ≤ 1 since 0 ≤ R_ij ≤ 1.

If S_k is the total sequence-dependent set-up time for all

parts in group k, then

S_k / (1 - H_k) = a constant (2.15)

where

0 ≤ H_k < 1.

For a group with two parts, Equations (2.13) and (2.15) are

identical. In a GT system of N groups, H_k may be approxi-

mated with H where

H = Σ_{k=1}^{N} H_k / N (2.16)

Substituting H for H_k and S_t for S_k in Equation (2.15),

S_t / (1 - H) = a constant (2.17)


where S_t is the total machine set-up time in the GT system

and 0 ≤ H < 1. Let S_o be the constant of Equation (2.17),

then

S_t = S_o(1 - H) (2.18)

Since H is dimensionless, S_o must have the same dimension

as S_t. Hence, S_o is defined as the maximum possible set-up

time that can be reduced by the grouping of parts; it cor­

responds to H = 0, the worst possible grouping of parts.

Operationally, S_o may be determined by summing up all the

sequence dependent set-up times of each operation of all

parts. Thus,

S_o = Σ_{i=1}^{n} Σ_{m=1}^{m_i} S_iom

where S_iom is the maximum sequence-dependent set-up time

for the m-th operation of part i. Equation (2.18) can be

written in the following form:

S_t = S_o - S_o H (2.18a)

In this form, similarity of parts is shown as a necessary

condition for the reduction of set-up times. Thus, (2.18)

may be regarded as the fundamental equation for the design

of GT systems. It provides the theoretical basis for the


formation of production groups. From (2.18a), H may be

regarded as the proportion of set-up times that can be re­

duced by grouping similar parts (0 ≤ H < 1). Hence, to

form optimal production groups in the PCA, it may be ade­

quate to maximize H. The maximization of H has been used

as a grouping criterion in Cluster Analysis [3, 5].

Given that parts must be processed in groups, the

maximization of H is a necessary but not sufficient condi-

tion for the minimization of set-up times. A necessary and

sufficient condition requires optimal sequencing of parts

within the production groups. Thus, the maximization of

H ensures the best environment where minimum set-up time

can occur while optimal sequence of production, in a heu­

ristic sense, guarantees the corresponding minimum set-up

time. While the maximization of H is the subject of

Chapter IV, the solution to the sequencing problem is be­

yond the scope of this study.
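A minimal sketch of Equations (2.14), (2.16) and (2.18), assuming a hypothetical similarity matrix and an assumed value of S_o (neither taken from the dissertation):

```python
# Group homogeneity H_k (Eq. 2.14) as the average pairwise similarity
# within a group, the system average H (Eq. 2.16), and the resulting
# set-up time S_t = S_o(1 - H) (Eq. 2.18).

def homogeneity(group, R):
    """Average of R_ij over all ordered pairs i != j in the group (Eq. 2.14)."""
    n = len(group)
    if n <= 1:
        return 0.0                       # H_k = 0 for a single-part group
    total = sum(R[i][j] for i in group for j in group if i != j)
    return total / (n * (n - 1))

def system_homogeneity(groups, R):
    """H = (sum over groups of H_k) / N (Eq. 2.16)."""
    return sum(homogeneity(g, R) for g in groups) / len(groups)

# Hypothetical symmetric similarity matrix for four parts (indices 0-3).
R = [[0.0, 0.8, 0.2, 0.1],
     [0.8, 0.0, 0.3, 0.2],
     [0.2, 0.3, 0.0, 0.9],
     [0.1, 0.2, 0.9, 0.0]]

groups = [[0, 1], [2, 3]]
H = system_homogeneity(groups, R)
print(H)                                 # average within-group similarity, ~0.85

S_o = 100.0                              # assumed maximum reducible set-up time
print(S_o * (1 - H))                     # estimated S_t, ~15 time units
```

Grouping similar parts together (here parts 0 with 1 and 2 with 3) raises H and so lowers the estimated set-up time, which is the sense in which maximizing H serves as the PCA grouping objective.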

(iii) Relative Importance of Characteristics.

The weight parameters, W, introduced in the definition

of dissimilarity are intended to reflect the relative

importance of the characteristics of parts. Equation

(2.9) is repeated here to facilitate the discussion.


d_ij = (W D_ij) / (W D_io) (2.9)

The weighting of characteristics, though not reported in

Group Technology, has been used in most Cluster Analysis

problems [44]. The values of the elements of W may be

chosen subjectively, as done in Cluster Analysis [44], or

objectively. In GT it may be logical to relate the weight

or importance of a characteristic to the impact it will

independently have on set-up time. One way to do this is

to define the weight of the k-th characteristic, w_k, in

terms of the set-up time that may result if a set of

parts are dissimilar in k, but similar in other charac­

teristics. This definition will now be expressed formally.

(iv) Definition of Weight Parameter. Let D_ij(w_k)

denote the Dissimilarity State Vector such that parts i

and j are dissimilar in the k-th characteristic but simi-

lar in others. Expressed in terms of the four characteris­

tics shown in Figure 2.1, overall shape and size, surface

to be machined, material type and accuracy requirement.


D_ij(w_1) = (1, 0, 0, 0)^T    D_ij(w_2) = (0, 1, 0, 0)^T

D_ij(w_3) = (0, 0, 1, 0)^T    D_ij(w_4) = (0, 0, 0, 1)^T

With D_ij(w_k) defined, w_k may be given by the following ex-

pression:

w_k = T[QD_ij(w_k)] (2.19)

Since Q is a constant and D_ij(w_k) is specified, w_k will de-

pend on the choice of T, the time elements of machine set-up

task. The elements of T may be pooled over all or key ma­

chines of the production system. That is, t_k ∈ T may be esti-

mated as the average of set-up time element corresponding to

the set-up task k of all or key machines. From Figure 2.1,

Q = | 1 0 0 0 |
    | 1 0 0 0 |
    | 0 1 0 0 |
    | 0 1 1 0 |
    | 0 0 1 1 |
    | 0 0 1 1 |
    | 0 0 1 1 |


Substituting Q and D_ij(w_1) in Equation (2.19),

w_1 = T[QD_ij(w_1)] = (t_1, t_2, t_3, t_4, t_5, t_6, t_7)(1, 1, 0, 0, 0, 0, 0)^T

w_1 = t_1 + t_2 (2.20)


Similarly,

w_2 = t_3 + t_4 (2.21)

w_3 = t_4 + t_5 + t_6 + t_7 (2.22)

w_4 = t_5 + t_6 + t_7 (2.23)

A close examination of these equations and the matrix Q

shows that the relative importance of a characteristic is

measured by the sum of the times taken to perform the set­

up tasks it can affect. This appears reasonable since the

importance of a characteristic should not depend only on

the number of tasks it can affect but also the time taken

to perform them. An illustrative example follows:

Example 2.2: Determination of Relative Importance of Characteristic and Dissimilarity Coefficients

From Example 2.1,

T = (4, 8, 6, 6, 4, 3, 5)

D_12 = (1, 0, 1, 0)^T    D_13 = (0, 1, 0, 0)^T    D_23 = (1, 0, 0, 1)^T

Computing the weight of characteristics using

Equations (2.20) to (2.23),


w_1 = t_1 + t_2 = 4 + 8 = 12

w_2 = t_3 + t_4 = 6 + 6 = 12

w_3 = t_4 + t_5 + t_6 + t_7 = 6 + 4 + 3 + 5 = 18

w_4 = t_5 + t_6 + t_7 = 4 + 3 + 5 = 12

W = (w_1, w_2, w_3, w_4) = (12, 12, 18, 12).

In this particular example it is seen that the third char­

acteristic is most important; the other three are of equal

importance.

From Equation (2.10),

d_12 = (12, 12, 18, 12)(1, 0, 1, 0)^T / (12, 12, 18, 12)(1, 1, 1, 1)^T = 30/54 = .555

Similarly, d_13 = .222 and d_23 = .444. These values indicate

that parts 1 and 3 are the least dissimilar (most similar)

pair followed by parts 2 and 3; parts 1 and 2 are the most

dissimilar. These relative values may be used to determine

the grouping of parts or sequence of production at machine

centers.
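Example 2.2 can be reproduced in code: the weights follow Equation (2.19) (each w_k is the sum of the task times in column k of Q), and the coefficients follow Equation (2.10) with D_io a vector of ones.

```python
# Characteristic weights w_k = T[Q D(w_k)] (Eq. 2.19) and weighted
# dissimilarity coefficients d_ij (Eq. 2.10), reproducing Example 2.2.

T = [4, 8, 6, 6, 4, 3, 5]
Q = [[1, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0],
     [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]

def weights(T, Q):
    """w_k is the summed time of all set-up tasks characteristic k can affect."""
    u = len(Q[0])
    return [sum(t for t, row in zip(T, Q) if row[k]) for k in range(u)]

def dissimilarity_coefficient(W, D):
    """d_ij = sum(w_k * delta_ijk) / sum(w_k), Eq. (2.10) with D_io all ones."""
    return sum(w * d for w, d in zip(W, D)) / sum(W)

W = weights(T, Q)
print(W)                                           # [12, 12, 18, 12]

D12, D13, D23 = [1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 1]
print(round(dissimilarity_coefficient(W, D12), 3))  # 0.556 (30/54; .555 truncated)
print(round(dissimilarity_coefficient(W, D13), 3))  # 0.222
print(round(dissimilarity_coefficient(W, D23), 3))  # 0.444
```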


(v) Assumptions. The following is a summary

of the assumptions of the Set-Up Time - Similarity of Parts

model of Equations (2.8) and (2.17).

1. The number of groups, N, remains fixed

2. The process route of parts is known

3. The GT problem is the Exclusive Membership type

4. Similarity of parts is defined in terms of

machine set-up related characteristics (the

PCA approach)

5. Machines are grouped so as to process all the

operations in a group; all the parts use the

same set of machines

6. The time to set up individual set-up tasks

is independent of the part being processed

7. Initial machine set-up at the beginning of a

schedule period is aimed at the operations of

all parts in the group

8. All n parts are processed in each schedule

period.

The above assumptions describe the ideal conditions

in which the Set-Up Time - Similarity of Parts model applies.

With the relaxation of some of these assumptions, devia-

tions from the expected behavior of the model may be ob­

served. For instance, if assumptions 6 and 7 are relaxed,

then the equality of Equations (2.8) and (2.8a) may no


longer hold in a strict sense. In a group where most of

the parts are dissimilar, the seventh condition may be

difficult to attain. The model expressed in Equation

(2.17) may also need a modification when assumption eight

is relaxed. If some of the n parts which were grouped

initially are not processed in a schedule period, then the

value of the constant, S_o, in Equation (2.17) may be modi-

fied (see the definition of S_o).

The first through fourth assumptions merely describe

the problem of this research in the PCA approach. The

significance of the fifth assumption will be discussed

in a subsequent chapter.

2.4 Grouping Objective for the PFA

The minimization of additional machines is the group­

ing criterion in the PFA. Additional machines may be re­

quired because two or more parts processed by a machine in

the conventional case belong to different groups in the

Exclusive Membership GT. The number of additional machines

may depend on the number of groups in which individual

machine types occur and the number of each machine type available

in the conventional system. Thus, if y_m is the number of

additional machines of type m, then


y_m = d_m, if d_m > 0; 0, if d_m ≤ 0 (2.24)

where d_m is the difference between the number of groups

that require machine type m and the number available in the conventional

system, b_m. d_m is given by

d_m = Σ_{k=1}^{N} a_mk - b_m (2.25)

where

a_mk = 1, if machine type m occurs in group k; 0, otherwise.

The total number of additional machines, A, is defined as

the sum of y_m. Hence,

A = Σ_{m=1}^{M} y_m (2.26)

where M = total number of machines in the conventional

system.

In the case where the number of each machine type

is one (b_m = 1) in the conventional shop, A is known as

the number of overlapping attributes or the link between


groups in Cluster Analysis. If A equals zero, then there

is no link or overlap between the groups. The greater the

value of A, the greater the link. Very often, the measure

of link between, say, group k and the rest of the groups

is defined in terms of similarity coefficients. Thus, if

L_k is the link between group k and the rest of the groups,

in terms of similarity coefficients,

L_k = Σ_i Σ_j R_ij / (n_k(n - n_k)) (2.27)

where i ∈ g_k, j ∉ g_k, n = total number of parts, g_k = group k,

and n_k = number of parts in group k.

To explain how A and L_k may relate, consider parts

i and j where part i belongs to group 1 and part j to

group 2. Let the route of part i be defined by the vector

r_i, and r_j that of part j. For illustration let

r_i = (1, 2, 3, 4)

r_j = (2, 3, 4, 5, 6, 7)

b_m = 1 for all m

N, number of groups = 2

A(i,j) = number of additional machines due to the grouping of i and j.


By inspection of the routes, r_i and r_j,

A(i,j) = 3.

Machines 2, 3, and 4 will be duplicated. Now, R_ij, the

similarity coefficient of both parts, is defined as follows:

R_ij = λ(r_i ∩ r_j) / λ(r_i ∪ r_j) [from Equation (1.6)]

λ(r_i ∩ r_j) = 3, λ(r_i ∪ r_j) = 7 (λ here means number of machines)

R_ij = 3/7.

If parts i and j are the only parts to be grouped, then

L_1 = L_2 = R_ij = 3/7.

For n parts and N groups, the link between group k and

other groups is defined as the average similarity coef­

ficient, R_ij, for i in group k and j in groups other than

k. For the N groups, the average link, L, is approximated

as follows:

L = Σ_{k=1}^{N} L_k / N (2.28)
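The quantities of Equations (2.24)-(2.26) and the similarity link can be sketched on the two-part route example above. The grouping layout and function names below are illustrative, not from the dissertation.

```python
# Additional machines A (Eqs. 2.24-2.26) for the route example
# r_i = machines {1,2,3,4}, r_j = {2,3,4,5,6,7}, b_m = 1 for all m.

def additional_machines(groups, b):
    """A = sum over machine types m of max(d_m, 0),
    where d_m = (number of groups using m) - b_m (Eqs. 2.24-2.26)."""
    total = 0
    for m in b:
        groups_using_m = sum(1 for g in groups if any(m in route for route in g))
        total += max(groups_using_m - b[m], 0)
    return total

r_i = {1, 2, 3, 4}
r_j = {2, 3, 4, 5, 6, 7}
b = {m: 1 for m in r_i | r_j}       # one machine of each type available

# Part i alone in group 1, part j alone in group 2 (Exclusive Membership).
print(additional_machines([[r_i], [r_j]], b))  # 3: machines 2, 3, 4 duplicated

# The link via similarity, Eq. (1.6): R_ij = |r_i & r_j| / |r_i | r_j|.
R_ij = len(r_i & r_j) / len(r_i | r_j)
print(round(R_ij, 3))                          # 0.429 (= 3/7)
```

Putting both parts in one group drives A to zero, mirroring the observation in the text that minimizing the between-group link tends toward minimizing the number of additional machines.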


Though a mathematical relation between A, the number of additional machines, and L, the link between groups, is not immediately clear, the illustrative example tends to suggest that the minimization of L may be close to the minimization of A. The minimization of L has been used as an alternative grouping criterion in Cluster Analysis [3, 44].

To verify the effectiveness of using either L or A as a grouping criterion, a problem of n = 43, N = 5, and M = 16, reported by Burbidge [6, page 172], was solved. An algorithm presented in a subsequent chapter was used to minimize A and L, respectively. The resulting partitions are presented in Figures 2.2 and 2.3.

The number of additional machines was four in the direct minimization of A and nine in the minimization of L. However, the minimization of L corresponds to Burbidge's solution. As displayed in Figure 2.3, the distribution of parts in the minimization of A may not be suitable for GT production: 39 parts belong to one group while the remaining groups have one part each. This is in sharp contrast with Figure 2.2, where parts are evenly distributed among the groups. From this example, the use of L may be better than the direct use of A in problems where the range of the number of operations is very wide. In Burbidge's example, the parts in "lone" groups have one machine each while most of those in the large groups have a larger number

[Figure 2.2, a 16-machine × 43-part incidence matrix with the additional machines marked, is not legibly reproduced in this transcription.]

Figure 2.2. Grouping solution resulting from minimizing A.

[Figure 2.3, a 16-machine × 43-part incidence matrix with the additional machines marked, is not legibly reproduced in this transcription.]

Figure 2.3. Grouping solution resulting from minimizing L.


of operations. However, it must be pointed out that only similarity in process routes was considered in the grouping process. Considering other characteristics, such as volume (number of units or amount of processing time) of the individual parts, it is possible that the distribution of parts per group of Figure 2.3 may be a better alternative. The importance of other parameters in the grouping problem will be discussed in Chapter VI.

The possible relationship between H, the group homogeneity, and L, the link between groups, will be discussed in the next chapter.


CHAPTER III

CHARACTERIZATION OF GROUP TECHNOLOGY PROBLEMS AND OPTIMALITY CONDITIONS

3.1 Introduction

The data of a GT problem presented in either the PCA

or PFA format may have structures which may not be obvious

by inspection. A solution to the grouping problem usually

reveals the type of problem structure which may or may not

be desirable for GT systems. For example, if there are no

related parts in a conventional batchshop, there can be no

groups in which similar parts exist. In other problems,

groups may exist such that all parts in a group are similar

with respect to all characteristics, a desirable condition

for GT. A method of characterizing problem data may help

the GT system designer identify the type of problem being

dealt with. A proper characterization may also provide a

basis for developing optimality grouping conditions in some

cases. In this chapter an attempt is made to characterize

GT problems in terms of group homogeneity and the link be­

tween groups. Also, grouping optimality conditions for

some special cases will be presented. First, a possible

relation between the similarity of parts within groups, H,

and the link between groups, L, will be explored.


3.2 Relation Between Similarity of Parts in the Same Groups, H, and Similarity of Parts in Different Groups, L

In Chapter II, H was defined as group homogeneity for the PCA problem and L as the measure of linkage between groups in the PFA. In general, both H and L can be defined for the same problem in either the PCA or PFA. To illustrate this point, the definitions of H and L are repeated for convenience.

$$H = \frac{1}{N} \sum_{k=1}^{N} H_k \qquad (3.1)$$

where

$$H_k = \frac{\sum_{i}\sum_{j} R_{ij}}{n_k (n_k - 1)}, \quad i, j \in g_k,\; i \neq j \qquad (3.2)$$

and H_k = 0 for n_k = 1.

$$L = \frac{1}{N} \sum_{k=1}^{N} L_k \qquad (3.3)$$

where

$$L_k = \frac{\sum_{i}\sum_{j} R_{ij}}{n_k (n - n_k)}, \quad i \in g_k,\; j \notin g_k \qquad (3.4)$$


Notice that both H and L are defined in terms of similarity coefficients, R_ij's. Since any GT problem, in either the PCA or PFA format, can be transformed into a matrix of similarity coefficients, both H and L can be computed for a given problem. For every problem of n parts there are n(n−1) similarity coefficients (excluding cases with i = j). Let the set of R_ij terms in the definition of H be R(H) and those in L be R(L). It will be illustrated, with an example, that R(H) and R(L) are mutually exclusive sets.

Example 3.1:

Consider the grouping problem of five parts (n = 5) shown schematically in Figure 3.1a; in Figure 3.1b, they are presented in Opitz' notation. Using the relations

$$R_{ij} = 1 - d_{ij} \qquad \text{[see Equation (2.12)]}$$

$$d_{ij} = \frac{\sum_{k=1}^{7} w_k x_{ijk}}{\sum_{k=1}^{7} w_k x_{i0k}} \qquad \text{[see Equation (2.10)]}$$

and setting w_k = 1 for all the seven attributes in Figure 3.1b, the similarity coefficients for all the pairs of parts are computed as follows:

[Figure 3.1a, dimensioned sketches of the five parts, is not reproduced in this transcription.]

Figure 3.1a. Parts for Example 3.1.

[Figure 3.1(b) is a table of the seven Opitz-code attribute values for each part. The vertical attribute headings are illegible in this transcription; the digit values, recovered from the legible columns and the similarity computations below, are:]

              Attribute
Part    1   2   3   4   5   6   7
  1     1   1   4   0   2   2   3
  2     1   1   4   3   2   2   3
  3     1   1   4   0   2   2   3
  4     2   5   0   3   0   3   2
  5     2   7   0   3   0   3   2

Figure 3.1(b). The parts in Figure 3.1(a) shown in Opitz' code number system.

 i\j     1     2     3     4     5
  1      -    .86   1.0   0.0   0.0
  2     .86    -    .86   .14   .14
  3     1.0   .86    -    0.0   0.0
  4     0.0   .14   0.0    -    .86
  5     0.0   .14   0.0   .86    -

Figure 3.1(c). Similarity coefficients of the parts shown in Figure 3.1(a).


From the PCM matrix of the same figure, the dissimilarity states for parts 1 and 2 are:

x_121 = 0, x_122 = 0, x_123 = 0, x_124 = 1, x_125 = 0, x_126 = 0, x_127 = 0.

By definition,

x_101 = x_102 = x_103 = ... = x_107 = 1.

Hence,

$$d_{12} = \frac{(1\times 0)+(1\times 0)+(1\times 0)+(1\times 1)+(1\times 0)+(1\times 0)+(1\times 0)}{(1\times 1)+(1\times 1)+(1\times 1)+(1\times 1)+(1\times 1)+(1\times 1)+(1\times 1)} = \frac{1}{7} = .14$$

$$R_{12} = 1 - d_{12} = 1 - .14 = .86$$

Repeating the same for all pairs of parts, the set of similarity coefficients, R_ij, is computed and presented in Figure 3.1c.

Let the number of groups, N, be 3 and a grouping, chosen arbitrarily, be

G = {(1, 3); (2); (4, 5)}

$$H_1 = \frac{R_{13} + R_{31}}{2} = \frac{1.0 + 1.0}{2} = 1.0$$

$$H_2 = 0.0 \quad (n_2 = 1)$$

$$H_3 = \frac{R_{45} + R_{54}}{2} = \frac{.86 + .86}{2} = .86$$

$$H = \frac{H_1 + H_2 + H_3}{3} = \frac{1.0 + 0.0 + .86}{3} = .62$$

$$R(H) = \{R_{13}, R_{31}, R_{45}, R_{54}\}$$

$$L_1 = \frac{R_{12} + R_{14} + R_{15} + R_{32} + R_{34} + R_{35}}{6} = \frac{.86 + 0.0 + 0.0 + .86 + 0.0 + 0.0}{6} = .287$$

$$L_2 = \frac{R_{21} + R_{23} + R_{24} + R_{25}}{4} = \frac{.86 + .86 + .14 + .14}{4} = .500$$

$$L_3 = \frac{R_{41} + R_{42} + R_{43} + R_{51} + R_{52} + R_{53}}{6} = \frac{0.0 + .14 + 0.0 + 0.0 + .14 + 0.0}{6} = .047$$

$$L = \frac{.287 + .500 + .047}{3} = .278$$

$$R(L) = \{R_{12}, R_{14}, R_{15}, R_{32}, R_{34}, R_{35}, R_{21}, R_{23}, R_{24}, R_{25}, R_{41}, R_{42}, R_{43}, R_{51}, R_{52}, R_{53}\}$$

$$R(H) = \{R_{13}, R_{31}, R_{45}, R_{54}\}$$

By inspection, all the terms in R(H) are different from those in R(L). Hence,

$$R(H) \cap R(L) = \phi \qquad (3.5)$$

where φ is an empty set.
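Example 3.1 can be reproduced numerically. The sketch below is mine, not the dissertation's; the attribute codes are those recovered for Figure 3.1(b), and the results match the values computed in the text:

```python
from itertools import product

# Opitz-style attribute codes for the five parts (Figure 3.1b).
codes = {
    1: (1, 1, 4, 0, 2, 2, 3),
    2: (1, 1, 4, 3, 2, 2, 3),
    3: (1, 1, 4, 0, 2, 2, 3),
    4: (2, 5, 0, 3, 0, 3, 2),
    5: (2, 7, 0, 3, 0, 3, 2),
}
parts = sorted(codes)
n = len(parts)

def R(i, j):
    """R_ij = 1 - d_ij with unit weights (Equations 2.10 and 2.12)."""
    diff = sum(a != b for a, b in zip(codes[i], codes[j]))
    return 1 - diff / len(codes[i])

G = [(1, 3), (2,), (4, 5)]          # the arbitrary grouping of Example 3.1

def H_k(g):
    """Group homogeneity, Equation (3.2)."""
    nk = len(g)
    if nk == 1:
        return 0.0
    return sum(R(i, j) for i, j in product(g, g) if i != j) / (nk * (nk - 1))

def L_k(g):
    """Link between group k and the rest, Equation (3.4)."""
    nk = len(g)
    outside = [p for p in parts if p not in g]
    return sum(R(i, j) for i in g for j in outside) / (nk * (n - nk))

H = sum(H_k(g) for g in G) / len(G)
L = sum(L_k(g) for g in G) / len(G)
print(round(H, 2), round(L, 3))     # 0.62 0.278
```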


Equation (3.5) is true for any Exclusive Membership problem defined in terms of similarity coefficients.

A close examination shows that the number of terms in R(H) and those in R(L) add up to n(n−1), the total number of similarity coefficients. Hence,

$$\lambda R(H) + \lambda R(L) = n(n-1) \qquad (3.6)$$

where λR(H) means the number of terms in R(H) and λR(L) the number of terms in R(L). Now, the sum of all similarity coefficients, R_ij, for any problem is a constant. Thus,

$$2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} R_{ij} = C \qquad (3.7)$$

where C is the constant.

Using Equation (3.7), it is shown in the following that, in terms of R_ij's, H and L may relate. Denote the sum of all R_ij terms in H_k as h_k, and those in the corresponding L_k as ℓ_k. Then,

$$h_k = \sum_{i}\sum_{j} R_{ij}; \quad i, j \in g_k,\; i \neq j \qquad (3.8)$$

and

$$\ell_k = \sum_{i}\sum_{j} R_{ij}; \quad i \in g_k,\; j \notin g_k \qquad (3.9)$$


Also, if h is the sum of all R_ij terms in R(H) and ℓ the sum of those in R(L), then

$$h = \sum_{k=1}^{N} h_k \qquad (3.10)$$

and

$$\ell = \sum_{k=1}^{N} \ell_k \qquad (3.11)$$

Combining Equations (3.10) and (3.11) results in

$$h + \ell = \sum_{k=1}^{N} (h_k + \ell_k) \qquad (3.12)$$

Equation (3.5) shows that no term in R(H) is in R(L), and Equation (3.6) shows that the sum of the number of terms in R(H) and R(L) equals n(n−1), the total number of elements in the matrix R. Since the sum of all R_ij ∈ R is a constant, the expression in Equation (3.12) is also a constant. Hence,

$$h + \ell = \sum_{k=1}^{N} (h_k + \ell_k) = 2\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} R_{ij} = C \qquad (3.13)$$

h + ℓ = C

Equations (3.5) and (3.13) show that h and ℓ are complementary terms. As h increases, ℓ decreases, and vice versa.


Thus, h and ℓ have a negative correlation. Notice that H_k is the average of h_k and L_k that of ℓ_k; similarly, H is the average of the H_k and L that of the L_k. Although the relation established for h and ℓ may not be generally true for H and L, it appears reasonable to expect that H and L may also have a negative correlation for some problems. Such a relation and some of the properties of H and L make them suitable quantities for the characterization of GT problems. Recall that H and L have the following bounds:

0 ≤ H ≤ 1 and 0 ≤ L ≤ 1.

From these properties, low or high values of H or L can be easily recognized. The relative values of H and L may be good preliminary indicators of whether a GT system may or may not be economically viable. For instance, if in the PFA L is zero, then nonoverlapping groups exist, and the number of additional machines in such a system will be zero. If, in addition to L = 0, H is high, then there is an indication that the parts in a group use the same set of machines. The combination of low L and high H is a desirable condition in the PFA. A similar case can be argued for the PCA situation. In general, a meaningful characterization of GT problems may be accomplished using the pair H and L.


3.3 Characterization of GT Problem Data

In terms of the type of optimal groupings that can be attained in a GT problem, four main types of GT data can be identified. These are (1) Multiple Population Data, (2) Universal Population Data, (3) Natural Population Data, and (4) Null-Relation Population Data.

In the Multiple Population Data, N groups {g_1, g_2, ..., g_k, ..., g_N} exist such that each g_k has a unique set of attributes. In terms of the link between groups, L, this means that L = L_1 = L_2 = ... = L_k = ... = L_N = 0. Under the multiple population characterization, two types can be identified. In one type, all the parts in a group are similar with respect to all the attributes. We call this Type I Multiple Population. In terms of H and L, the Type I Multiple Population is characterized by

H = 1 and L = 0.

In a second type, Type II Multiple Population, parts in a group are similar in some attributes but dissimilar in others. In terms of H and L,

0 < H < 1 and L = 0.

Examples of Types I and II are shown in Figures 3.2(a) and 3.2(b), respectively.


[Figure 3.2 shows two 8-machine × 10-part incidence matrices in block-diagonal form: (a) Type I, in which every part in a group uses exactly the same machines (H = 1, L = 0), and (b) Type II, in which parts in a group share machines only partially (0 < H < 1, L = 0). The matrices are not legibly reproduced in this transcription.]

Figure 3.2. Multiple Population Problem.


In the Universal Population problem, all parts are similar with respect to all characteristics. Hence, the values of H and L are equal to one (H = 1, L = 1) for all possible groupings. An example of this type of problem is presented in Figure 3.3.

A Natural Population problem is one in which, for all possible groupings, at least one characteristic (attribute) of parts will appear in more than one group. Two types may be identified. In Type I Natural, H = 1 and 0 < L < 1. This is a situation where an optimal solution exists such that all parts in a group are similar in all characteristics, with some characteristics occurring in more than one group. Type II Natural is different from Type I in that 0 < H < 1 and 0 < L < 1. An example of the Natural Population data is shown in Figure 3.4.

In the Null-Relation problem, no two parts are similar in any respect. In terms of group homogeneity and link between groups, H = 0 and L = 0 for all possible groupings. Figure 3.5 is an example of the Null-Relation problem.

In summary, GT problem data may be characterized as follows:

1. MULTIPLE TYPE I POPULATION

Optimal groupings exist such that H = 1 and L = 0

[Figures 3.3, 3.4 and 3.5 show small machine × part incidence matrices illustrating these cases; the matrices are not legibly reproduced in this transcription.]

Figure 3.3. Universal Population Problem: H = 1, L = 1.

Figure 3.4. Natural Population Problem: 0 < H < 1, 0 < L < 1.

Figure 3.5. Null-Relation Population Problem: H = 0, L = 0.


2. MULTIPLE TYPE II POPULATION DATA

Optimal groupings exist such that 0 < H < 1 and L = 0

3. UNIVERSAL POPULATION

For all possible groupings, H = 1 and L = 1

4. TYPE I NATURAL POPULATION DATA

Optimal groupings exist such that H = 1 and 0 < L < 1

5. TYPE II NATURAL POPULATION DATA

Optimal solutions exist such that 0 < H < 1 and 0 < L < 1

6. NULL-RELATION POPULATION DATA

For all possible groupings, H = 0 and L = 0.
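The six classes above can be expressed as a small decision rule over the (H, L) pair of an optimal grouping. The mapping below is my own paraphrase of the summary list, not part of the dissertation:

```python
def characterize(H, L, tol=1e-9):
    """Map the (H, L) pair of an optimal grouping to the problem
    classes of Section 3.3 (names follow the summary list)."""
    is_one = lambda x: abs(x - 1) < tol
    is_zero = lambda x: abs(x) < tol
    if is_zero(H) and is_zero(L):
        return "Null-Relation Population"
    if is_one(H) and is_one(L):
        return "Universal Population"
    if is_zero(L):
        # No attribute crosses group boundaries.
        return "Multiple Type I" if is_one(H) else "Multiple Type II"
    return "Natural Type I" if is_one(H) else "Natural Type II"

print(characterize(1.0, 0.0))     # Multiple Type I
print(characterize(0.62, 0.278))  # Natural Type II
```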

Multiple population problems are likely to occur in large batchshops that manufacture several products. In many cases, parts that are given different names because of the functions they perform may, indeed, be identical with respect to their characteristics. In one case study [22], for instance, many bushing-like components of the same size were given such names as pulley spacer, packer, relay unit, spacing sleeve, etc. The Type I Multiple may be ideal for GT production in either the PCA or PFA approaches. Since it is characterized by H = 1, a necessary condition for the reduction of sequence-dependent set-up time to zero is satisfied [see Equation (2.18)]. In general, if a grouping solution


is such that H = 1, L = 0 and the average number of parts per group is fairly large, then it may be possible to set up a mass production system for each group.

Whether the Type II Multiple is suitable for GT systems or not may depend on the value of H. Since L = 0, in the PFA the number of additional machines will be zero. However, the arrangement of machines and tooling to reduce set-up time, as claimed by PFA proponents [6, 7], may depend on how many parts in a group use the same set of machines and share the same machine set-up related characteristics. A high value of H indicates that parts in a group use the same set of machines; the converse, for a low value of H, is also true. Hence, in order that Type II Multiple be feasible for the PFA approaches, H has to be of high value. The same argument may hold true for the PCA.

The Universal Population Data may not describe a batchshop problem. No batchshop problem may have all parts identical; that condition describes a mass production system. Hence, if an optimal grouping is such that H = 1 and L = 1, then the situation may indicate a mass production system.

Notice, in the above discussion, that the relative values of H and L are important in a preliminary determination of how good GT systems may be. Thus, H = 1 and L = 0


is ideal, as in the Type I Multiple problem. But H = 1 and L = 1 is an indication that GT may not be a good alternative. The inference can be drawn that H = 0 and L = 0, as in the Null-Relation problem, is unsuitable for GT production. The same argument can be extended to the Natural Population case.

In the Type I Natural, which is characterized by H = 1, the closer L approaches zero, the closer the problem approaches the ideal situation (the Type I Multiple). On the other hand, as L approaches 1, the Type I Natural becomes the Universal problem, which may not be suitable for GT. In general, the smaller the value of L and the larger the value of H, the better a batchshop problem may be for GT production.

3.4 Optimality Conditions for the Grouping Problem

The above discussions suggest that, for some problems, optimality conditions can be established. For instance, in the Multiple Population problem, a sufficient and necessary condition for an optimal solution in the PFA is L = 0 or A = 0 (A is the number of additional machines). This is obvious since, by definition, the lower bound of L is zero. Also, if L = 0, then no attribute occurs in more than one group. In the PFA, machines are the attributes. Hence, A = 0 corresponds to a solution with L = 0. Conversely, if


L > 0, then at least one machine occurs in more than one group. In a case where the number of each machine type is one in the conventional problem, A will be greater than zero.

Similarly, H = 1 is the upper bound in the PCA. Since the grouping objective is the maximization of H and the Type I Multiple Population is characterized by H = 1, a sufficient and necessary condition for an optimal solution is H = 1. These optimality conditions will now be formally stated without further proofs.

Condition 3.1

In the Multiple Population problem, a sufficient and necessary condition for an optimal solution in the PFA is L = 0.

Condition 3.2

In any grouping problem presented in the PFA format, if L = 0 then A, the number of additional machines, will also be zero. On the other hand, if L > 0, then A is also greater than zero, provided the number of each machine type is one.

Condition 3.3

In the Type I Multiple Population problem, a sufficient and necessary condition for an optimal solution in the PCA is H = 1.


The Type I Natural Population problem is also characterized

by H = 1. Hence, Condition 3.3 also applies to the Type I

Natural Population problem.

An algorithm that can maximize H or minimize L is

presented in the next chapter.


CHAPTER IV

GROUPING ALGORITHM

4.1 Introduction

The grouping problem defined in Section 1.2 of Chapter I may be an explosive combinatorial problem. This may be judged from the total number of all possible solutions, T(n,N), for n parts and N groups. T(n,N) is given by the following expression [3]:

$$T(n,N) = \frac{1}{N!} \sum_{i=0}^{N} (-1)^{N-i} \binom{N}{i} i^n \qquad (4.1)$$

For even a small problem of n = 25 and N = 10, T(25, 10) = 1,203,163,392,175,387,500. Thus, traditional enumerative

techniques may be intractable for practical problems. For instance, a dynamic programming approach to a similar problem in plant taxonomy by Jensen [26] was able to solve only very small problems. Some authors in Cluster Analysis [21, 44] suggest that no efficient techniques may exist for problems of this type. In this chapter an iterative procedure, analogous to the classical Gradient Optimization approach, will be presented.
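Equation (4.1) is the Stirling number of the second kind, so the figure quoted for T(25, 10) can be verified directly. The sketch below is mine, not part of the dissertation:

```python
from math import comb, factorial

def T(n, N):
    """Number of ways to partition n parts into N non-empty groups
    (Stirling number of the second kind) -- Equation (4.1)."""
    return sum((-1) ** (N - i) * comb(N, i) * i ** n
               for i in range(N + 1)) // factorial(N)

print(T(25, 10))   # 1203163392175387500
```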

Iterative techniques have been reported for the Statistical [44] and Non-Hierarchical [3] problems in Cluster Analysis. Of particular interest to this study is one by


Rubin [41]. Although Rubin's method was intended for the Hierarchical problem, the grouping objective function was formulated in terms of similarity coefficients. Most of the reported techniques have features for specifying initial solutions, a search technique, and a stopping rule [3, 41]. Rubin reported that several local optima exist in the solution space of the grouping problem and that the effectiveness of the iterative approach may depend on the initial solution. Thus, in the algorithm presented here, a heuristic designed to obtain "good" initial solutions is used. The gradient method of solution search is explored, and a feature for checking the quality of local optima is presented. The heuristic will be discussed first.

4.2 A Heuristic for Initial Solution

This heuristic works in two stages. In the first stage, N indicator groups are chosen. An indicator group is one with only one representative part. In the second stage, the remaining parts are assigned to the indicator groups. Both stages make use of a concept called the Preference Index as a decision rule. This concept, which is unique to this study, will now be discussed.

For now, a Preference Index will be defined for the function H. A Preference Index, γ_ij, may be defined as the net contribution which part i will make to H when part i is


assigned to group j. To illustrate, consider a grouping problem of five parts (n = 5) and two groups (N = 2). Excluding similarity coefficients of the type R_ii, each part has four similarity coefficients. Assume that, in the process of group formation, the set of parts in group one (g_1) is {1, 3} and that in group two (g_2) is {4, 5}, while part two is a candidate part. The current values of H_1, H_2 and H are as follows:

$$H_1 = \frac{R_{13} + R_{31}}{2}, \quad H_2 = \frac{R_{45} + R_{54}}{2}$$

and

$$H = \frac{H_1 + H_2}{2} = \frac{(R_{13} + R_{31}) + (R_{45} + R_{54})}{4}$$

Notice that the similarity coefficients of part two, R_21, R_23, R_24 and R_25, are currently not involved in the computation of H. The assignment of part two to g_1 changes H_1 to

$$H_1 = \frac{R_{13} + R_{31} + R_{12} + R_{21} + R_{23} + R_{32}}{6}$$

while H_2 remains the same. That is, R_24 and R_25 do not contribute to H, the average within-group similarity. This means that the assignment of part two to g_1 contributes the average of R_21 and R_23 (R_21 = R_12, R_23 = R_32) to H while the average of R_24 and R_25 is lost. Similarly, the assignment of part two to g_2 contributes the average


of R_24 and R_25 to H while losing the average of R_21 and R_23. In general, the assignment of a part to a group results in a contribution and a loss of similarity coefficients to the average within-group similarity, H. It is the net contribution resulting from the assignment of a part to a group that is referred to as the Preference Index. Using symbols, the Preference Index of part i to group j, γ_ij, is given by

$$\gamma_{ij} = \frac{1}{n_j} \sum_{k=1}^{n_j} R_{ik} - \frac{1}{n - n_j - 1} \sum_{\ell=1}^{n-n_j-1} R_{i\ell} \qquad (4.2)$$

where k ∈ g_j and ℓ ∉ g_j; n_j is the number of parts in group j excluding part i.

Similarly, a Preference Index can be defined between two parts. Denote the Preference Index between parts i and a as ψ_ia. Then,

$$\psi_{ia} = R_{ia} - \frac{\sum_{k=1}^{n-2} R_{ik} + \sum_{k=1}^{n-2} R_{ak}}{2(n-2)}, \quad k \neq i, a \qquad (4.3)$$

ψ_ia is the net contribution parts i and a will make if assigned to the same group.

For N groups, N Preference Indices may be computed for each part. The assignment of parts to groups on the basis of maximum γ_ij appears intuitively appealing in a procedure to maximize H. This is apparent in some of the


properties inherent in the definition of γ_ij or ψ_ia. For instance, if γ_ij < 0, then i may be assigned to a group other than group j; it is an indication of a negative contribution. If γ_ij = 0 for all j, then part i may be assigned to any group. For γ_ij > 0, the contribution is positive. The same properties may hold true for ψ_ia, the Preference Index between two parts.

To group parts using the Preference Index concept, two initial indicator groups π_1 and π_2 are formed by assigning part i to π_1 and k to π_2 such that R_ik, the similarity coefficient, is minimum. (π_i denotes indicator group i.) If more indicator groups are desired, a composite Preference Index, θ_i, is computed for all candidate parts:

$$\theta_i = \sum_{k \in \pi} \psi_{ik} \qquad (4.4)$$

where π is the current set of parts in indicator groups and i is a candidate part. The part with min θ_i is assigned to the next indicator group. The process is repeated until N indicator groups are formed. The computation of γ_ij, ψ_ik and θ_i is illustrated in Example 4.1.

The next stage involves the assignment of the remaining candidate parts to the formed indicator groups in an iterative manner. It is done on the basis of maximum γ_ij.
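Equations (4.2) and (4.3) can be checked against the numbers of Example 4.1 below. The code is an illustrative sketch of mine, using the rounded similarity matrix of Figure 3.1c:

```python
# Similarity matrix of Figure 3.1c (entries rounded as in the text).
R = {(1, 2): .86, (1, 3): 1.0, (1, 4): 0.0, (1, 5): 0.0,
     (2, 3): .86, (2, 4): .14, (2, 5): .14,
     (3, 4): 0.0, (3, 5): 0.0, (4, 5): .86}

def r(i, j):
    if (i, j) in R:
        return R[(i, j)]
    return R.get((j, i), 0.0)

parts = [1, 2, 3, 4, 5]
n = len(parts)

def psi(i, a):
    """Preference index between parts i and a -- Equation (4.3)."""
    others = [k for k in parts if k not in (i, a)]
    return r(i, a) - sum(r(i, k) + r(a, k) for k in others) / (2 * (n - 2))

def gamma(i, group):
    """Preference index of part i toward a group -- Equation (4.2)."""
    nj = len(group)
    outside = [k for k in parts if k != i and k not in group]
    return (sum(r(i, k) for k in group) / nj
            - sum(r(i, k) for k in outside) / (n - nj - 1))

print(round(psi(2, 1), 2), round(psi(2, 4), 2))   # 0.5 -0.31
print(round(gamma(5, [4]), 2))                    # 0.81
```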

[Figure 4.1 is a flowchart of the heuristic: input the PCM matrix and n, and compute R; for min R_ij assign i to π_1 and j to π_2; update F, Kount and the group count; while more indicator groups are needed, compute θ_i and assign the part with min θ_i to the next indicator group; then repeatedly compute γ_ij, assign the part with max γ_ij to group j, and update Kount and F; finally compute V(G_0).]

Figure 4.1. Flowchart of the Preference Index Heuristic.


Because the Preference Index concept is central to this heuristic, it will be called the Preference Index Heuristic. This heuristic is summarized in the flowchart of Figure 4.1. A detailed outline is presented below. The following notation will be used:

G_0 = Completed grouping or partition
F = Current set of candidate parts
π_j = Indicator group j
Kount = Current number of parts in groups
N′ = Current number of indicator groups
N = Specified number of groups
n = Total number of parts
g_k = Group k
V(G_0) = Grouping objective function

(i) Outline of Preference Index Heuristic

SEGMENT I: Formation of indicator groups

STEP 0 : Compute similarity indices
STEP 1 : Initialize F, N′, Kount
          (Note: all ties are broken arbitrarily)
STEP 2 : Select min R_ik and assign i to π_1 and k to π_2
STEP 3 : Update F, N′ and Kount
STEP 4 : If N′ = N, go to Step 6; else, go to Step 5


STEP 5 : Compute θ_i for all candidate parts and indicator groups. Assign i to π_{N′+1} such that θ_i is minimum. Go to Step 3

SEGMENT II: Assignment of remaining parts to groups

STEP 6 : Compute γ_ij using Equation (4.2) for all i (candidate parts) and j (indicator groups). Select max γ_ij and assign i to group j
STEP 7 : Update F, Kount
STEP 8 : If Kount = n, stop; else, go to Step 6.

An illustrative example will now be presented.

Example 4.1.

The problem discussed in Example 3.1 will be used. The similarity coefficients of Figure 3.1c are repeated here for convenience.

R_ij =

 i\j     1     2     3     4     5
  1      -    .86   1.0   0.0   0.0
  2     .86    -    .86   .14   .14
  3     1.0   .86    -    0.0   0.0
  4     0.0   .14   0.0    -    .86
  5     0.0   .14   0.0   .86    -


The problem is that of assigning the five parts of Figure 3.1c to three groups in order to maximize H; the Preference Index Heuristic is to be used.

SOLUTION PROCEDURE:

STEP 0 : Similarity indices were computed as shown in Chapter III.

STEP 1 : Initialize parameters.
Current set of candidate parts, F = {1, 2, 3, 4, 5}
Number of indicator groups, N′ = 0
Current number of parts in groups, Kount = 0
Number of parts, n = 5
Number of groups, N = 3

STEP 2 : Formation of the first two indicator groups. Select min R_ij and assign i to the first indicator group π_1 and j to the second indicator group π_2.
Min R_ij: ties among {R_14, R_15, R_34, R_35} = 0.0
Decision: Choose R_14 arbitrarily and assign 1 to π_1 and 4 to π_2.
π_1 = {1}, π_2 = {4}

STEP 3 : Update parameters.
Candidate parts, F = {2, 3, 5}
N′ = 2, Kount = 2


STEP 4 : Are there enough indicator groups? (N′ ≠ N) NO, go to Step 5

STEP 5 : Form the next indicator group using the preference index, ψ_ia, for parts i and a [see Equation (4.3)].

For candidate part i = 2:

$$\psi_{21} = R_{21} - \frac{(R_{23} + R_{24} + R_{25}) + (R_{13} + R_{14} + R_{15})}{6} = .86 - \frac{.86 + .14 + .14 + 1.0 + 0.0 + 0.0}{6} = .50$$

Similarly, ψ_24 = −.31, and

θ_2 = ψ_21 + ψ_24 = .50 − .31 = .19

For candidate part 3:
ψ_31 = .71, ψ_34 = −.48, and θ_3 = .71 − .48 = .23

For candidate part 5:
ψ_51 = −.48, ψ_54 = .81, and θ_5 = .33

Minimum {θ_2, θ_3, θ_5} = min {.19, .23, .33} = θ_2 = .19

Decision: Assign part 2 to the next indicator group.
π_3 = {2}


STEP 3 (Repeated): F = {3, 5}, N′ = 3, Kount = 3

STEP 4 (Repeated): Are there enough indicator groups? (N′ = N) YES, go to Step 6

STEP 6 : Assign remaining parts to groups using the preference index between candidate part i and group j, γ_ij.

For candidate part 3:

$$\gamma_{31} = \frac{R_{31}}{1} - \frac{R_{32} + R_{34} + R_{35}}{3} = 1.0 - \frac{.86 + 0.0 + 0.0}{3} = .71$$

In the same manner, γ_32 = −.62 and γ_33 = .53.

For candidate part 5:
γ_51 = −.33, γ_52 = .81, γ_53 = −.15

Max (γ_31, γ_32, γ_33, γ_51, γ_52, γ_53) = max (.71, −.62, .53, −.33, .81, −.15) = γ_52 = .81

Decision: Assign part 5 to group 2.
π_2 = {4, 5}

STEP 7 : Update F and Kount.
F = {3}, Kount = 4

STEP 8 : Any more candidate parts? YES, there are, because Kount < n. Repeat Steps 6 through 8.


STEP 6 (Repeated):

γ_31 = .71, γ_32 = −.62, γ_33 = .53

Max (γ_31, γ_32, γ_33) = max (.71, −.62, .53) = γ_31 = .71

Decision: Assign part 3 to group 1.
π_1 = {1, 3}

STEP 7 (Repeated): F = {}, Kount = 5

STEP 8 (Repeated): Any more candidate parts? NO. Compute H and stop.

π_1 = (1, 3), π_2 = (4, 5), π_3 = (2)

G_0 = {(1, 3); (4, 5); (2)}

$$H_1 = \frac{R_{13} + R_{31}}{2} = \frac{1.0 + 1.0}{2} = 1.0$$

$$H_2 = \frac{R_{45} + R_{54}}{2} = \frac{.86 + .86}{2} = .86$$

$$H_3 = 0.0$$

$$H = \frac{H_1 + H_2 + H_3}{3} = \frac{1.0 + .86 + 0.0}{3} = .62$$

$$V(G_0) = H = .62$$
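The whole two-segment procedure can be sketched compactly. The implementation below is my own condensed version (ties are broken by lowest part number rather than arbitrarily); run on Example 4.1 it reproduces G_0 = {(1, 3); (4, 5); (2)} and H = .62:

```python
import itertools

# Similarity matrix of Figure 3.1c (rounded as in the text).
R = {(1, 2): .86, (1, 3): 1.0, (1, 4): 0.0, (1, 5): 0.0,
     (2, 3): .86, (2, 4): .14, (2, 5): .14,
     (3, 4): 0.0, (3, 5): 0.0, (4, 5): .86}

def r(i, j):
    return R.get((i, j), R.get((j, i), 0.0))

parts, N = [1, 2, 3, 4, 5], 3
n = len(parts)

def psi(i, a):                      # Equation (4.3)
    others = [k for k in parts if k not in (i, a)]
    return r(i, a) - sum(r(i, k) + r(a, k) for k in others) / (2 * (n - 2))

def gamma(i, group):                # Equation (4.2)
    outside = [k for k in parts if k != i and k not in group]
    return (sum(r(i, k) for k in group) / len(group)
            - sum(r(i, k) for k in outside) / (n - len(group) - 1))

# Segment I: seed the first two indicator groups with the least similar
# pair, then add further seeds by minimum composite index theta.
i, j = min(itertools.combinations(parts, 2), key=lambda p: (r(*p), p))
groups = [[i], [j]]
free = [p for p in parts if p not in (i, j)]
while len(groups) < N:
    seeds = [g[0] for g in groups]
    cand = min(free, key=lambda p: sum(psi(p, s) for s in seeds))
    groups.append([cand])
    free.remove(cand)

# Segment II: assign the remaining parts by maximum gamma.
while free:
    p, g = max(((p, g) for p in free for g in groups),
               key=lambda pg: gamma(pg[0], pg[1]))
    g.append(p)
    free.remove(p)

# Group homogeneity H, Equations (3.1)-(3.2).
H = sum(0.0 if len(g) == 1 else
        sum(r(a, b) for a, b in itertools.permutations(g, 2))
        / (len(g) * (len(g) - 1)) for g in groups) / N
print([sorted(g) for g in groups], round(H, 2))  # [[1, 3], [4, 5], [2]] 0.62
```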


4.3 Theoretical Background of Grouping Algorithm

The principle explored here is similar to the one used in the classical optimization of nonlinear functions in Operations Research. Of particular interest are the gradient methods, which have been extensively discussed by Gottfried and Weisman [19]. A brief description is presented here to clarify the discussion.

The fundamental equation for generating levels of the state variables in the gradient methods is given by the following expression:

$$Z_{m+1} = Z_m + p \, \Delta f(Z_m) \qquad (4.5)$$

where

f(Z) = Function to be optimized
Z = State variable
Z_{m+1} = Current level of Z
Z_m = Previous level of Z
m = Iteration number
p = Step size

and Δf(Z_m) = gradient of f(Z), which may be defined as follows:

$$\Delta f(Z_m) = f(Z_{m+1}) - f(Z_m) \qquad (4.6)$$


This is a difference equation which may enable the computation of Δf(Z_m) even for complex functions f(Z). Practical methods of estimating Δf(Z_m) using Equation (4.6) exist [19].

To optimize f(Z), one starts with an initial solution, [Z_0, f(Z_0)], computes Δf(Z_0), and generates the next level of the state variable, Z_1, using Equation (4.5). In essence, this is a search in a known neighborhood in the direction of maximum gradient, +Δf(Z), or minimum gradient, -Δf(Z), as desired. The optimum solution corresponds to the last level of the state variable, where Δf(Z) vanishes. The grouping algorithm will follow the same logic. The concepts of state variable, step, gradient, etc., as applied to our combinatorial problem, will be defined in the course of the discussion that follows.
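The classical search just described can be sketched in a few lines. This is a minimal illustration, not the author's code: the objective function, step size, and difference-quotient gradient estimate are our own assumptions, chosen only to show Equations (4.5) and (4.6) in action.

```python
# Gradient ascent with the next level generated by Equation (4.5),
# Z_{m+1} = Z_m + p * grad_f(Z_m), and the gradient estimated by a
# difference quotient in the spirit of Equation (4.6).

def ascend(f, z0, p=0.1, h=1e-6, tol=1e-8, max_iter=10_000):
    """Maximize f by stepping in the direction of the (estimated) gradient."""
    z = z0
    for _ in range(max_iter):
        grad = (f(z + h) - f(z)) / h      # difference estimate of df/dz
        z_next = z + p * grad             # Equation (4.5), maximization form
        if abs(f(z_next) - f(z)) < tol:   # gradient has (numerically) vanished
            return z_next
        z = z_next
    return z

# Illustrative objective: f(z) = -(z - 3)^2 has its maximum at z = 3.
z_star = ascend(lambda z: -(z - 3.0) ** 2, z0=0.0)
```

The search terminates when successive function values stop changing, which is the discrete analogue of Δf(Z) vanishing.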

A partition, G, was earlier defined as the set of groups {g_1; g_2; ...; g_j; ...; g_N}, or combination of parts. Thus, in the grouping problem, G is the State Variable. A particular combination of parts is the level of the State Variable. To illustrate, consider the solution to Example 4.1:

G_0 = {g_1; g_2; g_3} = {1, 3; 2; 4, 5}

is one level. To move this level, G_0, to another level, say G_1, a part may be transferred from one group to another:


G_1 = {1; 2; 3, 4, 5},

or two groups interchange respective parts:

G_2 = {1, 4; 2; 3, 5}.

This process of systematically changing a state will be

called a Step Operation; a formal definition of Step

Operation follows.

Definition 4.1

With respect to a known level, G_m, a Step Operation is the movement of a set of parts from a donor group, g_d, to a receiving group, g_r. There are two types:

(i) The Transfer Step involves the transfer of a set of parts, all from one group, to another group. The number of parts in the set may be restricted to one or more. The transfer step is illustrated in Figure 4.2(a).

(ii) The Interchange Step occurs when two groups mutually interchange an equal number of parts (this does not include interchanging all the parts from each group). An example is presented in Figure 4.2(b).

Notice the analogy in creating the level of a state

in the classical gradient methods and the one just described.

Even though the step operations may be different, both can

be regarded as a process of searching in the neighborhood

of a known level. This introduces the concept of neighbor­

hood which will now be given a concise definition.
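The two step operations of Definition 4.1 can be sketched as simple set manipulations. This is a minimal illustration with our own helper names and 0-indexed groups, reproducing the moves from G_0 to the levels G_1 and G_2 shown earlier.

```python
# Step Operations of Definition 4.1 on a partition represented as a
# list of part-sets.

def transfer(partition, parts, donor, receiver):
    """Transfer Step: move a set of parts from group `donor` to `receiver`."""
    new = [set(g) for g in partition]
    new[donor] -= set(parts)
    new[receiver] |= set(parts)
    return new

def interchange(partition, parts_d, parts_r, d, r):
    """Interchange Step: groups d and r swap equal-sized sets of parts."""
    assert len(parts_d) == len(parts_r)
    new = [set(g) for g in partition]
    new[d] = (new[d] - set(parts_d)) | set(parts_r)
    new[r] = (new[r] - set(parts_r)) | set(parts_d)
    return new

# G0 = {1, 3; 2; 4, 5}, the illustrative level above.
G0 = [{1, 3}, {2}, {4, 5}]
G1 = transfer(G0, {3}, donor=0, receiver=2)   # -> {1; 2; 3, 4, 5}
G2 = interchange(G0, {3}, {4}, d=0, r=2)      # -> {1, 4; 2; 3, 5}
```

Both operations leave the original level untouched and return a new partition, which is convenient when many neighboring levels must be generated from the same G_m.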


[Figure panels: (a) Transfer Step, showing the partition G_i before the transfer step and the partition after the transfer step; (b) Interchange Step, showing the partition before the interchange step and the partition after the interchange step.]

Figure 4.2. Illustration of step operation.


Definition 4.2

The Neighborhood of a known level or partition is the set of all partitions that can be created from the known level by means of a step operation. Since there are two types of step operation, a level has two types of neighborhoods. Thus, if φ denotes the neighborhood of a partition, then φ_T is the neighborhood created by the transfer step and φ_I that created by the interchange step. Observe from the illustrative levels above, G_1 and G_2, that the sets of partitions in φ_T and φ_I may be different. The significance of this will be explored later in the chapter.

The number of levels in the neighborhood of a known partition depends on the number of parts moved per step operation. For instance, consider a step operation involving the transfer of one part per step. There are N-1 possible ways of transferring each part from its original group. For n parts, the total number of transfers, K, is given by

K = n(N-1).

Thus, if λ(φ_T) represents the number of levels in the neighborhood of a partition, then

λ(φ_T) = K = n(N-1).     (4.8)


A similar derivation is possible for the number in the φ_I neighborhood. Equation (4.8) shows that the neighborhood of a partition contains a fixed number of other partitions. Hence, as in the classical gradient methods, the search for a better level of the State Variable can be restricted to one of the neighborhoods per iteration.
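Equation (4.8) can be verified directly by enumerating the single-part transfer moves of a small partition. This sketch uses our own naming and the five-part level above; it counts moves before discarding those that would leave a donor group empty.

```python
# Checking Equation (4.8): a partition of n parts into N groups has
# K = n(N-1) single-part transfer moves in its transfer neighborhood.

from itertools import product

def transfer_moves(partition):
    """Yield every (part, donor, receiver) single-part transfer."""
    N = len(partition)
    for d, r in product(range(N), range(N)):
        if d != r:
            for part in partition[d]:
                yield part, d, r

G = [{1, 3}, {2}, {4, 5}]                 # n = 5 parts, N = 3 groups
n = sum(len(g) for g in G)
N = len(G)
K = sum(1 for _ in transfer_moves(G))     # 5 * (3 - 1) = 10
```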

The term "better level" introduces a sense of "difference," or gradient, of a function. Let V(G_m) be the function to be optimized (in our grouping problem), evaluated at the level of the State Variable G_m [V(G) may be H, L or A, defined in Chapter II]. The gradient of V(G_m) will now be defined.

Definition 4.3

Let Δ_dr(j) be the measure of change of the function V(G_m) caused by transferring part j from group d to group r of the partition G_m. Thus, if G_{m+1} is the partition, or level of the State Variable, obtained after the transfer step, then Δ_dr(j) is the component gradient of V(G_m) in the φ_T neighborhood:

Δ_dr(j) = V(G_{m+1}) - V(G_m).     (4.9)

Equation (4.9) is a difference equation. Rearranging (4.9),

V(G_{m+1}) = V(G_m) + Δ_dr(j).     (4.10)


Since there are K [see Equation (4.8)] levels in the neighborhood of G_m, there will be K component gradients. Let the set of component gradients of G_m be represented by Δ^m. Hence,

Δ^m = {Δ_dr(j)}

is the Gradient Vector of V(G_m). Rewriting Equation (4.10) in terms of maxΔ^m:

V(G_{m+1}) = V(G_m) + maxΔ^m     (4.11)

for the maximization objective, or

V(G_{m+1}) = V(G_m) - maxΔ^m     (4.12)

for the minimization of V(G). Combining Equations (4.11) and (4.12),

V(G_{m+1}) = V(G_m) ± maxΔ^m.     (4.13)

The similarity between Equations (4.9) and (4.6) should be noted. Recall that Equation (4.6) is the difference equation for computing the gradient of nonlinear functions in the classical gradient methods. Of greater importance is Equation (4.13). It provides a basis for a grouping procedure to search in the direction of maximum gradient,


+maxΔ^m, or minimum gradient, -maxΔ^m. Hence, combining Equation (4.13) with the step operation, a search model similar to the one described for the classical gradient techniques may be fabricated.

One characteristic of gradient techniques is that only local optimum solutions can be guaranteed for nonquadratic functions. A definition of a local optimum solution for our combinatorial problem will, therefore, be given. A formal definition of the grouping algorithm and a proof of optimality will then be presented in the section that follows.

Definition 4.4

A local optimum solution occurs in the neighborhood of a level of the State Variable, say G_m, when maxΔ^m ≤ 0.

4.3.1 Grouping Procedure

Like the classical gradient methods, the grouping procedure starts with a known level of the State Variable, G_0, and the corresponding function value, V(G_0). A search for a better solution in the neighborhood of G_0 is performed. If none is better than G_0, then stop. Otherwise, replace G_0 with the best level and update V(G). With this in mind, the grouping procedure will now be given a formal definition.


Definition 4.5: Grouping Procedure

Let X be the set of all solution points, where a solution point is the value of the function V(G_m) evaluated at the m-th level of the State Variable, G. The grouping objective is the maximization of V(G). Let X_P be a subset of X where the points in X_P form a sequence x_0, x_1, x_2, ..., x_m, x_{m+1}, .... Let P be the mapping that iteratively generates the subset X_P; P is defined as follows:

(i) x_0, an initial point in X_P, is specified.

(ii) x_m means V(G_m).

(iii) The sequence x_m, x_{m+1} ∈ X_P implies that x_{m+1} = x_m + maxΔ^m for all m, with maxΔ^m > 0.

(iv) If maxΔ^m ≤ 0, stop; else, continue to generate the sequence in X_P.

Thus, P: X → X_P is a point-to-set mapping.

To show that P will terminate after a finite number of iterations, it is necessary to demonstrate that X is a finite set. Earlier in this chapter, the number of all possible levels of the State Variable in the grouping problem was expressed as follows:

T(n,N) = (1/N!) Σ_{i=0}^{N} (-1)^{N-i} (N choose i) i^n


where

T(n,N) = number of levels,
n = number of parts,
N = number of groups.

From the above it is clear that T(n,N) is finite when n and N are finite. Since there is a one-to-one correspondence between a level of the State Variable, G, and the function value V(G), the number of points in X (the set of solution values) equals T(n,N). Hence X is a finite set when n and N are finite. This is stated as a lemma with no further proof.
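The sum above is the Stirling number of the second kind, and a short sketch (the helper name `T` is ours) evaluates it directly, confirming that the number of levels of the State Variable is finite for finite n and N.

```python
# T(n, N): the number of ways to partition n parts into N nonempty
# groups, computed from the finite sum quoted above.

from math import comb, factorial

def T(n, N):
    total = sum((-1) ** (N - i) * comb(N, i) * i ** n for i in range(N + 1))
    return total // factorial(N)

levels_small = T(5, 3)   # a 5-part, 3-group problem: 25 distinct partitions
levels_large = T(8, 3)   # an 8-part, 3-group problem: 966 distinct partitions
```

Even for these small sizes the count grows quickly with n, which is why the neighborhood search of P, rather than exhaustive enumeration, is of interest.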

LEMMA 4.1

X, the set of all solution values of the

problem of grouping n finite parts into N mutually

exclusive and nonempty subsets, is a finite set.

In order to show that the procedure P will converge to a local optimum in a finite number of steps, it is necessary to define some possible relations between partitions. (Partition is used interchangeably with level of the State Variable. It is a physical representation of the set of groups.) Two partitions may be identical, equivalent or distinct. For a given problem, two partitions G_i and G_j are Identical Partitions if (1) both have the same combination of parts and (2) V(G_i) = V(G_j); Equivalent Partitions if (1) G_i and G_j have different combinations of parts but (2) V(G_i) = V(G_j). G_i and G_j are Distinct Partitions when (1) both are different combinations of parts and (2) V(G_i) ≠ V(G_j).

THEOREM 4.1

Consider the grouping problem of n parts and N groups, where n and N are finite. An algorithm using the procedure P (Definition 4.5) to solve this problem will (1) terminate after a finite number of steps at (2) a local optimum.

PROOF

In the mapping P,

x_{i+1} = x_i + maxΔ^i.

Rearrangement of this equation results in

x_{i+1} - x_i = maxΔ^i > 0.     (4.14)

Thus, Equation (4.14) shows that

x_{i+1} > x_i, ∀ i.

The set of points in X_P is, therefore, a strictly ascending sequence:

x_0 < x_1 < x_2 < ... < x_i < x_{i+1} < ...     (4.15)


To prove the first part of the theorem, we assert that the partitions corresponding to the x_i ∈ X_P, namely G_0, G_1, G_2, ..., G_i, G_{i+1}, ..., are all distinct partitions. Suppose, by contradiction, that they are not. Then there exist at least two identical or equivalent partitions G_i, G_j generated by the procedure P. By the definition of identical or equivalent partitions,

x_i = x_j.

But this does not conform to the strictly ascending condition established in Equation (4.15). Thus, P can only generate distinct partitions. With only distinct partitions, no cycling can take place. Hence, the number of points in X_P cannot be greater than the number in X (X_P is a subset of X). By Lemma 4.1, X is a finite set; X_P must, therefore, be a finite set, as required by the theorem. Consequently, P must terminate after a finite number of iterations.

For the second part of the theorem, let x_t ∈ X_P be the terminal point generated by P. Then maxΔ^t ≤ 0 is the required stopping rule. But maxΔ^t ≤ 0 is, by definition, a local optimum. Hence, P must terminate at a local optimum, completing the proof of Theorem 4.1.


From Definition 4.5 and Theorem 4.1 it appears that P is a versatile procedure for solving the grouping problem. For instance, it is clear that by replacing the equation

V(G_{m+1}) = V(G_m) + maxΔ^m

with

V(G_{m+1}) = V(G_m) - maxΔ^m,

P can be used to minimize V(G). Since the only restriction in the definition of a criterion of optimization is that V(G) be defined in terms of characteristics of parts, P may therefore be used to optimize H, L or A. Also, in the definition of P, no restriction was placed on the type of step operation for generating levels of the State Variable. Recall that the transfer step generates partitions in the φ_T neighborhood and the interchange step in φ_I. Thus, P can be used to search in either neighborhood. The iterative procedures reported in other disciplines of Cluster Analysis use only the transfer step [3, 5, 41]. The interchange step is unique to this study. The interchange step was introduced because of the following preliminary observation. The "local optimum" of Theorem 4.1 turned out to be the global solution in some cases, while in others it did not. On critical examination of those that did not, one common characteristic was observed: two or more groups require


the simultaneous interchange of some of their respective members (parts). Using only the transfer step in such a situation, P converges to a local optimum different from the global.

To illustrate, consider a hypothetical problem of eight parts whose similarity coefficients are presented below; the objective is to maximize H, and the number of groups is three. Assume that, while using the transfer step in

Parts     1     2     3     4     5     6     7     8

  1       -    .67   .20    0    .67    0    .25   .25
  2      .67    -    .17   .2    .5     0    .5    .5
  3      .2    .17    -    .4    .4    .5    .4    .17
  4       0    .2    .4     -     0    .25   .5    .5
  5      .67   .5    .4     0     -    .25   .17   .2
  6       0     0    .5    .25   .25    -     0    .25
  7      .25   .5    .4    .5    .17    0     -    .2
  8      .25   .5    .17   .5    .2    .25   .2     -

the procedure P, the solution obtained at the m-th iteration is G_m = (g_1; g_2; g_3):

g_1 = (1, 5); g_2 = (2, 7); g_3 = (3, 4, 6, 8)

and H = V(G_m) = .504.


A further search in the neighborhood of G_m results in maxΔ^m = -.044 < 0. Thus, P terminates, and (G_m, V(G_m)) is the local optimum. Now, let the global solution be (G*, V(G*)), where

G* = (g*_1; g*_2; g*_3).

It turns out that g*_1 = (1, 5); g*_2 = (3, 6); g*_3 = (2, 4, 7, 8), and V(G*) = .523. The difference between G_m and G* should be noticed. Groups 2 and 3 interchange the respective sets of parts (2, 7) and (3, 6) in order to arrive at G*. The transfer step alone is inadequate here because, in the context of the problem, part 2 is highly similar to part 7, and part 3 to part 6. Also, the set of parts (2, 7) is more similar to the other members of g_3 without the set (3, 6). Hence, in order for the "local optimum" of P to coincide with the global solution most of the time, the search may be done alternately in the φ_T and φ_I neighborhoods.
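The two partitions can be checked against the similarity matrix above. In this sketch (helper names are ours), H is computed as the mean, over groups, of each group's average ordered-pair similarity; note that entries such as .17 are rounded forms of exact fractions (1/6), which is why H(G_m) evaluates to .505 here rather than the .504 quoted from exact arithmetic.

```python
# Group homogeneity H for the transfer-step local optimum G_m and the
# global solution G* of the eight-part problem.

R = {  # upper-triangle similarity coefficients; the matrix is symmetric
    (1, 2): .67, (1, 3): .20, (1, 4): 0, (1, 5): .67, (1, 6): 0, (1, 7): .25, (1, 8): .25,
    (2, 3): .17, (2, 4): .2, (2, 5): .5, (2, 6): 0, (2, 7): .5, (2, 8): .5,
    (3, 4): .4, (3, 5): .4, (3, 6): .5, (3, 7): .4, (3, 8): .17,
    (4, 5): 0, (4, 6): .25, (4, 7): .5, (4, 8): .5,
    (5, 6): .25, (5, 7): .17, (5, 8): .2,
    (6, 7): 0, (6, 8): .25,
    (7, 8): .2,
}

def sim(i, j):
    return R[(i, j)] if (i, j) in R else R[(j, i)]

def H(partition):
    def h(group):
        pairs = [(a, b) for a in group for b in group if a != b]
        return sum(sim(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
    return sum(h(g) for g in partition) / len(partition)

G_m    = [{1, 5}, {2, 7}, {3, 4, 6, 8}]   # transfer-step local optimum
G_star = [{1, 5}, {3, 6}, {2, 4, 7, 8}]   # global solution
```

Evaluating both confirms that the interchange of (2, 7) and (3, 6) raises H from about .505 to about .523, a move the transfer step cannot make in one step.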

From the foregoing discussion, it is clear that the efficiency of P may depend partly on a good initial solution. Fewer iterations will result from using a good starting solution. It was for this reason that the Preference Index Heuristic was developed. Another factor that may affect efficiency is the number of terms in the evaluation of H, L and A. For N groups, H and L have N


terms, respectively [see Equations (3.1) and (3.3) in

Chapter III]. In the following section it is shown that

only two terms need computation per function evaluation

using P to optimize H or L.

4.3.2 Evaluation of H and L

Let Δ_dr(ij...) be the component gradient of the partition G_m resulting from transferring the set of parts (ij...) from group d to group r. From Equation (4.9), if G_{m+1} is the resulting partition, then

Δ_dr(ij...) = V(G_{m+1}) - V(G_m).

In terms of group homogeneity, H,

Δ_dr(ij...) = H_new - H_old,     (4.16)

where H_new = H(G_{m+1}) and H_old = H(G_m). Now,

H = (1/N)[H_1 + H_2 + ... + H_d + H_r + ... + H_N],     (4.17)

where H_d and H_r are the similarities of the donor and receiving groups in the transfer step. Substituting H of Equation (4.17) in Equation (4.16),


Δ_dr(ij...) = (1/N)[Σ_{k≠d,r} H_k + (H_d + H_r)]_new - (1/N)[Σ_{k≠d,r} H_k + (H_d + H_r)]_old.     (4.18)

In the creation of G_{m+1} from G_m using the transfer step, the terms H_k, k ≠ d, r, were not affected; only H_d and H_r change. Hence,

[Σ_{k≠d,r} H_k]_new = [Σ_{k≠d,r} H_k]_old.     (4.19)

By substituting Equation (4.19) in Equation (4.18),

Δ_dr(ij...) = [(H_d)_new - (H_d)_old]/N + [(H_r)_new - (H_r)_old]/N.     (4.20)

Put

Y_d = [(H_d)_new - (H_d)_old]/N     (4.20a)

and

Y_r = [(H_r)_new - (H_r)_old]/N;     (4.20b)


then

Δ_dr(ij...) = Y_d + Y_r.     (4.21)

Equation (4.21) defines the component gradient of transferring a set of parts between two groups, d and r, in terms of the change in the group similarity of the respective groups. A similar argument holds for the interchange step. Thus,

Δ_dr(ij...; kl...) = Y_d + Y_r,     (4.22)

where Δ_dr(ij...; kl...) is the gradient due to the interchange of the sets of parts {ij...} and {kl...} by groups d and r, respectively. Equations (4.21) and (4.22) also hold for the minimization of the link between groups, L, since it is defined in terms of individual groups. For large numbers of groups, these equations can be used to reduce the computational requirement per function evaluation in the optimization of H and L. This is not the case with the minimization of A, since it is not defined as a sum of terms that depend on individual groups (see Section 2.4).
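Equation (4.21) can be sketched directly: only the two affected terms, H_d and H_r, are re-evaluated, not the full sum over all N groups. The helper names below are ours; the similarity matrix is the five-part R of Figure 3.1(c) used in the worked example of the next section, where transferring part 1 from group (1, 3) to group (4, 5) gives a gradient of about -.52.

```python
# Component gradient of a transfer step via Equations (4.20a)-(4.21).

def group_h(group, sim):
    """Average ordered-pair similarity of a group (0.0 for a singleton)."""
    pairs = [(a, b) for a in group for b in group if a != b]
    return sum(sim[a][b] for a, b in pairs) / len(pairs) if pairs else 0.0

def transfer_gradient(partition, parts, d, r, sim):
    """Delta_dr(ij...) = Y_d + Y_r, with Y_g = ((H_g)_new - (H_g)_old)/N."""
    N = len(partition)
    Y_d = (group_h(partition[d] - parts, sim) - group_h(partition[d], sim)) / N
    Y_r = (group_h(partition[r] | parts, sim) - group_h(partition[r], sim)) / N
    return Y_d + Y_r

sim = {  # R from Figure 3.1(c)
    1: {2: .86, 3: 1.0, 4: 0.0, 5: 0.0},
    2: {1: .86, 3: .86, 4: .14, 5: .14},
    3: {1: 1.0, 2: .86, 4: 0.0, 5: 0.0},
    4: {1: 0.0, 2: .14, 3: 0.0, 5: .86},
    5: {1: 0.0, 2: .14, 3: 0.0, 4: .86},
}

G0 = [{1, 3}, {4, 5}, {2}]
delta = transfer_gradient(G0, {1}, 0, 1, sim)   # Delta_12(1), about -.52
```

Whatever the number of groups N, each gradient evaluation here touches only the donor and receiving groups, which is the computational saving the section describes.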

In the next section the details of an algorithm based

on the procedure P are presented.


4.3.3 The Grouping Algorithm: The Gradient Technique

Because of the close analogy between the procedure P, defined in Definition 4.5, and the classical gradient methods, the grouping algorithm will be called the Gradient Technique. The basis of the Gradient Technique is P. A complete solution can be accomplished in three stages. In the first stage, the initial solution is specified

using the Preference Index Heuristic. In the second, the

transfer step is used to obtain a local optimum while, in

the third stage, the interchange step is used to check the

quality of the solution in the second stage. Improvement

is made if the solution of the second stage is of poor

quality. If desired, stages two and three are carried

through alternately until both converge to the same solu­

tion. Notice that, if preferred, stage one or stages one

and two can be used independently.

The outline of the Gradient Technique has been presented in the flowchart of Figure 4.3. The details of each labeled block of the flowchart are given in Table 4.1.

TABLE 4.1: FLOWCHART DETAIL

BLOCK 0: Specify initial solution (see details of the Preference Index Heuristic in Section 4.2).

BLOCK A: Computation of the Gradient Vector.

1. Compute Δ_dr(j...) for all (j...), d, r, using Equation (4.21).


[Flowchart. Specify the initial partition G_i and V(G_i). Loop 1 (search in the φ_T neighborhood): compute the Gradient Vector Δ^i, creating partitions with the Transfer Step; if maxΔ^i > 0, replace G_i and update the solution value V(G_{i+1}) = V(G_i) + maxΔ^i. Loop 2 (search in the φ_I neighborhood): compute Δ^i using the Interchange Step to create partitions; if maxΔ^i > 0, replace G_i and update V(G_{i+1}) = V(G_i) + maxΔ^i. Terminate when both searches converge to the same solution.]

Figure 4.3. Flowchart of the Gradient Technique.


2. Set Δ^i = {Δ_dr(j...)}.

3. Set Δ_d*r*(j*...) = maxΔ^i.

BLOCK B: Optimality Test.

Check if Δ_d*r*(j*...) ≤ 0.

BLOCK C: Update Current Best Solution.

1. Remove the set of parts (j*...) from group d*.

2. Assign (j*...) to group r* (the resulting partition is G_{i+1}).

BLOCK D: Update Current Solution Value.

V(G_{i+1}) = V(G_i) + Δ_d*r*(j*...).

BLOCK E: Computation of the Gradient Vector: the interchange step is used to create partitions.

1. Compute Δ_dr(jm...; kl...) for all d, r and sets of parts {j, m, ...} ∈ g_d, {k, l, ...} ∈ g_r.

2. Δ^i = {Δ_dr(jm...; kl...)}.

3. Δ_d*r*(j*m*...; k*l*...) = maxΔ^i.

BLOCK F: Optimality Test.

Check if Δ_d*r*(j*m*...; k*l*...) ≤ 0.

BLOCK H: Update Current Best Partition.

1. Remove the set of parts {j*m*...} from group d* and assign it to group r*.


2. Remove the set of parts {k*l*...} from group r* and assign it to group d*.

BLOCK I: Update Solution Value.

V(G_{i+1}) = V(G_i) + Δ_d*r*(j*m*...; k*l*...).

BLOCK T: Check if the searches in both neighborhoods, φ_T and φ_I, converge to the same solution.

The Gradient Algorithm will now be illustrated with an example. The same problem described in Figures 3.1(a), 3.1(b) and 3.1(c), used to illustrate the Preference Index Heuristic, will be solved. From Figure 3.1(c):

R = [R_ij]:

Parts     1     2     3     4     5

  1       -    .86   1.0   0.0   0.0
  2      .86    -    .86   .14   .14
  3      1.0   .86    -    0.0   0.0
  4      0.0   .14   0.0    -    .86
  5      0.0   .14   0.0   .86    -

As in Example 4.1, the objective is to maximize H.


SOLUTION PROCEDURE:

Let the initial solution be G_0 and the value of H corresponding to G_0 be designated V(G_0); the gradient of V(G_0) is Δ^0.

STEP 0: Specify the initial solution, G_0. From the solution of Example 4.1,

G_0 = {(1, 3); (4, 5); 2},

V(G_0) = .62 (the Preference Index solution).

STEP A: Search for a better solution in the φ_T neighborhood (the transfer step is used).

From Equation (4.21), the component gradient of transferring part 1 from group 1 to group 2 is

Δ_12(1) = Y_1 + Y_2,

where Y_1 = [(H_1)_new - (H_1)_old]/N from (4.20a)

and Y_2 = [(H_2)_new - (H_2)_old]/N from (4.20b).

(H_1)_new = 0.0, because only part 3 remains in group 1 after the transfer of part 1.

(H_1)_old = (R_13 + R_31)/2 = (1.0 + 1.0)/2 = 1.0


Y_1 = (0.0 - 1.0)/3 = -.33

(H_2)_new = (R_14 + R_41 + R_15 + R_51 + R_45 + R_54)/6
          = (0.0 + 0.0 + 0.0 + 0.0 + .86 + .86)/6 = .29

(H_2)_old = (R_45 + R_54)/2 = (.86 + .86)/2 = .86

Y_2 = (.29 - .86)/3 = -.19

Δ_12(1) = -.33 + (-.19) = -.52

Similarly, the other component gradients are as follows:

Δ_12(3) = -.52, Δ_13(1) = -.05, Δ_13(3) = -.05,
Δ_21(4) = -.50, Δ_21(5) = -.50, Δ_23(4) = -.24, Δ_23(5) = -.24,
Δ_31(2) = infeasible, Δ_32(2) = infeasible.

Δ^0 = {-.52, -.52, -.05, -.05, -.50, -.50, -.24, -.24}

maxΔ^0 = -.05


STEP B: Test for optimality of the solution obtained in the φ_T neighborhood.

Is maxΔ^0 < 0? YES, because maxΔ^0 = -.05 < 0.

Decision: The solution is locally optimal in the φ_T neighborhood; the current best solution is

G_0 = {(1, 3); (4, 5); 2}, V(G_0) = .62.

STEP E: Search for a better solution in the φ_I neighborhood (the interchange step is used), with G_0 as the current best solution.

From Equation (4.22), the component gradient resulting from groups 1 and 2 interchanging parts 1 and 4, respectively, is

Δ_12(1, 4) = Y_1 + Y_2,

where Y_1 and Y_2 are as defined in Step A.

(H_1)_new = (R_34 + R_43)/2 = (0.0 + 0.0)/2 = 0.0

(H_1)_old = (R_13 + R_31)/2 = (1.0 + 1.0)/2 = 1.0


(H_2)_new = (R_15 + R_51)/2 = (0.0 + 0.0)/2 = 0.0

(H_2)_old = (R_45 + R_54)/2 = (.86 + .86)/2 = .86

Y_2 = (0.0 - .86)/3 = -.29

Δ_12(1, 4) = -.33 + (-.29) = -.62

Similarly, the other gradient components are:

Δ_12(1, 5) = -.62, Δ_12(3, 4) = -.62, Δ_12(3, 5) = -.62,
Δ_13(1, 2) = -.05, Δ_13(3, 2) = -.05,
Δ_23(4, 2) = -.24, Δ_23(5, 2) = -.24.

Δ^0 = {-.62, -.62, -.62, -.62, -.05, -.05, -.24, -.24}

maxΔ^0 = -.05

STEP F: Test for optimality in the φ_I neighborhood.

Is maxΔ^0 < 0? YES. maxΔ^0 = -.05 < 0.0.

Decision: The solution has converged to a local optimum. The current best solution remains G_0:

G_0 = {(1, 3); (4, 5); 2}, V(G_0) = .62.


STEP T: Check if the best solutions obtained in φ_T and φ_I are the same.

Notice that the solutions obtained in Steps B and F are the same. Thus, the algorithm terminates, as indicated in the flowchart.

FINAL SOLUTION:

G_0 = {(1, 3); (4, 5); 2}

H = V(G_0) = .62.

By total enumeration, G_0 has been verified as the globally optimal partition. It should be noted that in this particular example the Preference Index Heuristic also resulted in the globally optimal solution.

In the above example H is maximized. As stated earlier, L and A can also be minimized by the Gradient Technique. The possible relation between H and L was demonstrated in Chapter III, and that between L and A (additional machines) was discussed in Chapter II. Since H and L may have a negative correlation, a starting solution for the maximization of H may also be used for the minimization of L. Thus, the Preference Index Heuristic may be used to specify the starting solution in the minimization of L or A.


4.3.4 Computational Experience

The dependence of the efficiency of the Gradient Technique on the starting solution and on the number of terms involved in function evaluation was previously mentioned. Another factor may be problem size. A large number of parts may adversely affect the efficiency of the algorithm, because the number of component gradients computed per iteration depends on the number of parts.

There is also a question of effectiveness, since P guarantees only a local optimum solution. Although the combination of the transfer and interchange steps may result in globally optimal solutions, this may depend on the number of parts moved in a step operation. To verify the effectiveness and efficiency numerically, therefore, 400 hypothetical problems were solved by the Gradient Technique and then by total enumeration. The grouping objective was the maximization of H. Both the Gradient Algorithm and the enumeration procedure were coded in FORTRAN IV and run on the Itel computer at the Texas Tech Computer Center. The problems were generated randomly in the PFA format. There were six machines for each problem, and the number of operations a part could have was uniformly distributed between two and four; each machine was equally likely to perform any one operation.


Since only small problems can be solved by enumeration, the number of parts varied from 7 to 15 and the number of groups from 2 to 4. In the Gradient Algorithm, the number of parts moved per step operation was limited to one in the transfer step and two in the interchange step. The solutions to the first twenty problems are tabulated in Table 4.2. For these twenty problems, the Gradient Technique converged to the global solution when the transfer and interchange steps were combined. However, only eight of the twenty were globally optimal when the transfer step was used alone. For the 400 problems, 95 percent were solved globally when both steps were combined, but only about 60 percent when the transfer step alone was used. These results tend to support the Gradient Algorithm as an effective approach. They also show that the interchange step is a significant feature.

Also shown in Table 4.2 are the computation times for the enumeration as compared to the algorithm. These have been plotted in Figure 4.4. From the data and the plot it appears that the Gradient Technique is relatively efficient for the range of problems solved.

In addition to the 400 small problems, larger prob­

lems ranging from 40 to 120 parts (the generation of these

problems is explained in Chapter V) were solved with the

Gradient Algorithm. Only the interchange step was used

Page 129: Annrovp - TDL

TABLE 4.2. TOTAL ENUMERATION VERSUS THE GRADIENT ALGORITHM

                     TOTAL ENUMERATION     GRADIENT ALGORITHM
                                           Transfer and           Transfer
                                           Interchange Steps      Step Only
No. of   No. of    Time      Global       Time      Solution    Time      Solution
Parts    Groups    (Sec.)    Sol. (H*)    (Sec.)    (H)         (Sec.)    (H)

  7        2        .020      .634         .010      .634        .010      .634
  7        3        .110      .702         .010      .702        .010      .702
  7        4        .140      .554         .010      .554        .010      .554
  8        2        .050      .703         .010      .703        .010      .654
  8        3        .360      .745         .010      .745        .010      .745
  8        4        .710      .733         .020      .733        .010      .729
  9        2        .100      .701         .020      .701        .010      .628
  9        3       1.21       .723         .020      .723        .010      .688
  9        4       3.48       .764         .020      .764        .010      .764
 10        2        .22       .705         .020      .705        .010      .616
 10        3       4.06       .724         .030      .724        .020      .668
 10        4      16.42       .736         .020      .736        .020      .736
 11        2        .47       .688         .030      .688        .020      .592
 11        3      13.48       .704         .030      .704        .020      .704
 12        -       1.02       .673         .030      .673        .020      .673
 12        -      44.49       .693         .050      .693        .030      .682
 13        -     144.30       .699         .060      .699        .030      .688
 14        -       4.84       .675         .040      .675        .030      .675
 15        -      10.49       .668         .060      .668        .030      .490


in order to reduce the cost of experimentation, since many problems in this range were solved, as further discussed in Chapter V. A preliminary observation indicates that it takes over three minutes to solve problems with n > 80 when the interchange step and the transfer step are combined.

The algorithm time appears to increase polynomially with the number of parts and groups, as indicated in Table 4.3 and Figure 4.5. The dependence of computation time on the number of transfer steps per iteration is one possible reason; each transfer step requires some amount of computation time. As discussed in Section 4.3, the number of transfer steps per iteration depends on the number of parts and the number of groups (see Equation 4.8). Thus, algorithm time increases with the number of parts (n), as shown in Figure 4.5(a). As N increases above n/2, some groups will have only one member; the transfer of a part from a group with only one part results in an infeasible solution. Hence, for N > n/2, the number of transfer steps per iteration (and hence computation time) decreases as N increases. Thus, algorithm time exhibits a concave curve with respect to the number of groups.

The use of the Gradient Algorithm as a tool for studying the behavior of GT systems will be illustrated in the following chapter.


TABLE 4.3. ALGORITHM TIME VERSUS NUMBER OF PARTS AND GROUPS

Number of        Algorithm Time (Seconds) for
Groups, N    n = 40   n = 60   n = 80   n = 100   n = 120
    2          .51     1.69     3.81      7.98     13.88
    6          .64     2.32     4.76     10.43     18.03
   10          .86     2.50     5.66     11.29     18.41
   14          .910    2.77     6.06     12.09     19.94
   18          .840    3.29     6.42     12.92     20.96
   22          .620    3.84     7.83     13.29     20.11
   26          .56     4.55     8.14     15.51     22.16
   30          .34     2.68     7.80     14.82     21.66
   34          .31     2.21     7.68     17.25     22.85
   38           -      1.73     7.10     16.21     28.49
   42           -      1.53     6.3      18.05     28.86
   46           -      1.32     5.16     17.78     31.07
   50           -      1.06     4.31     12.84     30.45
   54           -      0.98     3.68     12.62     53.31
   58           -       -       3.46     10.25     22.68
   62           -       -       3.00     10.25     22.64
   66           -       -       2.69      9.29     22.64
   70           -       -       2.42      7.79     20.69


[Plot: solution time in seconds versus number of parts, comparing the Enumeration and Algorithm curves.]

Figure 4.4. Solution time versus number of parts.


[Plots: (a) Algorithm Time Versus Number of Parts, with curves for N = 2, 10, 30, 50 and 70 over n = 20 to 120; (b) Algorithm Time Versus Number of Groups, over N = 10 to 70.]

Figure 4.5. Algorithm time versus number of parts and groups.

Page 134: Annrovp - TDL

^ ^ CHAPTER V

NUMBER OF GROUPS AND PRODUCTION COST

5.1 Introduction

As stated in Chapter I, the theme of this research is the development of tools to enable efficient and systematic study of the behavior of GT systems. Noting that batchshop problems are extremely complex, it is apparent that a comprehensive study of most of the factors that may impact GT systems will require a combination of several analytical tools. Such an endeavor is beyond the scope of this study. However, in this chapter, we wish to demonstrate that the Gradient Algorithm may be suitable as one of the tools to study the behavior of GT systems. In particular, the use of the Gradient Algorithm to study how set-up time and the number of additional machines may respond to changes in the number of groups will be presented.

5.2 Study Procedure

In Chapter II an inverse relation between set-up time, S_t, and group homogeneity, H, was established. Thus,

S_t = S_o(1 - H),

where 0 ≤ H ≤ 1, S_o is a constant, and S_t is the sequence-dependent component of set-up time. Thus, to study any relation between S_t and the number of groups, N, it may be adequate to observe how H varies with N.


To observe how H or A. number of additional machines

change with N, the following procedure may be desirable;

several batchshop problems collected from different manu­

facturing environments may be coded in the combined formats

of the VZk and PFA. Another alternative is to generate hy­

pothetical problems in the combined formats. For each

batchshop problem select several levels of ntimber of groups,

N = 2 , 3, 4,.... Notice that each level of N corresponds

to one constrained GT grouping problem. Thus, in each

batchshop problem, the Gradient Algorithm can be used to

solve several constrained problems first to minimize H (the

PCA approach) and then minimize A or the alternative L (in

the PFA approach). In each case (PCA, PFA), both H (which must be defined in terms of design features) and A are computed and recorded against the corresponding number of groups.

The sets of data (H, N), (A, N) in either the PCA or PFA

can then be analyzed to verify how H or A responds to changes

in N. Some examples illustrating this procedure will now be

presented.
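The procedure above can be outlined as follows; `solve_constrained` is a hypothetical stand-in for the Gradient Algorithm (the dissertation's FORTRAN IV implementation is not reproduced here), and the toy solver below merely mimics the observed shapes of H and A:

```python
def run_study(problem, n_levels, solve_constrained):
    """Record (N, H, A) triples for one batchshop problem.

    problem           : batchshop data coded in the combined PCA/PFA formats
    n_levels          : iterable of group counts, e.g. range(2, 71)
    solve_constrained : stand-in for the Gradient Algorithm; for a given N
                        it solves one constrained grouping problem and
                        returns (H, A)
    """
    records = []
    for n_groups in n_levels:
        # Each level of N defines one constrained GT grouping problem.
        h, a = solve_constrained(problem, n_groups)
        records.append((n_groups, h, a))
    return records

# Toy solver mimicking the observed behavior: H concave in N, A increasing.
def toy_solver(problem, n):
    h = min(n / 15.0, 1.0) * (1.0 - n / 200.0)
    a = 5 * (n - 1)
    return h, a

data = run_study(None, range(2, 7), toy_solver)
```

The recorded (H, N) and (A, N) pairs can then be plotted or tabulated, as done in Figures 5.1 to 5.23 and Appendix C.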

5.3 Test Problems

In order to illustrate the described procedure, eight hypothetical problems were generated. There were four levels of number of parts (60, 80, 100, 120) and two levels of variety of parts: low and high variety. Operationally,


variety of parts is defined as the proportion of parts with

distinct process routes in the PFA; in the PCA it is the

proportion of parts with distinct Opitz code numbers. Let Ω be the symbol of variety of parts. Thus, Ω = .3 was considered low variety and Ω = .6, a high variety.

An ideal situation in a conventional shop is for two

parts which are identical with respect to design features

to have the same process route. If this were so, then a

solution to the grouping problem in either the PCA or PFA

approaches would have resulted in the same GT system for a

given problem. However, both Eckert [11] and Purcheck [37]

independently observed that this may not be the case; parts

with similar design features may be assigned different pro­

cess routes in large conventional shops. Thus, in the

generated problems, even though two parts may be identical

in Opitz code number, they may or may not be assigned the

same process route.

The design features and process routes of parts were

generated to reflect the complexity of parts in a general

purpose shop. Past research in component statistics [16,

28] shows high frequency for simple features as compared

to complex ones. In the Opitz notation, this means more

lower valued numbers may appear on the digits of the code

number of parts. A frequency count of the numbers on the digits of well over a thousand code numbers of parts in two case studies shows the following empirical distributions [22, 34].


    Numbers Appearing on the
    Digits of Code Number of Parts    Relative Frequency

    0                                 .45
    1                                 .20
    2, 3                              .20
    4, 5                              .11
    6, 7, 8, 9                        .04

Similarly, the following distribution was observed for number of operations per part from case studies [37].

    Number of Operations              Relative Frequency

    2, 3, 4                           .50
    5, 6, 7                           .35
    8, 9, 10, 11, 12, 13, 14, 15      .15

There were 40 machine types for each problem. The number of each machine type was maintained at 1 in one case and

uniformly distributed between 1 and 3 in another case.

Based on the above distributions, the hypothetical problems

were generated in the combined formats of the PFA and PCA.
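A sketch of such a generator (our own illustration, not the dissertation's FORTRAN code) assuming the two empirical distributions above; the .15 frequency for 8 to 15 operations is inferred so that the frequencies sum to one, and values within a frequency class are drawn uniformly, which is also an assumption:

```python
import random

# Empirical distribution of digit values in Opitz code numbers [22, 34].
DIGIT_CLASSES = [(0,), (1,), (2, 3), (4, 5), (6, 7, 8, 9)]
DIGIT_FREQS = [0.45, 0.20, 0.20, 0.11, 0.04]

# Operations per part [37]; the last frequency is inferred (sums to one).
OPS_CLASSES = [(2, 3, 4), (5, 6, 7), tuple(range(8, 16))]
OPS_FREQS = [0.50, 0.35, 0.15]

def generate_part(rng, n_digits=5, n_machine_types=40):
    """One hypothetical rotational part: an Opitz-style code and a route."""
    # Each code digit: draw a frequency class, then a value within it
    # (uniform within the class, an assumption).
    code = [rng.choice(rng.choices(DIGIT_CLASSES, DIGIT_FREQS)[0])
            for _ in range(n_digits)]
    n_ops = rng.choice(rng.choices(OPS_CLASSES, OPS_FREQS)[0])
    # A process route is a sequence of the 40 machine types in the study.
    route = [rng.randrange(1, n_machine_types + 1) for _ in range(n_ops)]
    return code, route

rng = random.Random(1981)
parts = [generate_part(rng) for _ in range(60)]  # e.g. Problem 1, n = 60
```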

The parts studied were restricted to rotational parts

because they constitute the majority in batchshops (70 percent to 80 percent of parts) [16, 22, 28]. In addition to

the hypothetical problems, three case study problems were

taken from the literature [22, 34]. A summary of all

problems is presented in Table 5.1. The experiment was

run on the Itel computer at the Texas Tech Computer Center;

the Gradient Algorithm was coded in FORTRAN IV.


TABLE 5.1. SUMMARY OF TEST PROBLEMS

    Problem   Number of   Variety of   Approach    Comments
              Parts (n)   Parts (Ω)

       1         60          .3        PCA, PFA    Hypothetical
       2         60          .6        PCA, PFA    Hypothetical
       3         80          .3        PCA, PFA    Hypothetical
       4         80          .6        PCA, PFA    Hypothetical
       5        100          .3        PCA, PFA    Hypothetical
       6        100          .6        PCA, PFA    Hypothetical
       7        120          .3        PCA, PFA    Hypothetical
       8        120          .6        PCA, PFA    Hypothetical
       9         43         1.0        PFA         Burbidge [6]
      10         82          .62       PFA         Purcheck [37]
      11         86          .53       PCA         Harworth [22]


The sets of data (H,N), (A,N) for each problem have

been plotted in Figures 5.1 to 5.19 and the values have

been tabulated and presented in Appendix C.

5.4 Discussion and Presentation of Results

The observed results will be discussed in two parts.

The first part concerns the relation between H, group

homogeneity, and N, number of groups; the second will deal

with the relation between A and N.

(i) Relation Between H and N:

The graphs of Figures 5.1 to 5.9 show a concave

relation between H and N in the PCA approach. This obser­

vation, which is true for all the problems solved, may be

explained as follows.

The concave structure tends to suggest the existence

of a specific number of families of similar parts. We represent this number by N*. An optimal solution to the

grouping problem entails the assignment of these families

to respective groups. However, if a constrained problem

is such that N, the number of groups, is less than N*, then

the members of a group may be a mixture of parts from more

than one family. Thus, group similarity, H, may become

relatively small. As N approaches N*, more families get

assigned to individual groups. In effect, H is improved.

At N = N* each group contains just one family of similar

parts and H reaches a peak value.


Figure 5.1. Group homogeneity versus number of groups; n = 60, Ω = .6.

Figure 5.2. Group homogeneity versus number of groups; n = 60, Ω = .3.


Figure 5.3. Group homogeneity versus number of groups; n = 80, Ω = .6.

Figure 5.4. Group homogeneity versus number of groups; n = 80, Ω = .3.


Figure 5.5. Group homogeneity versus number of groups; n = 100, Ω = .6.

Figure 5.6. Group homogeneity versus number of groups; n = 100, Ω = .3.


Figure 5.7. Group homogeneity versus number of groups; n = 120, Ω = .6.

Figure 5.8. Group homogeneity versus number of groups; n = 120, Ω = .3.


Figure 5.9. Group homogeneity versus number of groups; n = 86, Ω = .53 (Harworth's problem).


At N > N*, additional groups may be created only by

taking members from the families. In doing so, only the

members with less relation to their respective families

may be removed. The group homogeneity of the established

families improves as a result, but that of the newly created

groups may be poor. The net effect may be a slow reduction

of H as N slightly increases beyond N*. This may be respon­

sible for the flattened appearances of the peak portions of

the concave curve (Figures 5.1 to 5.9). The average number

of parts per group (n/N) may also be responsible for the decrease of H for N > N*. As N increases in this range, some groups may have only one member. By definition, H_j = 0 when n_j = 1, where n_j is the size of group j. Hence, as N approaches n (the number of parts),

H approaches zero. In Section 5.7, it will be illustrated

that, from a cost point of view, the smallest N which yields

a high value of H may be better than N*.

In the PFA approach the concave relation between H

and N is not as pronounced, as depicted by the graphs of

Figures 5.1 to 5.9. This may be because A (number of additional machines) instead of H (which is defined in terms of

the attributes used in the PCA) was optimized.

(ii) Number of Groups and Additional Machines. The graphs of Figures 5.10 to 5.20 show that the number of machines increases with the number of groups. This is because as

N approaches 1, most of the parts belong to the same group

and few machines are duplicated.


Figure 5.10. Number of additional machines versus number of groups; n = 60, number of each machine type = 1, PFA.

Figure 5.11. Number of additional machines versus number of groups; n = 80, number of each machine type = 1, PFA.


Figure 5.12. Number of additional machines versus number of groups; n = 100, number of each machine type = 1, PFA.

Figure 5.13. Number of additional machines versus number of groups; n = 120, number of each machine type = 1, PFA.


Figure 5.14. Number of additional machines versus number of groups (Burbidge's); n = 43, Ω = 1.0, number of each machine type = 1, PFA.

Figure 5.15. Number of additional machines versus number of groups (Purcheck's); n = 82, Ω = .45, number of each machine type = 1, PFA.


Figure 5.16. Number of additional machines versus number of groups; n = 60, Ω = .3, number of each machine type: uniformly distributed (1, 3).

Figure 5.17. Number of additional machines versus number of groups; n = 60, Ω = .6, number of each machine type: uniformly distributed (1, 3).


Figure 5.18. Number of additional machines versus number of groups; n = 80, Ω = .3, number of each machine type: uniformly distributed (1, 3).

Figure 5.19. Number of additional machines versus number of groups; n = 80, Ω = .6, number of each machine type: uniformly distributed (1, 3).


Figure 5.20. Number of additional machines versus number of groups; n = 100, Ω = .3, number of each machine type: uniformly distributed (1, 3).

Figure 5.21. Number of additional machines versus number of groups; n = 100, Ω = .6, number of each machine type: uniformly distributed (1, 3).


Figure 5.22. Number of additional machines versus number of groups; n = 120, Ω = .3, number of each machine type: uniformly distributed (1, 3).

Figure 5.23. Number of additional machines versus number of groups; n = 120, Ω = .6, number of each machine type: uniformly distributed (1, 3).


On the other hand, as N approaches the number of

parts, n, many groups may have only one member. This means that, in the Exclusive Membership GT, theoretically each part is processed by an exclusive set of machines at N = n.

The number of additional machines will be maximal in this

situation. It follows from these extreme cases that, for

1 < N < n, every additional group may result in more sets of

parts which, being processed by the same machine in the

conventional system, now belong to different groups in

the Group Technology situation.

One important observation is the large number of additional machines resulting from relatively small changes in number of groups in the PCA approach. For instance, in Figure 5.22, there is an increase of ten additional machines for every additional group created (4 ≤ N ≤ 20) in the PCA.

This tends to suggest that, in the GT approach, a large number of groups may not be economically viable; the cost of machines will likely offset any gains that may result from the reduction of machine set-up times. Even if the number of groups is small (say 8), it may be necessary to re-plan the

process route of some members of the groups so as to reduce

machine duplication. This fact was mentioned by some GT

researchers [6, 7].

From the plotted curves of Figures 5.10 to 5.23, a

linear relation tends to hold between A and N. This



linearity appears pronounced in the cases where the number of

each machine type is one (Figures 5.10 to 5.15); the curves

deviate from linearity at the higher values of N in those

cases where each machine type is uniformly distributed

(see Figures 5.16 to 5.23). The cause of the linear re­

sponse of A to N is not obvious to the author.

As expected, there are fewer additional machines in

the PFA as compared to the PCA approach for the problem.

This observation, which is true for all the test problems, supports the effectiveness of the Gradient Technique. Recall that it is only in the PFA that A is minimized. The

curves of Figures 5.10 to 5.23 also indicate a higher number

of additional machines for the high variety as opposed to

low variety problems since high variety of process routes

increases the tendency for the duplication of machines. In

the following sections these results will be used to explain

possible behavior of additional machine cost, set-up cost

and production cost with respect to number of groups.

5.5 Additional Machine Cost and Number of Groups

Let A(N) be the number of additional machines in terms of number of groups and A_c(N) the corresponding cost per schedule period. Thus,

    A_c(N) = r_d C_a A(N)                                    (5.1)


where C_a is the unit cost per machine and r_d is the depreciation rate of machines per schedule period. If it is assumed that C_a and r_d are constants, then A_c(N) and A(N) may exhibit the same behavior. The curves of A_c(N) may, therefore, be similar to those in Figures 5.10 to 5.20. To illustrate, the resulting additional machines in the solution to Problem 5, Table 5.1, are used to compute A_c(N), where C_a = $20,000 and r_d = .01 per schedule period (see Figure 5.24). It is clear from Equation (5.1) that if C_a changes with the number of groups, then the behavior of A_c(N) will deviate from linearity.
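Equation (5.1), evaluated with the illustrative constants used for Problem 5 (C_a = $20,000 per machine and r_d = .01 per schedule period), can be sketched as:

```python
def additional_machine_cost(a_n, c_a=20_000.0, r_d=0.01):
    """A_c(N) = r_d * C_a * A(N): additional machine cost per schedule period.

    a_n : A(N), the number of additional machines at N groups
    c_a : unit cost per machine, dollars
    r_d : depreciation rate of machines per schedule period
    """
    return r_d * c_a * a_n

# With C_a and r_d constant, A_c(N) simply rescales A(N); for example,
# 50 additional machines cost $10,000 per schedule period.
print(additional_machine_cost(50))  # 10000.0
```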

5.6 Set-Up Cost and Number of Groups

Similarly, let S_t(N) be the total set-up time with respect to number of groups and S_c(N) the corresponding cost. Thus,

    S_c(N) = C_s S_t(N)                                      (5.2)

From the set-up time model of Equation (2.18), and denoting the sequence-independent component as S_i, the total set-up time S_t(N) may be expressed as follows:

    S_t(N) = S_o[1 - H(N)] + S_i

Substituting S_t(N) in Equation (5.2),

    S_c(N) = C_s[S_o(1 - H(N)) + S_i]                        (5.3)


Figure 5.24. An example of possible relation between set-up cost, additional machine cost, production cost and number of groups; n = 100, Ω = .3, number of each machine type = 1, GT approach = PCA.


S_c(N) may be a convex curve since 0 ≤ H(N) ≤ 1. Using the values of H(N) of the solutions to Problem 5 in Table 5.1, and setting

    C_s = $15 per hour per schedule period,
    S_o = 2000 hours, and
    S_i = 500 hours,

the function S_c(N) is illustrated in Figure 5.24.
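Equation (5.3) with these values can be sketched as follows (`setup_cost` is our name, not the dissertation's):

```python
def setup_cost(h_n, c_s=15.0, s_o=2000.0, s_i=500.0):
    """S_c(N) = C_s * [S_o * (1 - H(N)) + S_i], dollars per schedule period.

    h_n : group homogeneity H(N) attained with N groups, 0 <= H(N) <= 1
    c_s : set-up cost rate, dollars per hour per schedule period
    s_o : maximum sequence-dependent set-up time, hours
    s_i : sequence-independent set-up time, hours
    """
    return c_s * (s_o * (1.0 - h_n) + s_i)

# The cost falls as grouping raises homogeneity; only the sequence-
# independent component C_s * S_i remains at H(N) = 1.
print(setup_cost(0.0))  # 37500.0
print(setup_cost(1.0))  # 7500.0
```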

As N approaches 1, even if families of similar parts

exist, both similar and dissimilar parts may belong to one

group; changeover time or set-up cost may be relatively

high at this range. An increase of N from a very low value

may result in the assignment of highly similar parts to

production groups. Set-up cost may, therefore, decrease

until the number of production groups equals the number of

families that exist. As N increases further from this

point, fewer similar parts may remain in a group to share

the same machine set-up. Consequently, set-up cost may

increase.

5.7 Production Cost and Number of Groups

In Chapter II, it was discussed that additional ma­

chine cost and set-up cost may be the critical components

of production cost in a GT system. Hence, the relation of

production cost to number of groups may be a combination


of those of S_c(N) and A_c(N). Let P_c(N) denote production cost in terms of N. Thus,

    P_c(N) = S_c(N) + A_c(N)                                 (5.4)

where 1 < N < n. In the cases where S_c(N) is convex, P_c(N) may also be convex since A_c(N) may be an increasing function. Depending on the relative values of S_c(N) and A_c(N), an optimal number of groups, N*, may exist. This is shown in Figure 5.24, where P_c(N) corresponding to Problem 5 of Table 5.1 is plotted.
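Under Equation (5.4), an interior minimizer N* appears whenever the falling set-up cost and the rising machine cost cross over. A coarse numeric sketch illustrates this, using hypothetical H(N) and A(N) shapes that are merely consistent with the observed curves, not the dissertation's data:

```python
def production_cost(n, h_of_n, a_of_n,
                    c_s=15.0, s_o=2000.0, s_i=500.0,
                    c_a=20_000.0, r_d=0.01):
    """P_c(N) = S_c(N) + A_c(N), Equation (5.4)."""
    setup = c_s * (s_o * (1.0 - h_of_n(n)) + s_i)   # Equation (5.3)
    machines = r_d * c_a * a_of_n(n)                # Equation (5.1)
    return setup + machines

# Hypothetical shapes consistent with the experiments: H concave, A linear.
h = lambda n: min(n / 15.0, 1.0) * max(1.0 - n / 120.0, 0.0)
a = lambda n: 6 * (n - 1)

costs = {n: production_cost(n, h, a) for n in range(1, 61)}
n_star = min(costs, key=costs.get)  # the optimal number of groups N*
```

With these shapes the minimum falls at a small N, echoing the observation that a small number of production groups tends to be better than a large one.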

However, if costly machines form the bulk of additional machines, the unit cost of machines [C_a in Equation (5.1)] will be large. The curve A_c(N) in Figure 5.24 will, therefore, shift to the left, causing the cost of additional machines to be the dominant factor of production cost, P_c(N). In this situation, the curve P_c(N) may increase monotonically, making the GT approach unprofitable for any number of groups.

Also, the profitability of the GT approach may depend

on the value of S_o [see Equation (2.18) or (5.3)], the maximum possible set-up time that can be reduced by the grouping of parts. If S_o is large, and families of similar parts exist, the set-up cost reduction will be large enough to offset the cost of additional machines. A large batchshop where families of highly similar parts exist may be ideal for GT; this situation favors large values of S_o.

From the observed results (Figures 5.1 to 5.24), it appears that the formation of a small number of production groups may be better than a large one. Small values of N tend to yield lower values of set-up cost, S_c(N) [or higher values of H(N)]. Large values of N will increase the cost of additional machines without any additional reduction in set-up cost.

5.8 General Observations

From the results of this study some general obser­

vations concerning the feasibility of changing over to GT

systems from the traditional arrangement have become ob­

vious. In the discussion of the set-up time-similarity of parts model, one of the assumptions was that all parts in a group use the same set of machines. This means that

parts in a group must be similar not only in design character­

istics but also in process routes. Now, consider a situa­

tion in the PCA where this assumption is relaxed; that is,

parts which are similar in design characteristics are pro­

cessed by different machines. Set-up time for every opera­

tion in the group will be sequence independent. No set-up

time reductions due to grouping can take place.



The same argument holds for the PFA. The grouping

of parts which are similar in process routes but dissimi­

lar in set-up related characteristics may not result in

set-up time reduction. This is obvious from the set-up

time-similarity of parts model presented in Chapter II.

This suggests that both the PCA and PFA GT approaches may

be viable alternatives only if a set of parts similar in

the combined set of characteristics exist in the conven­

tional shop. Thus, it appears reasonable to infer that,

if a conventional system is suitable for GT production,

then the PCA, PFA or even the combined approaches may not

differ significantly as previous researchers tend to claim

[6, 12, 16]. An optimal partition in the PCA approach may

be the same solution in the PFA as well as in the combined

approach.

5.9 Possible Application of Research Results

The results of this research may be of value both

for further theoretical studies as well as industrial ap­

plications. As demonstrated in this chapter, the Gradient

Algorithm may be used as a tool to investigate the behavior

of GT systems under various manufacturing environments. As

stated in Chapter I, there is a need for experimental stu­

dies of the behavior of Group Technology production systems.


The same procedure outlined in Section 5.2 may be

used to perform a feasibility study of a particular indus­

trial batchshop problem in order to determine the possibil­

ity of implementing GT. By grouping parts in the PCA or PFA approach, possible set-up cost savings and additional machine costs may be determined. This will be based entirely on the

prevailing conditions of the problem, in contrast to the currently practiced "rule of thumb" philosophy

that what works for company A will also work for company B

[45]. Considering a planning horizon, the determined set­

up savings, additional machine cost and other expenses may

then be used in a cost-benefit analysis. The decision to

adopt or reject a GT system can, therefore, be made in a more

objective manner.

Using a graphical approach, as shown in Figure 5.24,

the procedure of Section 5.2 may also be used to determine

optimal number of groups for a particular problem.

The system of characterizing GT solutions discussed

in Chapter III may enable a GT system designer to decide

preliminarily which problem may be good for GT production.

For instance, if Multiple Type I (H = 1, L = 0) or Natural Type I (H = 1, 0 < L < 1) were identified, then the designer may

infer that GT may be an attractive alternative. An expla­

nation was presented in Section 3.3 of Chapter III.


CHAPTER VI

CONCLUSIONS

6.1 Summary of Research

This research dealt with the problem of grouping

parts for GT production systems. The grouping problem

was formulated as a combinatorial optimization problem;

the different classes were identified, its relation to

Cluster Analysis discussed, and the current state of the

art in the GT approach reviewed.

In an attempt to solve the grouping problem effi­

ciently, a criterion of optimization was formulated.

Also, possible forms of solution to the grouping problem

were characterized and a grouping algorithm as a solution

technique presented. Using this algorithm as the tool,

a procedure for studying the behavior of production cost

in a GT system was outlined. To illustrate this procedure

the possible effect of number of groups on production cost

was investigated.

Finally, study results were discussed and possible

applications of research information suggested; the con­

clusions and recommendations for further research follow.

6.2 Conclusions

In the analysis of the grouping problem some conclu­

sions can be drawn concerning grouping criterion, optimality


conditions, a grouping algorithm, and number of groups. These are discussed in order.

1. Grouping Criterion:

(i) The relation between set-up time and the similarity of parts discussed in Chapter II suggests that the maximization of group homogeneity, H, and the minimization of set-up time, S_t, are equivalent grouping criteria. The

models presented in Equations (2.8) and (2.18) support

this conclusion.

(ii) A solution to the grouping problem may be characterized by the relative values of group homogeneity, H,

and the link between groups, L. A solution with a high value of H and a low value of L is a preliminary indication that

the resulting groups may be suitable for GT production.

This observation was explained in Sections 3.2 and 3.3 of

Chapter III. On the other hand, simultaneous occurrence of

high values of H and L, or low values of H and L indicates

that the solution may not be suitable for GT production.

The universal population problem (H = 1, L = 1) and the null-relation population problem (H = 0, L = 0) are

good examples of unsuitability to GT.


2. Optimality Conditions:

The existence of optimality criteria may

influence the type of solution technique that can be

developed. For instance, in cases where optimality con­

ditions are not known, the only way to ensure an optimal solution is by an enumerative technique, which can be inefficient. In the analysis of the GT grouping problem,

it has been observed that optimality conditions exist

for some classes of problems. In the PFA approach, a

global optimal solution in the Multiple Population prob­

lem requires that L be equal to zero. In the PCA approach

a global optimal solution for the Multiple Type I is char­

acterized by H = 1. These observations have been explained

in Section 3.4.

3. Grouping Algorithm:

The Gradient Algorithm may be used to solve

the grouping problem in either the PCA or PFA as shown

in Theorem 4.1 and verified by the numerous examples of

Chapters IV and V. It may also be a suitable technique for

studying the behavior of Group Technology systems. This

has been demonstrated in Chapter V.


4. Number of Groups:

(i) The number of additional machines required

for the Exclusive Membership GT problem depends on the

number of groups; additional machines increase with number

of groups. This observed relation is supported by the

results of all the twenty-four test problems discussed in

Section 5.3. It is also consistent with observations made

from empirical examples by past researchers [6, 45]. In

the case where ntimber of each machine type is one, the

relation between additional machines and number of groups

may be approximated by a linear model as shown in the

graphs of Figures 5.10 to 5.15. A straight line fit appears

obvious by inspection.

(ii) In a situation where families of parts exist

and each constrained problem is solved optimally, the re­

lation between set-up time and number of groups may be ap­

proximated by a convex curve in some cases. It follows

that if unit set-up cost is constant, then total set-up

cost may also have a convex relation with number of groups.

Explanations of these observations have been given in Sections 5.4 and 5.6.

From the above stated observations, it appears rea­

sonable to add that number of groups is an important para­

meter in the GT system. The relations between number of groups and additional machines, and between number of groups and set-up time, appear significant in all the observed cases.


6.3 Recommendations for Further Study

1. Because of resource and time constraints, at­

tention was focused primarily on the Exclusive Membership

GT problem using the Gradient Technique. It is apparent

that the method of this research may be used to investigate

the Non-Exclusive Membership and the Hybrid GT grouping problems. It is recommended here that these alternatives

be studied using either the Gradient approach or other ap­

proaches like Integer programming, Dynamic programming

and Branch and Bound.

2. The Gradient Technique established in this

study may be combined with discrete simulation languages

in a general methodology for studying several factors of

GT production. For instance, such a methodology may be

suitable for investigating the possible impact which pro­

duction control functions may have on GT systems. The

development of such a methodology was not pursued because

of resource constraints. Its development may be worthwhile

not only for further investigations but also for a compre­

hensive design of GT systems. The combination of the Gra­

dient Technique (or other methods) and discrete simulation

languages may enable the comparison of several GT alterna­

tives (including no GT approach) for any given problem.

In such a methodology, the Gradient Technique may be used

to evaluate several aspects of the production control

functions.


3. The use of weighting parameters in order to

reflect the relative importance of attributes of parts was

demonstrated in the PCA approach (see Section 2.3). An

investigation of how weights may be assigned to machines in order to properly reflect the relative importance of machines in the PFA grouping problem may be worthwhile.

Weights may be defined in terms of cost or machine loads.

4. In Chapter V it was illustrated that an opti­

mal number of groups exists for some GT problems. The relations between the number of groups and the cost of additional machines, as well as between the number of groups and set-up costs, may be used to formulate a model for the analytic determination of the optimal number of groups. Development of such a model was not pursued because more extensive experimentation would be required to determine the

parameters of the relation between number of groups and

production cost. A study of such a model is recommended.

5. The central focus of this study has been the

batchshop machine set-up problem. It is obvious that the

processing times-based scheduling problem is also important.

For sequence-dependent set-up times, a solution to the set­

up problem may not imply a solution to the processing time-

based scheduling problem and vice versa. It appears that

the relative values of processing and set-up times per operation may be critical in the decision as to which


problem to solve: the set-up problem as pursued in the

GT approach or the scheduling problem as sometimes pursued

in the conventional arrangement. In a batchshop where

processing times, relative to set-up times, are the domi­

nant components of schedule time, it may be better to solve

the processing time-based scheduling problem. On the other

hand, if set-up time per operation is much higher, then

the GT approach may be a better alternative. This suggests

that some critical ratio of processing time/set-up time

per operation that makes the GT approach the better alter­

native may exist. The study of such a critical ratio is

recommended.

6. Throughout this study only similarity of parts

(defined in terms of process routes or material type, size

and shape, accuracy requirement and features to be processed)

has been considered in the grouping process. While this

approach is ideal for the machine set-up problem, considera­

tion of additional parameters may be necessary for the

batchshop problem. The grouping of parts with consideration of the volume of individual parts is one example. The modification of the grouping procedure developed in this study such that parameters like volume, due dates, value of

parts, etc., are taken into consideration may be worthwhile.


REFERENCES

1. Alley, Lee, "Ranking Group Characteristics by Relative Typicalities and Subject Assignment According to Largest Group Similarity," Unpublished Paper, University of Nebraska, Dept. of Computer Science, November 1970.

2. Abou-Zied, Mohammed Raafat, "Group Technology and the Manufacturing Systems of the Jobshops," Industrial Engineering, Vol. 32.

3. Anderberg, M. R., Cluster Analysis for Applications, Academic Press, New York, 1973.

4. Bergen, Jay H., "Parts Classification as a Basis for Programmed Process Planning," Hertec Corp., 1975.

5. Bonner, R. E., "On Clustering Techniques," IBM Journal, January 1964.

6. Burbidge, John L., The Introduction of Group Technology, John Wiley & Sons, New York, 1975.

7. Burbidge, John L., "Production Flow Analysis," The Production Engineer, 42, 12, 1963.

8. Carrie, A. S., "Numerical Taxonomy Applied to Group Technology and Plant Layout," International Journal of Production Research, 11, 4, 1973.

9. Craven, F. W., "Some Constraints, Fallacies and Solutions in Group Technology Applications," Proceedings, 14th International Machine Tool Design and Research Conference, Birmingham, England, MacMillan Press Ltd., London, 1974.

10. Dedich, W. I., Soyster, A., and Ham, I., "The Optimal Formulation of Production Group Flowlines," Proceedings of the Second North American Metal Working Research Conference, 1974.

11. Eckert, Roger L., "Codes and Classification Systems," American Machinist, December 197 .

12. El-Essawy, I. G. K., and Torrance, J., "Component Flow Analysis: An Effective Approach to Production Systems Design," The Production Engineer, 51, 5, May 1972.

13. Edwards, E. A. B., "The Family Grouping Philosophy," International Journal of Production Research, 9, 3, 1971.

14. Elgomeyel, Y. I., "Group Technology and Computer Aided Programming for Manufacturing," SME, 20501 Ford Road, Dearborn, Michigan 48128, 1973.

15. Fisher, W. D., "On Grouping for Maximum Homogeneity," Journal of the American Statistical Association, December 1958.

16. Gallagher, C. C., and Knight, W. A., Group Technology, Butterworths, London, 1973.

17. Gallagher, C. C., and Abraham, B. L., "A Factory Component Profile for Group Technology," Machinery and Production Engineering, 28 February 1973.

18. Gombinski, J., "Fundamental Aspects of Component Classification," Annals of the CIRP, Vol. 27, 1969.

19. Gottfried, B. S., and Weisman, J., Introduction to Optimization Theory, Prentice-Hall, Inc., Englewood Cliffs, 1973.

20. Gupta, J. N., and Dudek, R. A., "Optimality Criteria for Flowshop Schedules," AIIE Transactions, Vol. 3, 3, September 1971.

21. Hartigan, J. A., Clustering Algorithms, John Wiley & Sons, New York, 1975.

22. Harworth, E. A., "Group Technology Using the Opitz System," The Production Engineer, January 1968.

23. Houtzeel, A., MICLASS, A Classification System Based on Group Technology; MA 75-721.

'24. Iwata, K. and Takano, K. , "Cost Analysis of Process Planning in Integrated Manufacturing Systems," International Journal of Production Research, 15, 5, 1977.

Page 171: Annrovp - TDL

159

25. Jardine, N. and Sibson, R., Mathematical Taxonomy. John Wiley & Sons Limited, tiew York, 1963.—

26. Jensen, R. E., "A Dynamic Programming Algorithm for Cluster Analysis," Operation Research*, 17, 1034-1057, 1968. ^ -

27. Knight, W. A., "Component Classification Systems in Design and Production," Production Technology, 33, 4, 1972. ^

28. Koloc, J., "The Use of Workpiece Statistics to Develop Automatic Programming for NO Machine Tools," Inter­national Journal of Machine Tools Desien Research, 5, 65-80, 1969.

29. Kruse, G. , Swinfield, D. G. J., and Thornley, R. H. , ^ "Design of Group Technology Plant and Its Associa­

ted Production Control System," Production Engi­neer, July/August 1975.

30. Leonard, R. and Koenigsberger, F. , "Conditions for the Introduction of Group Technology," International Machine Tool Design, 13th Proceedings, Burmingham England, MacMillan Press Ltd., London, 1972.

V 31. Merchant, Eugene M. , "Future Trends in Manufacturing -Toward the Year 2000," Annof of^CIRP, 25, 2, 1976.

V 32. Middle, G. H. , Connally, R. , andThtoiley, R. H. , "Organization Problems and Relevant Manufacturing Systems," International Journal of Production Re­search, 9, 2, 1971.

33. Milrofanov, S. P., Scientific Principles of Group Technology Part I, National Lending Library for Science and Technology, Boston SPA, 1966.

Ni4. Opitz, H. and Weindahl, H. P. , "Group Tecjinology and ^ ^ Manufacturing Systems for Small and Medium Quan­

tity Production," International Journal of Pro­duction Research, 9, 1, 19/1.

V55. Opitz, H., Eersheim, W. , and Weindahl, H ?- "^o^^" ^ piece Classification and Its Indjjstrial Applica­

tion," International Journal of Machine Tool De­sign and Research, 9.39- )0, 190^.

Page 172: Annrovp - TDL

160

'36.

»/42;

pxeces. Fart I, Pergamon Press, 1970.

37. Ptircheck G "A Mathematical Classification as a

??oi rf?T '^S* °*^i8? of Group Technology Produc- /, i P tion Cells," .Tmimnl of rymbBi.maLlL,ii. d ^ \uni -PntLJ^

V38. Rajagopalan, R. and Batra, "Design of Cellular Pro- ^ 1 ^ 5 duction System, A Graph Theoretic Approach," ?nter- ' national Journal of Production Research, 13, ^7^

^39'. Ross, G. J. S. "Minimum Spanning Tree," Applied Sta­tistics, 18, 103-104, 1969. ^^

v40. Ross C. J. S., "Single Linkage Cluster Analysis." Applied Statistics. 18, 106-110, 1969.

,/4ll Rubin, Jerrold, "Optimal Classification into Groups, An Approach to Solving the Taxonomy Problem," Journal of Theoretical Biology. 15, 103-144 1967.

Sage, Andrew P., Methodology for Large Scale Systems, McGraw-Hill, New York, 1977.

43. Shue, Li-Yen, "Sequential Application of Simple Sche­duling Rules," An Unpublished Dissertation in In­dustrial Engineering, Texas Tech University, Lubbock, Texas, 1976.

•^44'. Sneath, P. H. A. and Sokal, R. S. , Ntimerical Taxonomy, Principles and Practice of Ntimerical Classification, W. H. Friedman 6e Co., San Francisco, 1973.

45. Shultz, D. and Ostwald, P., "Cost Estimating for Stra­tegical Decisions in Manufacturing Group Classi­fied Designs," ASME Paper, 74-DE-7, 1974.

46\ Solaja, V. B. and Urosevic, S. M. , "An Integral Con­cept of Group Technology," SME Technical Paper, 1971.

> 4f. White, C. H. and Wilson, R. C, , "Sequence Dependent Set-Up Times and Job Sequencing," International Journal of Production Research, 15, 2, 1977.


APPENDIX A

LISTING OF FORTRAN IV PROGRAM OF THE GRADIENT ALGORITHM


PROGRAM LISTING

The following program

1) generates hypothetical batchshop problems in the combined PCA and PFA format,

2) computes similarity coefficients for either the PCA or PFA,

3) optimizes either H or L, and

4) computes the number of additional machines.


VARIABLES

NOP    number of operations
C      cost of machines
N      number of parts
NUMG   number of groups
W      relative importance of characteristics
NOM    number of machines

SUBROUTINES

PINDEX: this subroutine accomplishes the Preference Index Heuristic

TSTEP: this accomplishes the Gradient Algorithm using only the transfer step

PCASIM: computes similarity coefficients with set-up related characteristics
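For readers without a FORTRAN IV environment, the computation performed by PCASIM can be sketched in modern Python. The weighted-match form below follows the legible parts of the listing (parts are compared on their set-up related characteristics, weighted by W); the function name is ours.

```python
def similarity(part_i, part_j, weights):
    """Weighted similarity coefficient between two parts.

    part_i, part_j: sequences of characteristic values (e.g. Opitz digits).
    weights: relative importance of each characteristic (the W array).
    Matching characteristics contribute their weight; the coefficient is
    matched / (2 * total - matched), so it runs from 0 (no match) to 1.
    """
    total = 2.0 * sum(weights)
    matched = sum(w for a, b, w in zip(part_i, part_j, weights) if a == b)
    return matched / (total - matched)
```

With equal weights, two parts matching on two of three characteristics score 2 / (6 - 2) = 0.5, and identical parts score 1.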


[Main program: FORTRAN IV (G, Level 21) compiler listing; the text is illegible in the scanned copy.]

[SUBROUTINE PINDEX (the Preference Index Heuristic): compiler listing; the text is largely illegible in the scanned copy.]

[SUBROUTINE TSTEP: tests the optimality of solutions and makes the necessary transfer moves (Gradient Algorithm, transfer step only); the listing is largely illegible in the scanned copy.]

      SUBROUTINE PCASIM(MPART,W)
      DIMENSION MPART(130,47),W(50)
      COMMON/BLK1/SIMFR(130,130),XL(70),H(70),
     2VL(130),NPIG(70),IBUFA(130),MPARTS(70,120),
     3N,NUMG,IOBJ,ITER,MOVES,JJJ,ZZ
      Y=0.0
      Z=N*(N-1)
      TOTAL=0.0
      DO 507 K=1,5
  507 Y=Y+W(K+40)
      Y=Y*2.0
      N1=N-1
      DO 508 I=1,N1
      I1=I+1
      DO 508 J=I1,N
      SUM=0.0
      DO 509 K=1,5
  509 IF(MPART(I,K+40).EQ.MPART(J,K+40))SUM=SUM+W(K+40)
      SIMFR(I,J)=SUM/(Y-SUM)
      TOTAL=TOTAL+SIMFR(I,J)*2.0
      SIMFR(J,I)=SIMFR(I,J)
  508 CONTINUE
      TOTAL=TOTAL/Z
      PRINT 902,TOTAL
  902 FORMAT(20X,F15.6)
      RETURN
      END


      SUBROUTINE RANDU(IX,YEL)
      IY=IX*1220703125
      IF(IY)1,2,2
    1 IY=IY+2147483647+1
    2 YEL=IY
      YEL=YEL*0.4656613E-9
      IX=IY
      RETURN
      END
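The routine above is a multiplicative congruential generator of the RANDU family: the seed is multiplied by 1220703125 and reduced modulo 2**31, and the IF branch repairs the wrap-around that FORTRAN obtains from signed 32-bit integer overflow. A minimal Python equivalent, assuming the constants as reconstructed from the scan:

```python
def randu(ix):
    """One step of the multiplicative congruential generator sketched above.

    Returns (new_seed, uniform_deviate). The explicit mod 2**31 reproduces
    the signed-overflow correction performed by the FORTRAN version.
    """
    iy = (ix * 1220703125) % 2**31
    yel = iy * 0.4656613e-9  # scale factor is approximately 2**-31
    return iy, yel
```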


APPENDIX B

OPITZ PART CLASSIFICATION SYSTEM


The shape, size, processing features, material, tol­

erance, etc., of a part are described by a nine-digit clas­

sification number system. The first five digits are the

main form while the last four are supplementary (optional).

Figure 7 shows the general layout of the Opitz system

while Figure 8 represents the form as used in this study.

The first digit designates whether a part is rota­

tional or nonrotational and the overall size. If a part

classification code has zero in digit one it is rotational

and its length to diameter ratio (L/D) is less than or

equal to .5. A value of 6 in the first digit indicates

nonrotational with the ratio of the longest to the shortest
dimension less than or equal to 3 (A/B ≤ 3). The meanings
of the other digits are shown in Figure 7. Each digit can assume values

ranging between 0 and 9. A number within a digit has a

unique meaning as shown in Figure 8. Higher values denote

complexity on all the digits except digit one.

For this study only two of the supplementary codes

are appended to the main form code (see Figure 8). The

values from 0 to 9 in digit 6 indicate increasing order of

accuracy requirement. Those in digit 7 show variation in

material type. The remaining digits are as in Opitz's original system.
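As a small illustration of how such a code can be handled programmatically (a modern sketch, not part of the original appendix; function names are ours):

```python
def split_opitz(code):
    """Split a nine-digit Opitz code into its main-form (first five)
    and supplementary (last four) digits."""
    if len(code) != 9 or not code.isdigit():
        raise ValueError("expected a nine-digit Opitz code")
    return code[:5], code[5:]

def first_digit_class(code):
    """Interpret digit one per the description above; other values
    require the full table in Figure 7."""
    d = int(code[0])
    if d == 0:
        return "rotational, L/D <= 0.5"
    if d == 6:
        return "nonrotational, A/B <= 3"
    return "see Figure 7 for digit value %d" % d
```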

[Figure 7: General layout of the Opitz classification system; the figure is illegible in the scanned copy.]

[Figure 8: The Opitz classification form as used in this study; the figure is illegible in the scanned copy.]


APPENDIX C

TABLES OF GROUP HOMOGENEITY (H) AND NUMBER OF ADDITIONAL MACHINES (A) AGAINST NUMBER OF GROUPS


(i) Group Homogeneity (H) and Number of Additional Machines (A): n = 60; Number of Each Machine Type is Uniformly Distributed Between 1 and 3.

      μ = .3 PFA     μ = .6 PFA     μ = .3 PCA     μ = .6 PCA
 N     H       A      H       A      H       A      H       A
 2   .1972     0    .1590     0    .6005     0    .4133     0
 4   .1764     0    .1679     0    .8059     3    .7939     3
 6   .2372     1    .1262     1    .5827    29    .7251    10
 8   .2128     3    .1467     2    .7561    47    .8558    17
10   .2617     3    .2018     7    .8535    56    .8837    27
12   .2660     3    .1827    11    .9200    65    .9327    33
14   .3275     3    .1843    15    .9420    80    .9140    51
16   .3388     6    .1915    20    .9693    83    .9249    67
18   .3350     7    .1825    27    .9859    88    .9312    73
20   .3261     9    .1615    30    .9873    91    .8794    92
22   .3404    16    .1585    39    .9884   101    .8811   102
24   .3595    22    .1569    41    .9798   111    .8612   121
26   .3608    28    .1498    47    .9813   115    .8146   132
28   .3565    40    .1663    56    .9801   123    .7259   143
30   .3042    44    .2287    63    .9692   129    .7058   141
32   .2717    48    .1366    64    .8750   130    .6635   144
34   .2604    50    .1240    66    .7647   130    .5266   152
36   .2081    57    .1104    70    .6666   132    .4451   152
38   .1401    63    .0855    73    .5789   135    .4023   152
40   .1020    67    .0702    78    .5000   135    .3554   152
42   .0726    71    .0681    83    .4285   135    .3623   153
44   .0383    77    .0618    87    .3636   135    .3104   155
46   .0333    87    .0547    91    .3043   135    .2665   157
48   .0263    95    .0425    99    .2500   135    .2137   159
50   .0301   103    .0320   106    .2000   139    .1822   162
52   .0237   111    .0293   115    .1538   138    .1367   163
54   .0164   117    .0217   124    .1111   138    .1111   164
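The H values tabulated here are within-group average similarities. The computation can be sketched in modern Python (a hedged illustration; names are ours, and counting a singleton group as perfectly homogeneous is an assumption):

```python
def group_homogeneity(sim, groups):
    """Average within-group pairwise similarity H over all groups.

    sim: symmetric part-to-part similarity matrix (sim[i][j]).
    groups: list of groups, each a list of part indices.
    """
    per_group = []
    for g in groups:
        m = len(g)
        if m < 2:
            per_group.append(1.0)  # assumption: a lone part is fully homogeneous
            continue
        # all unordered pairs within the group
        pairs = [(a, b) for k, a in enumerate(g) for b in g[k + 1:]]
        per_group.append(sum(sim[a][b] for a, b in pairs) / len(pairs))
    return sum(per_group) / len(per_group)
```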

(ii) Group Homogeneity (H) and Number of Additional Machines (A): n = 80; Number of Each Machine Type is Uniformly Distributed Between 1 and 3.

      μ = .3 PFA     μ = .6 PFA     μ = .3 PCA     μ = .6 PCA
 N     H       A      H       A      H       A      H       A
 2   .1997     0    .1568     0    .2860     0    .2301     0
 4   .1884     0    .2168     1    .6649     4    .6589     2
 6   .1729     0    .1331     1    .8033    24    .7718    10
 8   .1777     0    .1764     2    .8531    44    .7618    23
10   .1823     2    .1627     5    .8897    61    .7621    42
12   .1805     4    .1867     7    .8769    77    .7190    60
14   .1684     6    .1788    10    .9175    93    .8316    72
16   .1894     7    .1840    15    .9561   115    .8253    88
18   .2191     8    .1913    14    .9621   128    .8453    95
20   .2139    10    .1887    18    .9682   138    .8888   101
22   .2084    15    .2028    30    .9712   151    .8987   123
24   .1995    18    .1872    45    .9736   158    .8522   144
26   .1751    27    .1740    55    .9798   162    .8782   150
28   .1976    34    .1793    65    .9812   167    .8777   171
30   .2010    41    .1732    87    .9890   169    .8709   185
32   .1865    50    .1695    94    .9596   177    .8353   196
34   .1952    59    .1707   104    .9764   176    .8144   207
36   .1685    66    .2011   107    .9628   189    .8036   221
38   .1754    72    .1985   112    .8952   194    .7614   224
40   .1757    84    .2058   123    .8351   191    .6611   236
42   .1741    87    .1758   130    .7264   199    .6916   240
44   .1820    89    .1623   134    .7175   200    .5315   240
46   .1625    94    .1519   137    .6676   192    .5151   240
48   .1545    97    .1450   136    .6416   203    .4809   234
50   .1377   104    .1359   139    .5760   206    .4816   236
52   .1412   107    .1149   143    .5153   216    .4246   236
54   .1298   111    .1058   146    .4703   213    .3837   237
56   .1154   119    .0828   154    .4285   213    .3343   237
58   .0947   126    .0800   159    .3793   213    .3243   242

(iii) Group Homogeneity (H) and Number of Additional Machines (A): n = 100; Number of Each Machine Type is Uniformly Distributed Between 1 and 3.

      μ = .3 PFA     μ = .6 PFA     μ = .3 PCA     μ = .6 PCA
 N     H       A      H       A      H       A      H       A
 2   .2461     0    .1798     0    .2466     0    .5884     0
 4   .1899     0    .1833     0    .5155     7    .6632     7
 6   .1946     1    .1849     1    .8656    12    .8625    20
 8   .1888     2    .1981     1    .9020    39    .8974    42
10   .1901     4    .2026     3    .9230    69    .9186    53
12   .1839     4    .1874     3    .8953   103    .8914    79
14   .1858     5    .1922     7    .9454   118    .9424    98
16   .1644     8    .2111     9    .9533   141    .9226   113
18   .1689    10    .1856     9    .9586   153    .8982   135
20   .1641    14    .2325    31    .9621   171    .8785   140
22   .1691    17    .2248    17    .9658   183    .8895   143
24   .1781    20    .2001    30    .9691   197    .8989   159
26   .2059    22    .2075    47    .9714   202    .8669   175
28   .2167    30    .2047    50    .9754   212    .8462   187
30   .2157    34    .1998    60    .9770   216    .8420   206
32   .2107    44    .1984    66    .9788   232    .8509   225
34   .2253    54    .1958    73    .9791   247    .8729   235
36   .2150    63    .1984    84    .9802   256    .8507   254
38   .2036    75    .1900   100    .9812   257    .8518   272
40   .2069    84    .1996   115    .9711   261    .8292   270
42   .2126    92    .1932   123    .9433   275    .8074   296
44   .2011    98    .1780   147    .9379   278    .8067   310
46   .1925   102    .1700   157    .9228   288    .7401   318
48   .1911   118    .1860   161    .9055   284    .6868   325
50   .1958   131    .1045   182    .8774   283    .5784   336
52   .1905   131    .1510   185    .7965   285    .5352   343
54   .1730   135    .1356   190    .6971   284    .5591   337
56   .1515   139    .1229   192    .6908   289    .5272   339
58   .1441   145    .1197   193    .6508   286    .4748   342

(iv) Group Homogeneity (H) and Number of Additional Machines (A): n = 120; Number of Each Machine Type is Uniformly Distributed Between 1 and 3.

      μ = .3 PFA     μ = .6 PFA     μ = .3 PCA     μ = .6 PCA
 N     H       A      H       A      H       A      H       A
 2   .2758     0    .1258     0    .2600     0    .5732     0
 4   .2215     0    .1625     0    .5818    14    .7900     2
 6   .2098     1    .1715     0    .8673    17    .7818    17
 8   .1830     4    .1421     4    .9011    42    .7802    23
10   .2100     4    .1580     4    .9214    75    .7349    47
12   .2022     9    .1626     6    .9347    96    .8106    57
14   .1925    14    .1499     9    .9113   133    .8752    73
16   .1868    17    .1517    14    .9265   161    .8037    93
18   .1784    18    .1446    19    .9586   186    .8187   107
20   .1706    23    .1380    24    .9630   209    .8392   121
22   .1786    25    .1526    27    .9373   234    .8317   133
24   .1941    24    .1587    34    .9718   267    .8493   151
26   .1904    29    .1595    39    .9626   294    .8818   179
28   .1949    32    .1338    47    .9765   281    .8705   183
30   .1917    38    .1304    54    .9605   304    .9005   221
32   .1887    38    .1587    64    .9627   304    .8956   203
34   .1848    44    .1642    80    .9649   316    .8930   246
36   .1851    52    .1701    84    .9682   335    .8710   253
38   .1840    55    .1764    99    .9707   348    .8884   257
40   .1840    62    .1804   107    .9600   347    .8790   262
42   .1816    74    .1802   120    .9561   367    .8647   282
44   .1857    87    .1777   133    .9576   381    .8524   311
46   .1877    88    .1720   141    .9679   364    .8457   322
48   .1820   101    .1713   151    .9735   371    .8468   332
50   .1900   123    .1627   177    .9532   390    .8046   359
52   .1808   133    .1614   187    .9642   387    .7891   370
54   .1881   155    .1665   206    .9362   384    .7409   373
56   .1890   177    .1637   214    .9142   392    .7309   392
58   .1896   187    .1591   237    .8689   402    .7714   401

(v) Number of Additional Machines (A): Number of Each Machine Type is 1.

       n = 60       n = 80       n = 100      n = 120
 N    μ=.3  μ=.6   μ=.3  μ=.6   μ=.3  μ=.6   μ=.3  μ=.6
 2      8     2     11     7      5     3      2     3
 4     11     9      8    19     12    15     14    13
 6     16    21     20    29     21    23     22    22
 8     19    36     27    37     24    48     39    30
10     23    47     32    49     29    38     48    43
12     27    56     37    49     38    45     51    60
14     39    59     40    66     48    55     48    66
16     40    73     49    78     55    76     55    79
18     45    75     53    94     60    86     60    96
20     46    74     62   103     68    92     68   106
22     67    86     64   121     78   102     83   115
24     74   102     71   129     85   124     91   124
26     91   112     77   131     91   140    101   140
28    101   121     86   141     97   133    104   148
30    107   127     93   160    102   150    112   161
32    110   132    103   167    122   167    119   172
34    119   133    117   186    123   189    129   173
36    120   137    123   200    136   191    144   188
38    126   138    134   203    146   208    151   200
40    131   143    144   211    157   215    171   224
42    135   152    146   207    170   229    175   239
44    142   157    160   210    176   246    187   252
46    154   162    165   214    187   267    204   270

(vi) Solution to Case Study Problems.

         Burbidge's Prob.   Purcheck's Prob.   Harworth's Prob.
No. of   n = 43, M = 16     n = 82, M = 19     n = 85
Groups        PFA                PFA                PCA
  N            A                  A                  H
  2            2                  1                .3094
  4            9                  3                .6565
  6           11                  7                .8162
  8           13                 10                .8194
 10           15                 15                .8558
 12           19                 19                .8526
 14           22                 25                .8496
 16           26                 35                .8472
 18           32                 52                .8456
 20           36                 57                .8447
 22           42                 63                .8438
 24           47                 69                .8427
 26           50                 75                .8328
 28           56                 81                .8209
 30           64                 89                .7912
 32           70                 99                .7555
 34           77                108                .7395
 36           83                119                .7075
 38           92                126                .7672
 40            -                134                .7656
 42            -                148                .7060
 44            -                156                .6926
 46            -                164                .6583
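The A columns count machine copies that must be added when each group becomes a self-contained cell. A simple way to estimate such a count can be sketched in modern Python (an illustration under the simplifying assumption that one copy of a machine type suffices per cell; names are ours):

```python
def additional_machines(routes, groups, available):
    """Extra machine copies needed to make every group self-contained.

    routes: dict part -> set of machine types the part visits.
    groups: list of groups, each a list of part identifiers.
    available: dict machine type -> copies currently on hand.
    """
    need = {}
    for g in groups:
        used = set().union(*(routes[p] for p in g))  # types this cell requires
        for m in used:
            need[m] = need.get(m, 0) + 1
    types = set(need) | set(available)
    return sum(max(0, need.get(m, 0) - available.get(m, 0)) for m in types)
```

For example, if two cells both require a machine type of which only one copy exists, one additional copy of that type is counted.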
