

GENETIC ALGORITHMS IN PIPELINE OPTIMIZATION

By David E. Goldberg,1 M. ASCE, and Chie Hsiung Kuo2

ABSTRACT: The application of a genetic algorithm to the steady state optimization of a serial liquid pipeline is considered. Genetic algorithms are search procedures based upon the mechanics of natural genetics, combining a Darwinian survival-of-the-fittest philosophy with a random yet structured information exchange among a population of artificial chromosomes. Computer results show surprising speed as near-optimal results are obtained after examining a small fraction of the search space. The method is ready for application to more difficult optimization problems in civil engineering.

INTRODUCTION

Over the years, many methods have been applied to optimize both the design and operation of pipeline systems. Some methods, like dynamic programming, require an unholy mixture of model and optimization procedure, which thwarts the construction of modular programs. Other methods, such as calculus-based gradient techniques, require the construction or approximation of derivative information, and even then these methods can only hope to achieve local optima. As a result, there is still a need for optimization procedures that (1) are free from a particular program structure; (2) require a minimum of auxiliary information to guide the search; and (3) have a more global perspective than many of the techniques in common usage.

In this paper, an algorithm with these characteristics is examined in detail. This algorithm, called a genetic algorithm (GA), is based on the mechanics of natural genetics (Holland 1975). It searches through large spaces quickly even though it requires only payoff (objective function value) information. Furthermore, because of the processing leverage associated with genetic algorithms, the method has a much more global orientation than many methods encountered in engineering optimization practice (Goldberg 1983). These favorable characteristics of genetic algorithms have been theoretically investigated in Holland's (1975) monograph. Empirical investigations by Hollstien (1971) and DeJong (1975) have demonstrated the efficacy of the technique in function optimization. DeJong's work, in particular, establishes the genetic algorithm method as a robust—broadly applicable yet efficient—search technique as compared to several traditional schemes. Subsequent application of genetic algorithms to the search problems of pipeline engineering (Goldberg 1983), VLSI (very large scale integration) microchip layout (Smith and Davis 1985), structural optimization (Goldberg and Samtani 1986), job shop scheduling (Davis 1985), medical image processing (Grefenstette and Fitzpatrick 1985), and machine learning

1Asst. Prof., Dept. of Engrg. Mech., The Univ. of Alabama, Tuscaloosa, AL 35487.

2Grad. Research Asst., Dept. of Engrg. Mech., The Univ. of Alabama, Tuscaloosa, AL 35487.

Note.—Discussion open until September 1, 1987. To extend the closing date one month, a written request must be filed with the ASCE Manager of Journals. The manuscript for this paper was submitted for review and possible publication on June 3, 1986. This paper is part of the Journal of Computing in Civil Engineering, Vol. 1, No. 2, April, 1987. ©ASCE, ISSN 0887-3801/87/0002-0128/$01.00. Paper No. 21436.


and artificial intelligence (Holland and Reitman 1978; Booker 1982; Goldberg 1983; Wilson 1985) adds considerable evidence to the claim that genetic algorithms are broadly based. A recent conference proceedings (Grefenstette 1985) surveys current thinking and a recently completed bibliography (Goldberg and Thomas 1986) references important contributions to the literature of genetic algorithms.

In this paper, the pipeline problem considered is outlined. Next, the mechanics and power of a simple genetic algorithm are examined. Computer results in the pipeline problem show that the genetic algorithm obtains very near-optimal solutions after exploring a very small number of the operating alternatives. Pascal computer code excerpts are presented to provide detailed insight into the workings of this important technique.

PIPELINE OPTIMIZATION PROBLEM

Suppose there is a serial products pipeline consisting of 10 pipes and 10 pump stations with 4 pumps in series within each station, as depicted schematically in Fig. 1. The pipeline is subjected to maximum discharge pressure and maximum and minimum suction pressure constraints:

Ps_i ≤ Psmax_i  (1a)

Ps_i ≥ Psmin_i  (1b)

Pd_i ≤ Pdmax_i  (1c)

where i = 1, 2, ..., 10. These constraints result from safety and contractual considerations. Our objective is to deliver a specified flow rate, Q_0, starting from an initial upstream pressure, P_0, and to maintain all pressures within specified levels using a minimum total power on the whole system:

min Σ_{j=1}^{40} HP_j x_j  (2)

where HP = power consumed by pump; x = status of pump (x = 1 means the pump is on; x = 0 means the pump is off); and j = pump index subscript.

[Figure: supply feeding stations 1 through 10, each with 4 pumps in series, connected by pipes to the delivery terminal]

FIG. 1.—Schematic of Steady State, Serial Liquid Pipeline Problem


For a known flow rate, steady state pressure loss in each pipe may be calculated using the Darcy-Weisbach and Colebrook-White equations (Streeter and Wylie 1984). For a known pumping unit with known head rise, flow, and efficiency, standard equations are used to calculate the necessary power:

HP_j = γ Q_0 H_j / (550 e_j)  (3)

where HP = power consumed (horsepower); H = total dynamic head rise across pump (ft); Q_0 = flow (cfs); γ = specific weight of fluid (lb/ft³); e = unit efficiency; j = pump index subscript; and the factor 550 converts ft·lb/sec to horsepower.

Together, these relationships specify the pressure-flow-power behavior of the serial pipeline system. For the purposes of this paper, think of these relationships as a black box system in which a set of 40 pump status variables x_j is specified as input and a power consumption calculation and constraint violations are received as output.
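To make this black-box view concrete, the following sketch marches station by station, accumulating power and squared constraint violations. It is a minimal illustration, not code from the paper: the routine name evalpipeline, the constant p0, and the arrays prise, hp, ploss, psmin, psmax, and pdmax (assumed filled from Tables 1 and 2) are hypothetical, and chromosome is the bit-string type defined in the code excerpts later in the paper.

{ Hypothetical sketch of the black-box pipeline evaluation }
procedure evalpipeline(var c:chromosome; var power, viol:real);
var i, j:integer;
    ps, pd:real;                   { Suction and discharge pressures }
begin
  power := 0.0; viol := 0.0;
  ps := p0;                        { Initial upstream pressure P_0 }
  for i := 1 to 10 do begin        { March down the 10 stations }
    if ps > psmax[i] then viol := viol + sqr(ps - psmax[i]);  { Eq. 1a }
    if ps < psmin[i] then viol := viol + sqr(psmin[i] - ps);  { Eq. 1b }
    pd := ps;
    for j := 4*(i - 1) + 1 to 4*i do    { 4 pumps in series per station }
      if c[j] then begin
        pd := pd + prise[j];            { Pressure rise of pump j (Table 2) }
        power := power + hp[j];         { Power consumed by pump j (Table 2) }
      end;
    if pd > pdmax[i] then viol := viol + sqr(pd - pdmax[i]);  { Eq. 1c }
    ps := pd - ploss[i];           { Steady state pipe loss (Table 1) }
  end;
end;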

GENETIC ALGORITHMS—HOW ARE THEY DIFFERENT AND HOW DO THEY WORK?

To use a genetic algorithm on this or any other problem, our thinking must be adjusted to some GA differences. Genetic algorithms are different from the normal search methods encountered in engineering optimization in the following ways:

1. GAs work with a coding of the parameter set, not with the parameters themselves.
2. GAs search from a population of points, not from a single point.
3. GAs require only payoff (objective function) information, not trend, derivative, or other auxiliary data.
4. GAs use probabilistic transition rules, not deterministic transition rules.

Genetic algorithms require the natural parameter set of the optimization problem to be coded as a finite length string. For example, in the pipeline problem, the decision variables are the set of 40 pump status variables, x_j; x_j = 1 means pump j is on and x_j = 0 means pump j is off. A genetic algorithm requires that alternative solutions be coded as strings. For this problem a very simple coding suggests itself: simply concatenate successive x_j values to form a bit string of length l = 40, i.e., x_1 x_2 x_3 x_4 ... x_39 x_40. As a further illustration of this coding, the string 1010101010...101010 is that particular operation where all odd numbered pumps are on and all even numbered pumps are off. In other problems, finite string codings may require more creative mappings than the current problem. Mapped fixed point codes, floating point codes, Gray codes, and a variety of hybrid coding schemes have been used successfully in a variety of problems. Because GAs work directly with the underlying code, they are difficult to fool; they do not depend upon continuity of the parameter space or the existence of derivatives.
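As a minimal illustration of this coding (ours, not the paper's), the helpers below map station and pump indices to bit positions in the chromosome type used by the code excerpts later in the paper; the function names are hypothetical:

{ Hypothetical helpers for the 40-bit pump status coding }
function pumpindex(station, pump:integer):integer;
begin
  pumpindex := 4*(station - 1) + pump;    { 4 pumps per station }
end;

function pumpon(var c:chromosome; station, pump:integer):boolean;
begin
  pumpon := c[pumpindex(station, pump)];  { true = pump on, false = off }
end;

Under this coding, the alternating string 1010...10 turns on the first and third pump of every station and turns off the second and fourth.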

Genetic algorithms work iteration by iteration successively generating and testing a population of strings. The process is similar to a natural


population of biological creatures where successive generations of creatures are conceived, born, and raised until they are ready to reproduce. This population-by-population approach is different from the more typical search methods of engineering optimization. In many search methods, we move gingerly from a single point in the decision space to the next, using some decision rule to choose how to get to the next point. This point-by-point method is dangerous because it often locates false peaks in multimodal (many peaked) search spaces. GAs work from a data base of points simultaneously (a population of strings), climbing many peaks in parallel, thus reducing the probability of finding a false peak. To get a starting population, some number of strings (typically 50 to 100) is generated at random, or, with some special prior knowledge of good regions of the decision space, seeds may be planted within the population to help things along. Regardless of the starting population, the operators of genetic algorithm search have found high performance strings quickly in all applications studies to date.
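A random start is a two-line loop. Here is a minimal sketch (ours, not the paper's), assuming the global declarations and the biased coin-toss function flip described with the code excerpts later in the paper:

{ Sketch: fill oldpop with popsize random bit strings }
procedure initpop;
var i, j:integer;
begin
  for i := 1 to popsize do
    for j := 1 to lchrom do
      oldpop[i].chrom[j] := flip(0.5);  { Heads = 1 (on), tails = 0 (off) }
end;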

A genetic algorithm only requires payoff (objective function value) information for each of the structures it generates and tests. By contrast, many methods of engineering optimization require derivative information or, worse yet, complete knowledge of the problem structure and parameters. The payoff-only nature of genetic algorithms places a severe restriction on GA search, one that is surprisingly overcome with the efficient speculation we shall soon observe.

Later, the fundamental operators of a genetic algorithm will be examined, but first, we point out one final difference between GAs and more typical search techniques. Genetic algorithms use probabilistic operators to guide their search. By contrast, most common engineering search schemes are deterministic in nature. The explicit use of chance in a directed search process seems strange, at first, but nature is full of examples. Furthermore, we must emphasize that the use of chance in a search does not imply that the scheme is some simple random walk; genetic algorithms are not coin flipping by some fancy name. In fact, we shall soon see how GAs use chance to motivate a rapid and broad search scheme in our practical pipeline problem.

To recap the picture of genetic algorithms thus far, we have assumed that decision variables may be coded as some finite length string over a finite alphabet, often the binary alphabet. Each string is of length l and a population of strings contains a total of n strings. We create an initial population of strings either at random or through the use of specialized information. The genetic algorithm is applied generation by generation using payoff information and randomized operators to guide the creation of new string populations. With this background, we move to define the mechanics of the genetic algorithm operations, which enable GAs to generate a new and improved population of strings from an old population.

A simple genetic algorithm which gives good results is composed of three operators: (1) reproduction; (2) crossover; and (3) mutation.

Reproduction is an operator where an old string is copied into the new population according to that string's fitness. Here, fitness is defined as the nonnegative figure of merit (objective function value) being maximized. Thus, under reproduction, more highly fit strings (those with


better objective function values) receive higher numbers of offspring (copies) in the mating pool. There are many different ways to implement the reproduction operator; almost any method that biases selection toward fitness seems to work well. In this study, we simply give a proportionately higher probability of reproduction selection, pselect_i, to those strings with higher fitness values f_i, according to the following distribution:

pselect_i = f_i / Σ f_j  (4)

For example, a string whose fitness is 8 in a population whose fitness values sum to 40 is selected with probability 0.2 on each draw. Reproduction is thus the survival-of-the-fittest or emphasis step of the genetic algorithm. The best strings make more copies for mating than the worst.

After reproduction, simple crossover may proceed in two steps. First, newly reproduced strings in the mating pool are mated at random. Second, each pair of strings crosses over as follows: an integer position k along the string is selected uniformly at random in the interval (1, l - 1). Two new strings are created by swapping all characters between positions 1 and k inclusively.

To see how this works, consider two strings, A and B, of length 7 mated at random from the mating pool created by previous reproduction:

A = a1 a2 a3 a4 a5 a6 a7  (5)

B = b1 b2 b3 b4 b5 b6 b7  (6)

Suppose the roll of a die turns up a four (k = 4). The resulting crossover yields two new strings A' and B' following the partial exchange:

A' = b1 b2 b3 b4 a5 a6 a7  (7)

B' = a1 a2 a3 a4 b5 b6 b7  (8)

The mechanics of the reproduction and crossover operators are surprisingly simple, involving nothing more complex than string copies and partial string exchanges; however, together the emphasis step of reproduction and the structured though randomized information exchange of crossover give genetic algorithms much of their power. At first, this seems surprising. How can such simple (computationally trivial) operators result in anything useful, let alone a rapid and relatively robust search mechanism? Furthermore, doesn't it seem a little strange that chance should play such a fundamental role in a directed search process? The answer to the second question was well recognized by the mathematician J. Hadamard (1945):

We shall see a little later that the possibility of imputing discovery to pure chance is already excluded. . . . On the contrary, that there is an intervention of chance but also a necessary work of unconsciousness, the latter implying and not contradicting the former. . . . Indeed, it is obvious that invention or discovery, be it in mathematics or anywhere else, takes place by combining ideas.

The suggestion here is that while discovery is not a result of pure chance,


it is almost certainly guided by directed serendipity. Furthermore, Hadamard hints that a proper role for chance is to cause the juxtaposition of different notions. It is interesting that genetic algorithms adopt Hadamard's mix of direction and chance in a manner that efficiently builds new solutions from the best partial solutions of previous trials.

To see this, consider a population of n strings over some appropriate alphabet, coded so that each is a complete IDEA or prescription for performing a particular task (in the example, each string is a description of how to operate all 40 pumps on a pipeline). Substrings within each string (IDEA) contain various NOTIONS of what's important or relevant to the task. Viewed in this way, the population doesn't just contain a sample of n IDEAS; rather, it contains a multitude of NOTIONS and rankings of those NOTIONS for task performance. Genetic algorithms carefully exploit this wealth of information about important NOTIONS by: (1) reproducing quality NOTIONS according to their performance; and (2) crossing these NOTIONS with many other high performance NOTIONS from other strings. Thus, the act of crossover with previous reproduction speculates on new IDEAS constructed from the high performance building blocks (NOTIONS) of past trials.

If reproduction according to fitness combined with crossover gives genetic algorithms the bulk of their processing power, what then is the purpose of the mutation operator? Not surprisingly, there is much confusion about the role of mutation in genetics (both natural and artificial). We find that mutation plays a decidedly secondary role in the operation of genetic algorithms. Mutation is needed in a genetic algorithm search because even though reproduction and crossover effectively search and recombine extant NOTIONS, occasionally they may become overzealous and lose some potentially useful genetic material (1s or 0s at particular locations). The mutation operator protects against such an irrecoverable loss. Mutation is the occasional (with specified mutation probability p_mutation) random alteration of a string position. In a binary code, this simply means changing a 1 to a 0 and vice versa. By itself, mutation is a random walk through the string space. When used sparingly with reproduction and crossover, it is an insurance policy against premature loss of important NOTIONS. That the mutation operator plays a secondary role simply means that the frequency of mutation needed to obtain good results in empirical studies is on the order of 1 mutation per thousand bit (position) transfers. Mutation rates are similarly small in natural populations, which leads to the conclusion that mutation is appropriately considered a secondary mechanism.

The processing power of genetic algorithms may be understood in more rigorous terms by examining the growth rates of the various schemata or similarity templates contained in a population (Holland 1975). We will not dwell on the details of schemata growth here; however, we note in passing that the explicit manipulation of n (population size) strings during a single generation results in the useful processing of more than n³ similarity templates. This highly leveraged parallel processing is so important we give it a special name, "implicit parallelism," because even though we work in a physical space of n structures we get implicit processing of many more similarity templates by the action of simple operators.


It is implicit parallelism that gives GAs the rapid processing capability seen in the next section.

GENETIC ALGORITHM OPTIMIZES PIPELINE

In this section, the simple three-operator genetic algorithm is applied to the on-off control of 40 pumps in 10 pump stations along a serial pipeline. The pipeline coefficients used for this problem are presented in Table 1. Four pumps are contained in each of the 10 pump stations. Pump heads, efficiencies, and power consumptions are presented in Table 2.

The pipeline model of a previous section has been programmed in Pascal. Constraints have been adjoined to the problem with an exterior penalty function (Avriel 1976); any constraint violation is squared and added to the cost after multiplication by an appropriate penalty coefficient λ. A nominal value of penalty coefficient λ = 0.5 horsepower/(psi)² (7.847 × 10⁻⁹ kW·m⁴/N²) has been used and held constant throughout the runs.
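In code, this exterior penalty method is one line on top of the pipeline model. The sketch below is ours, not the paper's, and assumes the hypothetical evalpipeline routine sketched earlier, which returns the raw power and the sum of squared constraint violations:

{ Sketch: exterior penalty cost for one string }
function penalizedcost(var c:chromosome):real;
const lambda = 0.5;              { Penalty coefficient, horsepower/(psi squared) }
var power, viol:real;
begin
  evalpipeline(c, power, viol);  { Black-box pipeline model }
  penalizedcost := power + lambda*viol;
end;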

The simple genetic algorithm of a previous section has also been programmed in Pascal. In the next section, some code excerpts will be examined to show that the operations involved are truly straightforward. The genetic algorithm is run with the following parameters:

p_crossover = 0.7  (9)

p_mutation = 0.01  (10)

n_population = 100  (11)

These values are consistent with DeJong's (1975) suggestions for high crossover probability, low mutation probability, and moderate population size. Good results have been obtained in other empirical studies with relatively small population sizes (n ≈ 35-200), mutation probabilities inversely proportional to the population size [p_mutation ≈ (0.1/n) to (5/n)], and high crossover probabilities (p_crossover ≈ 0.5-1.0).

TABLE 1.—Pipeline Coefficients

Station   Pipe length   Psmin        Psmax        Pdmax        Ploss
number    (ft)          (psi)        (psi)        (psi)        (psi)
(1)       (2)           (3)          (4)          (5)          (6)
1         1.5930E+05    0.000E+00    2.0000E+02   9.0000E+02   3.0985E+02
2         7.9675E+04    2.500E+01    2.0000E+02   9.0000E+02   1.5498E+02
3         1.3274E+05    2.500E+01    2.0000E+02   8.0000E+02   2.5819E+02
4         1.5930E+05    2.500E+01    4.0000E+02   9.0000E+02   3.0985E+02
5         7.9675E+04    2.500E+01    2.5000E+02   9.0000E+02   1.5498E+02
6         1.5930E+05    2.500E+01    3.5000E+02   9.0000E+02   3.0985E+02
7         1.5930E+05    2.500E+01    4.5000E+02   1.1000E+03   3.0985E+02
8         1.4604E+05    2.500E+01    5.5000E+02   1.1000E+03   2.8407E+02
9         4.2504E+04    2.500E+01    4.0000E+02   1.1000E+03   8.2675E+01
10        2.6558E+04    2.500E+01    4.0000E+02   1.1000E+03   5.1658E+01

Note: Q_0 = 19 cfs; P_0 = 0 psig; specific gravity = 0.86; diameter = 2.2 ft; f = 0.0296; 1 cfs = 0.028 m³/s; 1 psi = 6,894.4 N/m²; and 1 ft = 0.3048 m.


TABLE 2.—Pump Coefficients

Pump      Pressure rise   e, efficiency   HP, power
number    (psi)                           (horsepower)
(1)       (2)             (3)             (4)
1         1.7322E+02      9.7890E-01      8.8026E+02
2         1.7322E+02      9.7890E-01      8.8026E+02
3         1.7322E+02      9.7890E-01      8.8026E+02
4         8.6620E+01      9.7890E-01      4.4018E+02
5         1.9178E+02      9.8100E-01      9.7250E+02
6         1.9178E+02      9.8100E-01      9.7250E+02
7         1.9178E+02      9.8100E-01      9.7250E+02
8         9.5890E+01      9.8100E-01      4.8625E+02
9         1.9178E+02      9.8100E-01      9.7250E+02
10        1.9178E+02      9.8100E-01      9.7250E+02
11        1.9178E+02      9.8100E-01      9.7250E+02
12        9.5890E+01      9.8100E-01      4.8625E+02
13        1.0007E+02      9.6300E-01      5.1693E+02
14        1.0007E+02      9.6300E-01      5.1693E+02
15        1.0007E+02      9.6300E-01      5.1693E+02
16        5.0040E+01      9.6300E-01      2.5849E+02
17        1.1035E+02      9.6600E-01      5.6826E+02
18        1.1035E+02      9.6600E-01      5.6826E+02
19        1.1035E+02      9.6600E-01      5.6826E+02
20        5.5180E+01      9.6600E-01      2.8416E+02
21        2.0982E+02      9.8300E-01      1.0618E+03
22        2.0982E+02      9.8300E-01      1.0618E+03
23        2.0982E+02      9.8300E-01      1.0618E+03
24        1.0491E+02      9.8300E-01      5.3090E+02
25        2.2913E+02      9.8400E-01      1.1584E+03
26        2.2913E+02      9.8400E-01      1.1584E+03
27        2.2913E+02      9.8400E-01      1.1584E+03
28        1.1456E+02      9.8400E-01      5.7915E+02
29        1.6951E+02      9.7000E-01      8.6931E+02
30        1.6951E+02      9.7000E-01      8.6931E+02
31        1.6951E+02      9.7000E-01      8.6931E+02
32        8.4750E+01      9.7000E-01      4.3463E+02
33        2.1132E+02      9.8000E-01      1.0727E+03
34        2.1132E+02      9.8000E-01      1.0727E+03
35        2.1132E+02      9.8000E-01      1.0727E+03
36        1.0566E+02      9.8000E-01      5.3634E+02
37        1.9231E+02      9.6000E-01      9.9652E+02
38        1.9231E+02      9.6000E-01      9.9652E+02
39        1.9231E+02      9.6000E-01      9.9652E+02
40        9.6150E+01      9.6000E-01      4.9823E+02

Note: 1 horsepower = 0.746 kW; 1 psi = 6,894.4 N/m².

It is important to remember, however, that genetic algorithms are not highly sensitive to these parameters. Nominal values have worked well in applications studies to date.

Fig. 2 shows the best-of-generation cost (penalized power consumption) for three independent runs (different starting populations) as it decreases with successive generations.


[Figure: best generation results by genetic algorithm; cost versus generation for three runs]

FIG. 2.—Best-of-Generation Cost versus Generation

Recall that each generation represents the creation of n = 100 strings, of which n × p_crossover = 100 × 0.70 = 70 are new. It is interesting that near-optimal results are obtained after only 50 generations (approximately 3,500 new function evaluations).

Fig. 3 shows the generation average cost versus generation for the three independent runs. Of course, population average cost lags population best cost, but the trend toward improvement is clear in all three cases.

To get some physical feel for the type of near-optimal solution generated by the genetic algorithm, the best solution of run 2 is considered in Fig. 4.

[Figure: average generation results by genetic algorithm; cost versus generation for runs 1-3]

FIG. 3.—Average Generation Cost versus Generation


[Figure: pressure profile comparison (MIP vs. genetic) by station number, with Pdmax, Psmax, and Psmin limits shown]

FIG. 4.—Pressure Profile Comparison: GA versus Integer Programming Solution

In this figure, the GA-generated pressure profile is compared to the optimal result from integer programming. Maximum and minimum suction pressure constraints, as well as maximum discharge pressure constraints, are shown as dashed lines. The GA-generated and optimal solutions show some similar trends, although there are differences resulting from minor excursions in operation at several pump stations. Nonetheless, in all three cases the genetic algorithm finds very near-optimal operations.

Examining these results further, we see that in all three runs we come very close to the optimal value of 1.118 × 10^4 horsepower (8,337 kW). This value was independently calculated using a branch and bound, mixed integer programming code available through IBM called MIP/370. In Table 3, the best cost of each run and its percentage difference from the optimal value are presented. In all three cases very near-optimal results were obtained, even though the size of the search space is huge (2^40 ≈ 1.1 × 10^12) and the number of points explored is small (approximately 3,500).

TABLE 3.—Run Comparisons

Run        Best power       Difference from
number     (horsepower)     optimal (%)
(1)        (2)              (3)
1          1.133 × 10^4     1.34
2          1.126 × 10^4     0.72
3          1.120 × 10^4     0.18
Average    1.126 × 10^4     0.72

Note: 1 horsepower = 0.746 kW.


To put this performance in perspective, if we were to search for the best person among the world's 4.5 billion people as rapidly as the genetic algorithm, we would only talk to 15 people before making our choice.

CODE EXCERPTS FROM SIMPLE GENETIC ALGORITHM

In this section, several Pascal computer excerpts from the genetic algorithm used in this paper are examined. Specifically, important data declarations and three salient procedures, i.e., select, crossover, and mutation, are explained.

Fig. 5 is an abstracted portion of the data declarations of the genetic algorithm. The artificial chromosomes (bit strings) are defined as arrays of alleles (bits of boolean data type). The individual is a record consisting of the string itself plus several variables of type real: the fitness, the cost (penalized), and the objective function value (unpenalized). The population type is simply an array of individual records, and two populations are created: an old population (oldpop) and a new population (newpop). Other GA parameters are presented, such as the string length (lchrom), population size (popsize), generation number (gen), maximum generation number (maxgen), probability of crossover (pcross), and probability of mutation (pmutation).

The code excerpt shown in Fig. 6 implements a single selection according to the probability distribution of Eq. 4, pselect_i = f_i / Σ f_j. The code assumes that the sum of the fitness values, Σ f_j, has been calculated and stored elsewhere for the old population, oldpop.

const maxpop = 100;
      maxstring = 40;

type allele = boolean;                              { Allele = bit position }
     chromosome = array[1..maxstring] of allele;    { String of bits }
     individual = record
                    chrom:chromosome;                 { Genotype = bit string }
                    x:real;                           { Phenotype = unsigned integer }
                    objective, fitness, cost:real;    { Obj., fitness & cost fcns. }
                    parent1, parent2, xsite:integer;  { Parents & cross pt. }
                  end;
     population = array[1..maxpop] of individual;

var oldpop, newpop:population;             { Two non-overlapping populations }
    popsize, lchrom, gen, maxgen:integer;  { Integer global variables }
    pcross, pmutation, sumfitness:real;    { Real global variables }
    nmutation, ncross:integer;             { Integer statistics }
    avg, max, min:real;                    { Real statistics }

FIG. 5.—Data Declarations from Simple Genetic Algorithm

function select(popsize:integer; sumfitness:real;
                var pop:population):integer;
{ Select a single individual via roulette wheel selection }
var rand, partsum:real;  { Random point on wheel, partial sum }
    j:integer;           { Population index }
begin
  partsum := 0.0; j := 0;       { Zero out counter and accumulator }
  rand := random * sumfitness;  { Wheel point calc. uses random number [0,1] }
  repeat                        { Find wheel slot }
    j := j + 1;
    partsum := partsum + pop[j].fitness;
  until (partsum >= rand) or (j = popsize);
  select := j;                  { Return individual number }
end;

FIG. 6.—Function Select (Reproduction) from Simple Genetic Algorithm


procedure crossover(var parent1, parent2, child1, child2:chromosome;
                    var lchrom, ncross, nmutation, jcross:integer;
                    var pcross, pmutation:real);
{ Cross 2 parent strings, place in 2 child strings }
var j:integer;
begin
  if flip(pcross) then begin       { Do crossover with p(cross) }
    jcross := rnd(1, lchrom - 1);  { Cross between 1 and l-1 }
    ncross := ncross + 1;          { Increment crossover counter }
  end else                         { Otherwise set cross site to force mutation }
    jcross := lchrom;
  { 1st exchange, 1 to 1 and 2 to 2 }
  for j := 1 to jcross do begin
    child1[j] := mutation(parent1[j], pmutation, nmutation);
    child2[j] := mutation(parent2[j], pmutation, nmutation);
  end;
  { 2nd exchange, 1 to 2 and 2 to 1 }
  if jcross <> lchrom then  { Skip if cross site is lchrom--no crossover }
    for j := jcross + 1 to lchrom do begin
      child1[j] := mutation(parent2[j], pmutation, nmutation);
      child2[j] := mutation(parent1[j], pmutation, nmutation);
    end;
end;

FIG. 7.—Procedure Crossover from Simple Genetic Algorithm

function mutation(alleleval:allele; pmutation:real;
                  var nmutation:integer):allele;
{ Mutate an allele w/ pmutation, count number of mutations }
var mutate:boolean;
begin
  mutate := flip(pmutation);    { Flip the biased coin }
  if mutate then begin
    nmutation := nmutation + 1;
    mutation := not alleleval;  { Change bit value }
  end else
    mutation := alleleval;      { No change }
end;

FIG. 8.—Function Mutation from Simple Genetic Algorithm

Also, the routine assumes the existence of a pseudo-random number generator (random), which generates numbers on the real interval [0,1]. The operation is simply a linear search until the appropriate slot of the weighted roulette wheel is found. A binary search may be used to speed this operation if cumulative distribution function values are stored.
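A minimal sketch of that binary-search variant (ours, not the paper's; the cumulative fitness array cumfit is an assumed addition, filled once per generation so that cumfit[j] = f_1 + ... + f_j):

type fitarray = array[1..maxpop] of real;  { Cumulative fitness values }

function fastselect(popsize:integer; sumfitness:real;
                    var cumfit:fitarray):integer;
{ Roulette wheel selection by binary search, O(log popsize) }
var rand:real;
    lo, hi, mid:integer;
begin
  rand := random * sumfitness;  { Random point on wheel }
  lo := 1; hi := popsize;
  while lo < hi do begin        { Find first j with cumfit[j] >= rand }
    mid := (lo + hi) div 2;
    if cumfit[mid] >= rand then hi := mid else lo := mid + 1;
  end;
  fastselect := lo;             { Return individual number }
end;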

The crossover operator is nicely implemented in the code portion shown in Fig. 7. In this procedure, the two parent chromosomes generate two children chromosomes as described earlier. Two functions are assumed in this routine, flip and rnd. Flip returns true with a specified probability, while rnd returns a random integer between specified lower and upper limits.

The last bit of code presented implements the mutation operator for a single bit, as shown in Fig. 8. Like crossover, this routine uses the biased coin toss function, flip. Otherwise, the implementation is quite straightforward.

Together, these code excerpts form the core of the code used in this study. It is interesting to note the surprising performance of such simple operators.
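To show how the pieces fit together, the following sketch of a generation loop is consistent with the excerpts above but is our illustration, not code from the paper; decoding, fitness evaluation, and statistics bookkeeping are omitted:

{ Sketch: create newpop from oldpop one mated pair at a time }
procedure generation;
var j, mate1, mate2, jcross:integer;
begin
  j := 1;
  repeat
    mate1 := select(popsize, sumfitness, oldpop);  { Pick two parents }
    mate2 := select(popsize, sumfitness, oldpop);
    crossover(oldpop[mate1].chrom, oldpop[mate2].chrom,
              newpop[j].chrom, newpop[j + 1].chrom,
              lchrom, ncross, nmutation, jcross, pcross, pmutation);
    { Record parentage and cross site, then advance by two children }
    newpop[j].parent1 := mate1;     newpop[j].parent2 := mate2;
    newpop[j].xsite := jcross;
    newpop[j + 1].parent1 := mate1; newpop[j + 1].parent2 := mate2;
    newpop[j + 1].xsite := jcross;
    j := j + 2;
  until j > popsize;
end;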

SUMMARY AND CONCLUSIONS

In this paper, the mechanics, power, and application of a genetic algorithm in the approximate solution of a pipeline engineering optimization problem have been examined.


A simple GA consisting of reproduction, crossover, and mutation finds very near-optimal operating pump schedules quickly after examining a minute portion of the operating alternatives in a 40-pump, serial liquid pipeline problem.

This work and other current investigations demonstrate that the genetic algorithm method is a broad spectrum, approximate search procedure with application in diverse problem areas. Because the procedure works with a coding of the decision variables instead of the decision variables themselves, it is difficult to fool. The method does not depend upon underlying continuity of the search space and requires no information other than payoff values. Furthermore, GAs work from a population of points and so have a more global perspective than many engineering optimization procedures. Together, these qualities should permit the extension of these methods to more complex, high-dimensional problems in the near future.

ACKNOWLEDGMENTS

This material is based upon work supported by the National Science Foundation under grant MSM-8451610. The writers also wish to acknowledge the programming skill and assistance provided by Clay Bridges, an undergraduate student assistant at the University of Alabama, in the performance of the MIP/370 runs.

APPENDIX I.—REFERENCES

Avriel, M. (1976). Nonlinear programming: analysis and methods. Prentice-Hall, Englewood Cliffs, N.J.

Booker, L. B. (1982). "Intelligent behavior as an adaptation to the task environment," dissertation presented to the University of Michigan, at Ann Arbor, Mich., in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Davis, L. (1985). "Job shop scheduling with genetic algorithms." Proceedings of an International Conference on Genetic Algorithms and Their Applications. Carnegie-Mellon Univ., Pittsburgh, Pa., 136-140.

DeJong, K. A. (1975). "Analysis of the behavior of a class of genetic adaptive systems," dissertation presented to the University of Michigan, at Ann Arbor, Mich., in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Goldberg, D. E. (1983). "Computer-aided gas pipeline operation using genetic algorithms and rule learning," dissertation presented to the University of Michigan, at Ann Arbor, Mich., in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Goldberg, D. E., and Samtani, M. P. (1986). "Engineering optimization via genetic algorithm." Proceedings of the Ninth Conference on Electronic Computation, ASCE, New York, N.Y., 471-482.

Goldberg, D. E., and Thomas, A. L. (1986). "Genetic algorithms: A bibliography 1962-1986." TCGA Report No. 86001, The Clearinghouse for Genetic Algorithms, Department of Engineering Mechanics, Univ. of Alabama, University, Ala.

Grefenstette, J. J., Ed. (1985). Proceedings of an International Conference on Genetic Algorithms and Their Applications. Carnegie-Mellon Univ., Pittsburgh, Pa.

Grefenstette, J. J., and Fitzpatrick, J. M. (1985). "Genetic search with approximate function evaluation." Proceedings of an International Conference on Genetic Algorithms and Their Applications, Carnegie-Mellon Univ., Pittsburgh, Pa., 112-120.


Hadamard, J. (1945). The psychology of invention in the mathematical field. Princeton University Press, Princeton, N.J.

Holland, J. H. (1975). Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor, Mich.

Holland, J. H., and Reitman, J. S. (1978). "Cognitive systems based on adaptive algorithms." Pattern directed inference systems, Academic Press, New York, N.Y., 313-329.

Hollstien, R. B. (1971). "Artificial genetic adaptation in computer control systems," dissertation presented to the University of Michigan, at Ann Arbor, Mich., in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Smith, D., and Davis, L. (1985). Layout synthesis of random logic using genetic algorithms. Manuscript submitted for publication.

Streeter, V. L., and Wylie, E. B. (1984). Fluid mechanics, 8th ed. McGraw-Hill, New York, N.Y.

Wilson, S. W. (1985). "Knowledge growth in an artificial animal." Proceedings of the 4th Yale Workshop on Applications of Adaptive Systems Theory. Yale Univ., New Haven, Conn., 98-104.

APPENDIX II.—NOTATION

The following symbols are used in this paper:

A = coded string;
B = coded string;
e = pump unit efficiency;
f = fitness value;
H = piezometric head;
HP = power consumed;
P = pressure;
p_crossover = probability of crossover;
p_mutation = probability of mutation;
pselect = probability of selection;
n = size of population;
Q = flow;
x = status of pump;
γ = specific weight of fluid; and
λ = penalty coefficient.

Subscripts and Superscripts

d = discharge;
i = index;
j = index;
max = maximum;
min = minimum;
o = initial;
s = suction; and
' = new (forward time step).
