
Journal of Heuristics, 6: 347–360 (2000) © 2000 Kluwer Academic Publishers

Global Multiobjective Optimization Using Evolutionary Algorithms

THOMAS HANNE
Institute for Techno- and Economathematics (ITWM), Dept. of Optimization, Gottlieb-Daimler-Str. 49, 67663 Kaiserslautern, Germany
email: [email protected]

Abstract

Since the 60s, several approaches (genetic algorithms, evolution strategies etc.) have been developed which apply evolutionary concepts for simulation and optimization purposes. Also in the area of multiobjective programming, such approaches (mainly genetic algorithms) have already been used (Evolutionary Computation 3(1), 1–16).

In our presentation, we consider a generalization of common approaches like evolution strategies: a multiobjective evolutionary algorithm (MOEA) for analyzing decision problems with alternatives taken from a real-valued vector space and evaluated according to several objective functions. The algorithm is implemented within the Learning Object-Oriented Problem Solver (LOOPS) framework developed by the author. Various test problems are analyzed using the MOEA: (multiobjective) linear programming, convex programming, and global programming. Especially for ‘hard’ problems with disconnected or local efficient regions, the algorithm seems to be a useful tool.

Key Words: multiobjective optimization, efficiency, stochastic search, evolutionary algorithms, selection mechanism

1. Introduction

Since the 60s, several approaches utilizing the idea of evolution in a technical context, i.e. for optimization and simulation with computers, have been developed independently (Bäck and Schwefel, 1992, 1993). Koza (1992, p. 17) summarizes four central assumptions for such evolutionary algorithms:

“An entity has the ability to reproduce itself. There is a population of such self-reproducing entities. There is some variety among the self-reproducing entities. Some difference in ability to survive in the environment is associated with the variety.”

Evolution strategies (see Bäck, Hoffmeister, and Schwefel, 1991) belong to a class of concepts used to perform robust optimization, which can cope with, for instance, local optima and nondifferentiable or even unknown, only experimentally accessible optimization functions. This concept, mainly developed by Rechenberg (1973) and Schwefel (1977, 1981), is based on a population of feasible alternatives represented by search points in a vector space R^n.

Compared with genetic algorithms (see Holland, 1975; Goldberg, 1989), another popular concept for evolutionary algorithms, evolution strategies are not based on binary coding (bit

strings). Instead, floating-point numbers are used for encoding the variables. This seems to be a more appropriate way of coding continuous variables (like those of MOP problems). While in genetic algorithms recombination is the most important genetic operator, in early evolution strategies mutation was the main operator of evolutionary change. Several other differences between these two concepts are discussed by Hoffmeister and Bäck (1992). Today, however, it is usually assumed that the evolutionary concept, especially the data structure for the entities, should be generalized and adapted to the specific problem situation because “the representation scheme can severely limit the windows by which a system observes its world” (Koza, 1992, p. 63). Usually, such more general approaches are called evolutionary algorithms (Bäck and Hoffmeister, 1991), evolutionary computation, or evolution programs (Michalewicz, 1994, p. 1).

In our article, we will discuss the potential of evolutionary algorithms (mainly based on ideas coming from evolution strategies) for multiobjective programming (MOP). Theoretical and methodological aspects of MOP are discussed, for instance, by Sawaragi, Nakayama, and Tanino (1985), Steuer (1986), Vincke (1992), and Zeleny (1982). While in multiobjective programming linear or convex problems (similarly to the situation in scalar optimization) are well studied and methodologically covered, there is not much research on ‘harder’ problems, i.e. those with multiple disconnected efficient regions (see Section 5.4 for details). In contrast to genetic algorithms, evolution strategies have so far been only scarcely developed and applied for multiobjective programming (see Kursawe, 1991, 1992). We will outline the concept and implementation of an evolutionary algorithm approach to multiobjective programming and apply it to several test problems.

2. Evolution strategies for scalar optimization

Usually, an evolution strategy is initialized with a population of random (feasible) entities (also called alternatives, individuals, or chromosomes) which represent potential solutions to the given (optimization) problem. These entities are reproduced such that attributes from different parents can be passed on to an offspring (recombination, crossing-over). Random errors (mutations) can also occur during reproduction. For each entity a fitness function is defined which possibly depends on the environment. The offspring (possibly including the parents) are evaluated according to their fitness such that only ‘better’ ones are selected as parents of the next generation (elitist selection). In some approaches, the reproduction rate or probability of reproduction of an entity depends on its fitness (non-elitist selection). This process is iterated until a stopping criterion is fulfilled, e.g. a maximum number of generations.
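The loop described above can be sketched in Python. This is an illustrative skeleton only; the callback names (`init_pop`, `reproduce`, etc.) and the scalar example are our own, not part of the paper's implementation:

```python
import random

def evolutionary_loop(init_pop, reproduce, mutate, fitness, select,
                      max_generations=100):
    # Generic skeleton: reproduce, mutate, evaluate, select, iterate
    # until the stopping criterion (here: a generation limit) is met.
    population = init_pop()
    for _ in range(max_generations):
        offspring = [mutate(c) for c in reproduce(population)]
        population = select(offspring, population, fitness)
    return population

# Illustrative scalar example: maximize f(x) = -(x - 3)^2 with an
# elitist (5+10)-style selection (offspring and parents compete).
random.seed(0)
mu, lam = 5, 10
f = lambda x: -(x - 3.0) ** 2
result = evolutionary_loop(
    init_pop=lambda: [random.uniform(-10, 10) for _ in range(mu)],
    reproduce=lambda pop: [random.choice(pop) for _ in range(lam)],
    mutate=lambda x: x + random.gauss(0.0, 0.5),
    fitness=f,
    select=lambda off, par, fit: sorted(off + par, key=fit, reverse=True)[:mu],
)
```

Passing selection as a callback is what later allows the multiobjective variants discussed in Section 4 to be plugged into the same loop.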

Evolution strategies were originally defined as a stochastic search method for scalar (nonlinear) optimization problems (Rechenberg, 1973; Schwefel, 1977) of the following form:

(SOP)  max_{a ∈ A} f(a)  for f : R^n → R,

with

A = {a ∈ R^n : g(a) ≤ 0},  g : R^n → R^m.

The i-th individual of generation t can be written as an (at least) n-dimensional vector a^{it} with components a^{it}_1, …, a^{it}_n representing an alternative as a point in R^n. Possibly, additional components a^{it}_k, k > n, are used for storing control information of the evolutionary process (strategic components). The (µ + λ)-evolution strategy (µ, λ ∈ N) starts with a population (t = 0) of µ (feasible) parents a^{i0} ∈ A which produce λ offspring. During the reproduction, mutations occur as (0, σ^t)-normally distributed vector-valued random variables z^{it} ∈ R^n, such that offspring a^{i,t+1}, i ∈ {1, …, λ}, is calculated as

a^{i,t+1} = a^{jt} + z^{it}

for j ∈ {1, …, µ}. For each offspring the fitness function f is evaluated. If a restriction g is violated, the fitness can be modified using a penalty function, e.g.

f_new(a) = f(a)    if g(a) ≤ 0,
           const   otherwise,

with const < f(a) for all a ∈ A. Alternatively, only ‘feasible mutations’ a^{i,t+1} ∈ A are allowed. This is especially important for the comma-evolution strategy ((µ, λ)-ES) with λ > µ where parents ‘live’ one generation only. Here, possibly more than λ offspring have to be generated to ensure a constant population size of µ feasible alternatives (Schwefel, 1977, p. 176). The µ best of the offspring become parents of the next generation in t + 1. With the alternative (µ + λ)-evolution strategy, the life span of parents is not limited. In the selection step, offspring and parents are considered together such that parents can survive several generations if they are ‘fitter’ than their offspring. (This also prevents a temporary deterioration of population fitness.)

The distribution parameter σ^t = (σ^t_1, …, σ^t_n) ∈ R^n, σ^t_i > 0 for i ∈ {1, …, n}, for the mutations can be interpreted as a step size vector analogously to deterministic search strategies. Based on theoretical considerations, Rechenberg (1973) proposes a 1/5 success rule further specified by Schwefel (1977, p. 128–132). This rule increases the step sizes if, on average, the portion of ‘successful’ offspring (i.e. those with increased fitness) is larger than 1/5. If the portion is less than 1/5, the step sizes are decreased. This step size control does not support direction-specific adaptations. It is only possible to prescribe constant scaling factors for the co-ordinate directions because the σ_i remain in constant proportions (as long as they do not reach a minimal value > 0). The 1/5 rule fails when the objective function has no continuous partial first derivatives (Schwefel, 1977, p. 136f). Because of these problems, Schwefel (1977, p. 165, 261) proposes another, more natural concept of step size control which allows an automatic scaling of the variables: the step size parameters are themselves controlled evolutionarily by adding n step size parameters to the n alternative parameters. Both types of entity parameters are mutated by normally distributed random variables. The step sizes are then controlled indirectly by the selection mechanism (with an unchanged fitness function). Schwefel also discusses some other mutation concepts which, for instance, allow learning search directions independently of the co-ordinate axes.
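The 1/5 success rule can be sketched as a multiplicative update. The adjustment factor `c = 0.85` is an illustrative choice; note how scaling all σ_i by the same factor preserves their proportions, which is exactly the limitation discussed above:

```python
def one_fifth_rule(sigma, success_ratio, c=0.85):
    # Increase all step sizes when more than 1/5 of recent offspring
    # were 'successful' (improved fitness), decrease them otherwise.
    # Scaling every sigma_i by the same factor keeps their proportions
    # constant, so no direction-specific adaptation takes place.
    factor = 1.0 / c if success_ratio > 0.2 else c
    return [s * factor for s in sigma]
```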

Another important mechanism in evolution strategies, introduced by Schwefel (1977, p. 170–173), is recombination. This simulation of sexual reproduction is based on the

idea that the genetic material of an offspring does not come from a single parent but from several (two in nature). Schwefel proposes to choose each component of an offspring vector randomly (with equal probabilities 1/µ) from the parent population. Recombination can also be used for the control parameters of an entity. For stability reasons, intermediary recombination is proposed for these, such that the mean value of two parents is passed on to the offspring.
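Both recombination variants can be sketched directly from the description above (an illustrative sketch, not the paper's code):

```python
import random

def discrete_recombination(parents):
    # Each offspring component is drawn from a uniformly chosen parent
    # (probability 1/mu per component), as proposed by Schwefel.
    n = len(parents[0])
    return [random.choice(parents)[k] for k in range(n)]

def intermediary_recombination(p1, p2):
    # The offspring inherits the componentwise mean of two parents;
    # this is the variant proposed for control parameters.
    return [(a + b) / 2.0 for a, b in zip(p1, p2)]
```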

3. A multiobjective programming problem

A multiobjective programming (MOP) problem (minimization) can be defined as follows:

(MOP)  min_{a ∈ A} f(a)  with f : R^n → R^q, q ≥ 2,

with

A = {a ∈ R^n : g_i(a) ≤ 0, i ∈ {1, …, m}} ≠ ∅.

f is a vector-valued objective function. The functions f_k : A → R with f_k(a) = z_k (k ∈ {1, …, q}, a ∈ A), for f(a) = (z_1, …, z_q), are called objective functions (or criteria). A is the set of (feasible) alternatives (or solutions) of the decision problem. The g_i : R^n → R are called restriction functions.

For the objective space R^q and the decision space R^n the following binary relations and properties are considered (x, y ∈ R^q):

x ≦ y  iff  x_k ≤ y_k for all k ∈ {1, …, q},
x ≤ y  iff  x ≦ y and x ≠ y.

(R^q, ≦) is a partial order. The set E(A, f) of (functional) efficient (or Pareto-optimal) solutions is defined as

E(A, f) := {a ∈ A : ¬∃ b ∈ A : f(b) ≤ f(a)}.

The concept of efficiency (see, e.g., Gal, 1986) is the most important solution concept in MOP and is usually regarded as the meaning of ‘min’ in a vector-valued context. Since the set of efficient solutions usually consists of many solutions in practical decision problems, additional information has to be ascertained to arrive at something like a ‘compromise solution’.
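For a finite set of alternatives, the efficiency definition above translates directly into a dominance check (a sketch assuming minimization; function names are ours):

```python
def dominates(fa, fb):
    # fa dominates fb (minimization): fa <= fb componentwise and fa != fb.
    return all(x <= y for x, y in zip(fa, fb)) and fa != fb

def efficient_set(alternatives, f):
    # E(A, f): the alternatives not dominated by any other alternative.
    values = [f(a) for a in alternatives]
    return [a for a, fa in zip(alternatives, values)
            if not any(dominates(fb, fa) for fb in values)]
```

For example, among the objective vectors (1, 2), (2, 1), (2, 2), and (3, 3), only the first two are efficient.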

In our article, we do not discuss approaches that describe how such ‘compromise solutions’ can be obtained. Many methods for this type of problem, based, for instance, on utility theory, reference point approaches, or pairwise comparisons of alternatives, have been proposed in the literature (see, e.g., Vincke, 1992). Instead, we discuss how an approximation of the efficient set (especially for ‘hard’ problems like MOP problems with disconnected or local efficient regions), represented by a finite number of alternatives, can be generated.

4. A multiobjective evolutionary algorithm (MOEA)

Compared with scalar optimization, the multiobjective nature of the MOP problem causes difficulties for the selection step of an evolutionary algorithm. Other steps, like mutation or recombination of alternative values, need not be affected by the multiobjective nature of the alternative evaluations.

In the scalar case, alternatives are judged by a single (real-valued) objective function which allows one to define a linear order on the objective evaluations. With this, alternatives can be (completely) rank-ordered, a best alternative can be defined, and so on. (Canonical) genetic algorithms then define probabilities of an alternative's reproduction based on its relative fitness. In evolution strategies, usually an elitist selection strategy is applied which chooses the µ best of the λ children (comma strategy) or of the λ children and µ parents together (plus strategy) as parents for the next generation.

Considering a multiobjective evaluation of alternatives, these and similar concepts cannot be applied since only a partial order (in objective space) or Pareto order (in decision space) is naturally defined on the objective evaluations. This implies that there may be alternatives which are not comparable (better or worse) with respect to fitness. There is no obvious way to define a ranking order, probabilities for reproduction, etc., the roles which fitness serves in scalar optimization. In the literature (see, e.g., Fonseca and Fleming, 1993, 1995; Tamaki, Kita, and Kobayashi, 1996; Horn, 1997, for surveys), different approaches to overcome this problem have been proposed.

A quite simple approach is to define a scalarization function which maps the q objective functions to a single aggregated one such that a problem of type (SOP) can be analyzed. Approaches based on scalarization (see Jahn, 1984, for a discussion of theoretical aspects) are also often used in multicriteria decision making (MCDM), for instance in utility theory. Other MCDM scalarization concepts applicable in evolutionary algorithms are additive weighting and reference point approaches. Using such a scalarization function for the q objectives, the familiar selection processes for (scalar) evolutionary algorithms can be used.

Such scalarization approaches may involve some problems: for instance, they do not work well in approximating the efficient set, they possibly have difficulties generating all efficient solutions in the concave case (see Fonseca and Fleming, 1995), or they do not allow a ‘truly’ multiobjective analysis.

Some modifications of the scalarizing concept have been proposed to allow the generation of a diversified set of solutions approximating the efficient set, for instance the use of different scalar fitness functions, e.g. randomly choosing one of the several objective functions in each selection step (see, e.g., Kursawe, 1991, 1992). As another, more radical alternative for using EAs in MOP, approaches have been proposed which use the Pareto order of the alternatives only. Some of these approaches are based on pairwise comparisons, a kind of tournament selection; others consider the alternative set of the population as a whole. For instance, an alternative is judged by the number of other alternatives which dominate it (see below).

The MOEA is implemented within the Learning Object-Oriented Problem Solver (LOOPS) as a class popul which is formally a metamethod as discussed by Hanne (1997a, 1997b). An object of popul consists of data from the source object to be optimized, which

are, here in the case of MOP, search points in R^n. The adaptation to the objects (the specific data structures) to be optimized (and thus to specific problem types) is done in such a way that several procedures of the EA are implemented locally in the class of these objects. The most important of these procedures are those for mutation, those for recombination, and those for calculating the fitness. The EA then mainly consists of the framework which realizes the evolutionary loop and the selection process. The population of the EA is initialized with the data of one feasible alternative. Mutation serves to generate individuals with dispersed data during later generations.

An adaptation to the given MOP problem can additionally be performed using several parameters of the MOEA. These are the number of parents µ, the number of offspring λ, the (maximum) number of generations, the type of selection (deterministic elitist, stochastic non-elitist), the type of strategy (plus or comma strategy), the type of recombination (no recombination, pairwise recombination, recombination within the whole population), the degree of intermediary recombination, and a parameter for the method-local self-control of mutation rates. (The 1/5 rule is not applicable in the multicriteria case.)

The intermediary recombination ratio allows a mixture between intermediary and non-intermediary recombination. The parameter for method-local recombination allows a (self-)mutation of mutation rates. This process, and also the utilization (interpretation) of the recombination type parameter, is done locally depending on the method. Depending on the choice of parameters, the evolutionary algorithm can be forced to work mainly like one of the traditional types (ES, GA). To make it work like a (traditional) GA, the numbers of parents and offspring are equated, and a comma strategy, stochastic non-elitist selection, pairwise recombination, and a degree of intermediary recombination of 0 are chosen.

Our implementation of evolutionary algorithms and MCDM methods allows the unification of different approaches to realize the multiobjective selection step in an evolutionary algorithm. This is done as follows: different MCDM methods are implemented as object-oriented classes. This means that these methods are available as objects such that there exists a well-defined, common message interface for their application. Any of these MCDM methods can then be used for scalar fitness evaluation within the selection step. The link between the MOEA and a particular MCDM method can be handled in a flexible way.

The MCDM methods considered are not restricted to scalarizing approaches. For instance, judging an alternative according to the number of alternatives by which it is dominated (called the dominance grade) is also defined as an MCDM method. The dominance grade works as an auxiliary fitness function. As discussed above, such a selection scheme based on the Pareto order only seems to overcome some of the problems caused by other approaches. A possible disadvantage is that the discrimination power of this criterion is not very high. Especially in high-dimensional problems or with a developed population, many (more than µ) of the considered alternatives are efficient. All these alternatives have the same dominance grade such that a selection among them becomes arbitrary. Therefore, additional selection rules should be applied as analyzed by Hanne (1999).

A similar concept, implemented within LOOPS, is the dominance level, which is based on an iterative definition of different layers of efficiency. The first layer includes the efficient alternatives of the population. The second level is defined by those alternatives which are efficient when the first layer is removed, etc. The number of the layer of an alternative defines an auxiliary fitness function to be minimized.
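Both auxiliary fitness functions can be sketched for a finite population of objective vectors (minimization assumed; this is an illustrative sketch, not the LOOPS code):

```python
def dominance_grades(values):
    # Dominance grade: number of alternatives dominating each one
    # (auxiliary fitness to be minimized).
    def dom(a, b):  # a dominates b (minimization)
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [sum(dom(b, a) for b in values) for a in values]

def dominance_levels(values):
    # Dominance level: peel off successive efficient layers. Layer 1
    # holds the efficient points, layer 2 those efficient once layer 1
    # is removed, and so on.
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    levels, remaining, level = {}, list(enumerate(values)), 1
    while remaining:
        layer = [i for i, v in remaining
                 if not any(dom(w, v) for _, w in remaining)]
        for i in layer:
            levels[i] = level
        remaining = [(i, v) for i, v in remaining if i not in levels]
        level += 1
    return [levels[i] for i in range(len(values))]
```

On the vectors (1, 2), (2, 1), (2, 2), (3, 3), the grades are 0, 0, 2, 3 and the levels 1, 1, 2, 3, illustrating that the level criterion discriminates where all efficient points share grade 0.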

Similarly, other methods which have so far not been considered for supporting the multiobjective selection step in an evolutionary algorithm, e.g. outranking approaches or neural networks, can be used easily since they are available as object-oriented implementations within the LOOPS software (see Hanne, 1993, 1997a, 1997b). However, we do not want to analyze in detail all these possibilities for designing a MOEA. Instead, we restrict our test applications to a MOEA which utilizes a Pareto selection concept only.

5. Some test results

To demonstrate its workings and results, we will now apply the multiobjective evolutionary algorithm to various test problems. To allow an easy graphical representation of the results, we consider bicriteria problems only.

For all test applications the MOEA utilizes the same setting of parameters: the population consists of µ = 20 parents and λ = 40 offspring which are considered for selection in the plus strategy ((20 + 40)-ES) framework. Elitist (deterministic) selection is applied, with pairwise non-intermediary recombination (with probability 1.0) and no self-mutation of mutation rates. As the multicriteria selection scheme, a Pareto selection as implemented in the dominance grade criterion was generally chosen. For test purposes, the maximum number of generations was set to 500.

5.1. Application to MOLP

Multiple objective linear programming (MOLP) is probably the best-analyzed special case of MOP. There exist many theoretical works (see, e.g., Gal, 1995; Steuer, 1986), methods for determining the whole efficient set in MOLP (see, e.g., Gal, 1977, 1995), and well-established methods for assessing a ‘compromise’ solution (see, e.g., Steuer, 1986). Evolutionary algorithms belong to the class of stochastic interior point methods in MOLP. Although interior point methods in general seem to be quite attractive for MOLP, there is no indication that evolutionary algorithms which do not exploit the linear structure of the problem should give superior results (considering computing time). Here, we apply the MOEA to the very simple MOLP problem:

(P1)  min f(x),  f : R^2 → R^2 with f_i : x ↦ x_i for i ∈ {1, 2}

and

x ∈ X = {x ∈ R^2 : x_1 ≥ 0, x_2 ≥ 0, x_1 + x_2 ≥ 5}.

Figure 1 shows the results of the application of a MOEA to problem (P1). The population of calculated points for several generations (t = 10, 50, 100, 500) is represented in two-dimensional decision space. Obviously, after a quite small number of generations the solutions of (P1) are very close to the efficient frontier {(x, 5 − x) : x ∈ [0, 5]}, which is also shown in figure 1.
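For concreteness, a minimal dominance-grade MOEA for (P1) can be sketched as follows. The population sizes, step size, and generation count are scaled down from the paper's settings, and the code is an illustrative reconstruction (no recombination, fixed step size), not the LOOPS implementation:

```python
import random

def p1_objectives(x):            # (P1): minimize f(x) = (x1, x2)
    return x

def p1_feasible(x):              # X = {x : x1 >= 0, x2 >= 0, x1 + x2 >= 5}
    return x[0] >= 0 and x[1] >= 0 and x[0] + x[1] >= 5

def dominance_grade(fa, values):
    # Number of objective vectors dominating fa (to be minimized).
    return sum(all(u <= v for u, v in zip(fb, fa)) and fb != fa
               for fb in values)

def moea_p1(mu=10, lam=20, sigma=0.3, generations=200, seed=1):
    random.seed(seed)
    pop = [(10.0, 10.0)] * mu    # initialized from one feasible alternative
    for _ in range(generations):
        offspring = []
        while len(offspring) < lam:          # 'feasible mutations' only
            parent = random.choice(pop)
            child = tuple(a + random.gauss(0.0, sigma) for a in parent)
            if p1_feasible(child):
                offspring.append(child)
        pool = pop + offspring               # plus strategy: parents survive
        values = [p1_objectives(x) for x in pool]
        pop = sorted(pool, key=lambda x: dominance_grade(p1_objectives(x),
                                                         values))[:mu]
    return pop
```

After a few hundred generations, the surviving points cluster near the efficient frontier x_1 + x_2 = 5, mirroring the behavior shown in figure 1.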

Figure 1. Results of a MOEA for (P1).

5.2. Application to convex MOP

Next, we consider a (nonlinear) convex MOP. For test purposes, we analyze a quite simple problem with quadratic objective functions defined as follows:

(P2)  min f(x),  f : R^2 → R^2, f_i : x ↦ x_i^2 for i ∈ {1, 2}

with

x ∈ X = {x ∈ R^2 : x_1 ≥ 0, x_2 ≥ 0, x_1 + x_2 ≥ 5}.

Figure 2 shows the solutions of (P2) generated by a MOEA for several generations (as above). Similarly to the results for (P1), good approximate solutions representing the efficient set are found after a small number of generations (cf. results for generation 50 or 100).

5.3. Application to nonconvex MOP

Let us now consider MOP problems with a nonconvex set of feasible solutions in objective space, i.e. problems with concave regions in the set of efficient solutions. A simple problem of this type can be defined similarly to problem (P2) but with square root objective functions:

(P3)  min f(x),  f : R^2 → R^2, f_i : x ↦ √x_i for i ∈ {1, 2}

Figure 2. Results of a MOEA for (P2).

Figure 3. Results of a MOEA for (P3).

with

x ∈ X = {x ∈ R^2 : x_1 ≥ 0, x_2 ≥ 0, x_1 + x_2 ≥ 5}.

Problems like this bring up difficulties for many MCDM approaches, especially for

weighting approaches and also for reference point approaches (except those based on Tchebyshev-like distance measures) as well as utility-based approaches, because usually

not all efficient solutions can be calculated by such an approach with appropriate parameters (see, e.g., Nakayama, 1997). Figure 3 shows the efficient frontier for (P3) and results of the MOEA for generations 10, 50, 100, and 500. Contrary to approaches using scalarization, the nonconvex region even shows a concentration of solutions for the MOEA.

5.4. Application to MOP with disconnected efficient regions

Let us now consider a nonconvex MOP where the efficient set is not connected. A closed point set is called connected if it cannot be represented as the union of two nonempty, disjoint, closed sets. An arbitrary point set M is called connected if for any two points in M there exists a closed and connected subset of M which includes these points.

The efficient set E(A, f) may be disconnected because the feasible set of alternatives X or the objective functions are not convex (see White, 1982, p. 101f, for the case of maximization). If subsets of efficient alternatives (in decision space) are not connected, local search becomes difficult. Starting within an obtained efficient region, ‘gaps’ of nonefficient alternatives have to be crossed to reach another efficient region.

Let us consider the following test problem, where the multiple efficient regions are a consequence of a restriction function with multiple local optima:

(P4)  min f(x),  f : R^2 → R^2 with f_i : x ↦ x_i for i ∈ {1, 2}

and

x ∈ X = {x ∈ R^2 : x_1 ≥ 0, x_2 ≥ 0, x_2 − 5 + 0.5 x_1 sin(4 x_1) ≥ 0}.

Figure 4 shows the results of the application of a MOEA to problem (P4). The populations of generations t = 10, 50, 100, 500 are represented in two-dimensional decision space. Also

Figure 4. Results of a MOEA for (P4).

the points for which the third restriction holds with equality are shown. These elements define a superset of the efficient frontier. The efficient set consists of seven separated regions. The MOEA generates points which represent all of these regions, albeit that in two of them no points of generation t = 500 can be found. This ‘extinction’ of efficient regions can be explained by the random process of selection when more than µ points are efficient, especially when the efficient region is small. On the other hand, it is difficult for points from other regions to invade an empty one if the gaps between regions are large (compared with the mutation rates/step sizes). A variation of step sizes and a larger population can, however, reduce the danger of efficient regions becoming empty.

5.5. Application to MOP with multiple locally efficient regions

Let us now consider an even more difficult problem where the objective functions are not monotonic (e.g., because they are not continuous). Then there may exist locally efficient solutions which are not globally efficient. The property of local efficiency can be regarded as a generalization of the property of local optimality. A locally efficient alternative can be defined similarly to a locally optimal alternative: x is called locally efficient if there exists a neighborhood U_ε(x) := {y ∈ X : |x − y| < ε} (for a given norm |.|) such that x is efficient in U_ε(x). Like local optima in scalar optimization, locally efficient solutions can lead to algorithmic problems, i.e. hill climbing algorithms can get stuck in such non-global ‘optima’.

Let us consider a (nonconvex) MOP with multiple locally (but not globally) efficient regions:

(P5)  min f(x),  f : R^2 → R^2 with

f_1 : x ↦ int(x_1) + 0.5 + (x_1 − int(x_1)) sin(2π(x_2 − int(x_2))),
f_2 : x ↦ int(x_2) + 0.5 + (x_1 − int(x_1)) cos(2π(x_2 − int(x_2))),

and

x ∈ X = {x ∈ R^2 : x_1 ≥ 0, x_2 ≥ 0, x_1 + x_2 ≥ 5}.

Figure 5 shows the objective functions f_1 and f_2, each with one of its arguments x_1 or x_2 fixed.

This problem is based on a partition of the decision space into squares [i, i + 1) × [j, j + 1), i, j ∈ N, which are mapped using a ‘rectangle into polar’ type of co-ordinate transformation. Because of the int operator (which calculates the integer part of its argument), the objective functions are not continuous. They are also not monotonic, such that locally efficient regions are created which are not (globally) efficient:

The squares are mapped onto circles with radius 1. The efficient set (in objective space) of such a (closed) circle, the lower left part of its border, is the image of {1} × [0.25, 0.5] ⊂ [0, 1] × [0, 1]. For each [i, i + 1) × [j, j + 1), i, j ∈ N, in our problem the set {i + 1} × [j + 0.25, j + 0.5] is the locally efficient set of its closure because all neighboring vectors from the square [i + 1, i + 2) × [j, j + 1) are dominated by them.
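The ‘rectangle into polar’ transformation can be sketched and checked numerically. Python's int() truncates toward zero, which matches the integer part operator on the feasible region x ≥ 0 (the sketch and test points are ours):

```python
import math

def p5_objectives(x1, x2):
    # (P5): map each square [i, i+1) x [j, j+1) onto the circle of
    # radius 1 around its center (i + 0.5, j + 0.5); the fractional
    # part of x1 acts as radius, that of x2 as angle.
    i, j = int(x1), int(x2)
    r = x1 - i
    t = 2.0 * math.pi * (x2 - j)
    return (i + 0.5 + r * math.sin(t), j + 0.5 + r * math.cos(t))
```

For example, x = (1.25, 2.5) lies in the square [1, 2) × [2, 3) and is mapped to distance 0.25 (its fractional part of x_1) from the square's center (1.5, 2.5).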

Figure 5. The objective functions for fixed x1 or x2.

Figure 6. Results of a MOEA for (P5).

Figure 6 shows the solutions of (P5) generated by a MOEA for several generations (as above). The circle borders of the squares which build up a superset of the efficient frontier are also shown in figure 6. Obviously, the MOEA can cope quite well with the problem since in all efficient regions we can find generated points. Starting with an alternative (10, 10)^T for the initialization of the first generation, several locally (but not globally) efficient regions have been crossed.

6. Conclusions

In this paper we have discussed the application of an evolutionary algorithm to multiobjective programming problems. The proposed multiobjective evolutionary algorithm is a generalization of a scalar EA implemented within the framework of the Learning Object-Oriented Problem Solver. We have discussed several application examples for which the MOEA is capable of generating a population which represents an approximation of the efficient set quite well.

The approach is especially interesting for global or nonconvex nonlinear multiobjective problems which cannot be analyzed with many algorithms proposed for multiobjective programming. The examples have shown that evolutionary algorithms have interesting properties which can cope with concave regions or disconnected efficient sets. Here, however, more research should be done. Some theoretical results on convergence and reachability of solutions are given by Hanne (1999). For further test results, especially the comparison of different algorithms, it would be worthwhile to set up a database of such problems which would allow comparing different procedures.

Moreover, for practical usage of this procedure it would be necessary to integrate it with common MCDM software for calculating 'compromise solutions'. Here, it might also be useful to integrate additional tools like filters (see Steuer and Harris, 1980) which could improve the representation of the efficient set by the generated alternatives.
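The filtering idea mentioned above can be illustrated by a minimal sketch. This is not the Steuer and Harris (1980) implementation; it is a simple greedy distance filter in the same spirit, with the name `filter_points` and the threshold parameter `min_dist` chosen here for illustration.

```python
import math

def filter_points(points, min_dist):
    """Greedy distance filter over a list of generated alternatives.

    Keeps a subset of points that are pairwise at least min_dist apart,
    so that the retained points represent the (approximated) efficient
    set more evenly instead of clustering in a few regions.
    """
    kept = []
    for p in points:
        # retain p only if it is sufficiently far from all kept points
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept
```

Applied to the final population of the MOEA, such a filter would discard near-duplicate alternatives before presenting compromise candidates to a decision maker.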

References

Bäck, T., F. Hoffmeister, and H.-P. Schwefel. (1991). "A Survey of Evolution Strategies." In R.K. Belew and L.B. Booker (eds.), Genetic Algorithms, Proceedings of the Fourth International Conference. San Mateo: Morgan Kaufmann, pp. 2–9.

Bäck, T. and H.-P. Schwefel. (1992). "Evolutionary Algorithms: Some Very Old Strategies for Optimization and Adaptation." In D. Perret-Gallix (ed.), New Computing Techniques in Physics Research II. Singapore: World Scientific, pp. 247–254.

Bäck, T. and H.-P. Schwefel. (1993). "An Overview of Evolutionary Algorithms for Parameter Optimization." Evolutionary Computation 1(1), 1–23.

Fonseca, C.M. and P.J. Fleming. (1993). "Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization." In S. Forrest (ed.), Genetic Algorithms: Proceedings of the Fifth International Conference. San Mateo: Morgan Kaufmann, pp. 416–423.

Fonseca, C.M. and P.J. Fleming. (1995). "An Overview of Evolutionary Algorithms in Multiobjective Optimization." Evolutionary Computation 3(1), 1–16.

Gal, T. (1977). "A General Method for Determining the Set of All Efficient Solutions to a Linear Vectormaximum Problem." European Journal of Operational Research 1, 307–322.

Gal, T. (1986). "On Efficient Sets in Vector Maximum Problems—A Brief Survey." European Journal of Operational Research 24, 253–264.

Gal, T. (1995). Postoptimal Analyses, Parametric Programming, and Related Topics. Degeneracy, Multicriteria Decision Making, Redundancy. 2nd edn. Berlin: De Gruyter.

Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading: Addison-Wesley.

Hanne, T. (1993). "An Object-Oriented Decision Support System for MCDM." In Operations Research Proceedings DGOR/NSOR 22nd Annual Meeting. Berlin: Springer, pp. 449–455.

Hanne, T. (1997a). "Decision Support for MCDM that is Neural Network-Based and can Learn." In J. Climaco (ed.), Multicriteria Analysis, Proceedings of the XIth International Conference on MCDM, Coimbra, Aug. 1–6, 1994. Berlin: Springer, pp. 401–410.

Hanne, T. (1997b). "Concepts of a Learning Object-Oriented Problem Solver (LOOPS)." In G. Fandel and T. Gal, in collaboration with T. Hanne (eds.), Multiple Criteria Decision Making, Proceedings of the Twelfth International Conference, Hagen 1995. Berlin: Springer, pp. 330–339.

Hanne, T. (1999). "On the Convergence of Multiobjective Evolutionary Algorithms." European Journal of Operational Research 117(3), 553–564.

Hoffmeister, F. and T. Bäck. (1992). "Genetic Algorithms and Evolution Strategies: Similarities and Differences." Technical Report No. SYS-1/92, University of Dortmund, Department of Computer Science.

Holland, J.H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.

Horn, J. (1997). "Multicriterion Decision Making." In T. Bäck, D.B. Fogel, and Z. Michalewicz (eds.), Handbook of Evolutionary Computation. New York and Bristol: IOP Publishing and Oxford University Press, pp. F1.9:1–F1.9:15.

Jahn, J. (1984). "Scalarization in Vector Optimization." Mathematical Programming 29, 203–218.

Koza, J.R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, Massachusetts: MIT Press.

Kursawe, F. (1991). "A Variant of Evolution Strategies for Vector Optimization." In H.-P. Schwefel and R. Männer (eds.), Parallel Problem Solving from Nature, 1st Workshop, PPSN 1, Oct. 1–3, 1990. Berlin: Springer, pp. 193–197.

Kursawe, F. (1992). "Evolution Strategies for Vector Optimization." In Proceedings of the Tenth International Conference on Multiple Criteria Decision Making, Taipei, Vol. III, pp. 187–193.

Michalewicz, Z. (1994). Genetic Algorithms + Data Structures = Evolution Programs. 2nd edn. Berlin: Springer.

Nakayama, H. (1997). "Some Remarks on Trade-Off Analysis in Multi-Objective Programming." In J. Climaco (ed.), Multicriteria Analysis, Proceedings of the XIth International Conference on MCDM, Aug. 1–6, 1994, Coimbra, Portugal. Berlin: Springer, pp. 179–190.

Rechenberg, I. (1973). Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Stuttgart: Frommann-Holzboog.

Sawaragi, Y., H. Nakayama, and T. Tanino. (1985). Theory of Multiobjective Optimization. Orlando: Academic Press.

Schwefel, H.-P. (1977). Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie. Basel: Birkhäuser.

Schwefel, H.-P. (1981). Numerical Optimization of Computer Models. Chichester: Wiley.

Steuer, R.E. (1986). Multiple Criteria Optimization: Theory, Computation, and Application. New York: John Wiley & Sons.

Steuer, R.E. and F.W. Harris. (1980). "Intra-Set Point Generation and Filtering in Decision and Criterion Space." Computers and Operations Research 7, 41–53.

Tamaki, H., H. Kita, and S. Kobayashi. (1996). "Multi-Objective Optimization by Genetic Algorithms: A Review." In Proceedings of the 3rd IEEE International Conference on Evolutionary Computation. Piscataway (NJ): IEEE Press, pp. 517–522.

Vincke, P. (1992). Multicriteria Decision-Aid. Chichester: Wiley.

White, D.J. (1982). Optimality and Efficiency. Chichester: Wiley.

Zeleny, M. (1982). Multiple Criteria Decision Making. New York: McGraw-Hill.