

Multi-objective Optimization

Carlos A. Coello Coello

Contents

Introduction
Basic Concepts
  Pareto Optimality
Multi-objective Evolutionary Algorithms
  Multi-objective Evolutionary Algorithms
  Recent Algorithmic Trends
Use of Other Metaheuristics
  Artificial Immune Systems (AIS)
  Particle Swarm Optimization (PSO)
  Ant Colony Optimization (ACO)
  Other Metaheuristics
Some Applications
Some Current Challenges
Conclusions
Cross-References
References

Abstract

This chapter provides a short overview of multi-objective optimization using metaheuristics. The chapter includes a description of some of the main metaheuristics that have been used for multi-objective optimization. Although special emphasis is placed on evolutionary algorithms, other metaheuristics, such as particle swarm optimization, artificial immune systems, and ant colony optimization, are also briefly discussed. Other topics such as applications and recent algorithmic trends are also included. Finally, some of the main research trends that are worth exploring in this area are briefly discussed.

Keywords
Multi-objective optimization · metaheuristics · evolutionary algorithms · optimization

C. A. Coello Coello
Departamento de Computación, CINVESTAV-IPN, México D.F., México
e-mail: [email protected]

© Springer International Publishing AG 2018
R. Martí et al. (eds.), Handbook of Heuristics, https://doi.org/10.1007/978-3-319-07153-4_17-1

Introduction

Metaheuristics have been widely used for solving different types of optimization problems (see, e.g., [88, 109, 198]).

One particular class of optimization problems involves two or more (often conflicting) objectives which we aim to optimize at the same time. In fact, such problems, which are called “multi-objective,” are quite common in real-world applications, and their solution has triggered a significant amount of work within Operations Research [135].

During the last 40 years, a large number of mathematical programming techniques have been developed to solve certain specific classes of multi-objective optimization problems. However, such techniques have a relatively limited applicability (e.g., some of them require the first or even the second derivative of the objective functions and the constraints; others can only deal with convex Pareto fronts, etc.). Such limitations have motivated the development of alternative optimization methods, among which metaheuristics have become a very popular choice [43].

Among the many metaheuristics currently available, evolutionary algorithms have been, without doubt, the most popular choice for dealing with any sort of optimization problem [21, 85], and multi-objective optimization is, by no means, an exception. Thus, in this chapter, we will focus our discussion mainly on the use of evolutionary algorithms for solving multi-objective optimization problems.

The use of evolutionary algorithms for solving multi-objective optimization problems was originally hinted at in 1967 [173], but the first actual implementation of what is now called a “multi-objective evolutionary algorithm” (MOEA) was not produced until the mid-1980s [178, 179]. Since then, this area, which is now called “evolutionary multi-objective optimization,” or EMO, has experienced very important growth, mainly in the last 20 years [41, 42, 52, 201]. The author maintains the EMOO repository, which, as of December 8, 2017, contains over 11,000 bibliographic entries, as well as public-domain implementations of some of the most popular MOEAs. The EMOO repository is located at https://emoo.cs.cinvestav.mx/.

The remainder of this chapter is organized as follows: In section “Basic Concepts”, we provide some basic concepts related to multi-objective optimization, which are required to make this chapter self-contained. The use of evolutionary algorithms in multi-objective optimization is motivated in section “Multi-objective Evolutionary Algorithms”, which also describes some of the most representative MOEAs as well as recent algorithmic trends. A short discussion of other bio-inspired metaheuristics that have also been used for multi-objective optimization is provided in section “Use of Other Metaheuristics”. A set of sample applications of MOEAs is provided in section “Some Applications”. Some of the main topics of research in the EMO field that currently attract a lot of attention are briefly discussed in section “Some Current Challenges”. Finally, some conclusions are provided in section “Conclusions”.

Basic Concepts

In this chapter, we focus on the solution of multi-objective optimization problems (MOPs) of the form:

minimize $\left[ f_1(\vec{x}), f_2(\vec{x}), \ldots, f_k(\vec{x}) \right]$ (1)

subject to the $m$ inequality constraints

$g_i(\vec{x}) \leq 0, \qquad i = 1, 2, \ldots, m$ (2)

and the $p$ equality constraints

$h_i(\vec{x}) = 0, \qquad i = 1, 2, \ldots, p$ (3)

where $k$ is the number of objective functions $f_i : \mathbb{R}^n \rightarrow \mathbb{R}$. We call $\vec{x} = [x_1, x_2, \ldots, x_n]^T$ the vector of decision variables. We wish to determine, from among the set $\mathcal{F}$ of all vectors which satisfy (2) and (3), the particular set of values $x_1^*, x_2^*, \ldots, x_n^*$ which yield the optimum values of all the objective functions.
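To make the formulation concrete, the following minimal sketch evaluates a classic bi-objective test problem (Schaffer's problem, with $f_1(x) = x^2$ and $f_2(x) = (x-2)^2$); the problem choice and the code are illustrative and not part of the formulation above:

```python
# Schaffer's classic bi-objective problem, used here purely as an
# illustration: minimize f1(x) = x^2 and f2(x) = (x - 2)^2 simultaneously
# for a single decision variable x.
def evaluate(x):
    """Return the objective vector (f1, f2) for decision variable x."""
    return (x ** 2, (x - 2.0) ** 2)

# Any x in [0, 2] is Pareto optimal: improving one objective worsens the other.
print(evaluate(0.0))  # (0.0, 4.0) -> best possible f1
print(evaluate(2.0))  # (4.0, 0.0) -> best possible f2
print(evaluate(1.0))  # (1.0, 1.0) -> a trade-off between the two
```

Note that no single $x$ minimizes both objectives at once, which is exactly the situation that motivates the notion of Pareto optimality discussed next.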

Pareto Optimality

It is rarely the case that there is a single point that simultaneously optimizes all the objective functions. In fact, this situation only arises when there is no conflict among the objectives, which would make unnecessary the development of special solution methods, since this single solution could be reached after the sequential optimization of all the objectives, considered separately. Therefore, we normally look for “trade-offs,” rather than single solutions, when dealing with multi-objective optimization problems. The notion of “optimality” normally adopted in this case is the one originally proposed by Francis Ysidro Edgeworth [63] and later generalized by the French economist Vilfredo Pareto [152]. Although some authors call this notion Edgeworth-Pareto optimality, we will use the most commonly adopted term: Pareto optimality.


We say that a vector of decision variables $\vec{x}^* \in \mathcal{F}$ (i.e., a feasible solution) is Pareto optimal if there does not exist another $\vec{x} \in \mathcal{F}$ such that $f_i(\vec{x}) \leq f_i(\vec{x}^*)$ for all $i = 1, \ldots, k$ and $f_j(\vec{x}) < f_j(\vec{x}^*)$ for at least one $j$ (assuming that all the objectives are being minimized).

In words, this definition says that $\vec{x}^*$ is a Pareto optimal solution if there exists no feasible vector of decision variables $\vec{x} \in \mathcal{F}$ which would decrease some criterion without causing a simultaneous increase in at least one other criterion. Assuming the inherent conflict normally present among (at least some of) the objectives, the use of this concept normally produces several solutions. Such solutions constitute the so-called Pareto optimal set. The vectors $\vec{x}^*$ corresponding to the solutions included in the Pareto optimal set are called nondominated. The image of the Pareto optimal set under the objective functions (i.e., the objective function values corresponding to the decision variables contained in the Pareto optimal set) is called the Pareto front.
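A minimal sketch of these definitions in code, assuming minimization and objective vectors represented as tuples, might look as follows:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other point in the set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(nondominated(pts))  # [(1, 5), (2, 3), (4, 1)]
```

In this small example, (3, 4) is dominated by (2, 3) and (5, 5) is dominated by (1, 5); the three surviving points form the nondominated front.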

Multi-objective Evolutionary Algorithms

The core ideas related to the development of search techniques that simulate the mechanism of natural selection (Darwin's survival of the fittest principle) can be traced back to the early 1930s [75]. However, the three main techniques based on this notion were developed during the 1960s: genetic algorithms [101], evolution strategies [184], and evolutionary programming [74]. These approaches, which are now generically called “evolutionary algorithms,” have been found to be very effective for solving single-objective optimization problems [76, 89, 185].

The basic operation of an evolutionary algorithm (EA) is described next. First, a set of potential solutions (called “population”) to the problem being solved is randomly generated. Each solution in the population (called “individual”) encodes all the decision variables of the problem. The user needs to define a measure of performance for each of the solutions. Such a measure of performance is called “fitness function” and allows us to know how good a solution is with respect to the others. Such a fitness function is normally a variation of the objective function of the problem that we wish to solve. Then, a selection mechanism must be applied in order to decide which individuals will “mate.” This selection process is generally stochastic and is normally based on the fitness contribution of each individual (i.e., the fittest individuals have a higher probability of being selected). Upon mating, a set of “offspring” (or children) is generated. Such offspring are “mutated” (this operator produces a small random change, with a low probability, in the contents of an individual) and constitute the new population to be evaluated at the next iteration (each iteration is called a “generation”). This process is repeated until a stopping condition is reached (normally, a maximum number of generations defined by the user) [64].
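The loop just described can be sketched as follows; the real-valued encoding, the variation operators, and all parameter values here are illustrative assumptions rather than any specific published algorithm:

```python
import random

# A minimal single-objective EA sketch: random initialization, stochastic
# (tournament) selection, crossover, mutation, and a generational loop.
def evolve(fitness, n_vars=10, pop_size=30, generations=50, p_mut=0.1):
    pop = [[random.uniform(-5, 5) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            # Binary tournament selection: the fitter of two random parents.
            a, b = random.sample(pop, 2)
            p1 = a if fitness(a) < fitness(b) else b
            a, b = random.sample(pop, 2)
            p2 = a if fitness(a) < fitness(b) else b
            # Uniform crossover followed by small random mutations.
            child = [x if random.random() < 0.5 else y for x, y in zip(p1, p2)]
            child = [x + random.gauss(0, 0.1) if random.random() < p_mut else x
                     for x in child]
            offspring.append(child)
        pop = offspring
    return min(pop, key=fitness)  # best individual of the final generation

best = evolve(lambda x: sum(v * v for v in x))  # minimize the sphere function
print(len(best))  # 10
```

A production EA would typically add elitism and a more careful stopping condition, but the sketch captures the generate-select-vary cycle described above.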


The main motivation for using EAs to solve multi-objective optimization problems relies on their population-based nature, which allows them to generate (if properly manipulated) several elements of the Pareto optimal set in a single run. In contrast, mathematical programming techniques normally generate a single element of the Pareto optimal set per run. Additionally, the so-called multi-objective evolutionary algorithms (MOEAs) are less susceptible to the shape and continuity of the Pareto front and require less specific domain information to operate [41].

MOEAs extend a traditional (single-objective) EA in two main aspects:

• The selection mechanism: In this case, the aim is to select nondominated solutions and to consider all the nondominated solutions in a population to be equally good (unless there is some specific preference from the user, all the elements of the Pareto optimal set are equally good).

• A diversity maintenance mechanism: Because of stochastic noise, EAs tend to converge to a single solution if run for a sufficiently long time [89]. In order to avoid this, it is necessary to block the selection mechanism of a MOEA, favoring the diversity of solutions, so that several elements of the Pareto optimal set can be generated in a single run.

Regarding selection, early MOEAs relied on the use of aggregating functions (mainly linear) [93] and relatively simple population-based approaches [179]. However, such approaches have evident drawbacks (e.g., the use of linear aggregating functions does not allow the generation of non-convex portions of the Pareto front, regardless of the combination of weights that is adopted [49]). Toward the mid-1990s, MOEAs started to adopt variations of the so-called Pareto ranking selection mechanism. This approach was originally proposed by David E. Goldberg in his seminal book on genetic algorithms [89], and it consists of sorting the population of an EA based on Pareto optimality, such that all nondominated individuals are assigned the same rank (or importance). The aim is that all nondominated individuals get the same probability of being selected and that such probability is higher than the one corresponding to individuals which are dominated. Although conceptually simple, this sort of selection mechanism allows for a wide variety of possible implementations [41, 52].

Regarding diversity maintenance, a wide variety of methods have been proposed in the specialized literature. Such approaches include fitness sharing and niching [53, 91], clustering [203, 230], geographically based schemes [120], the use of entropy [46, 117], and parallel coordinates [100], among others. In all cases, the core idea behind diversity maintenance mechanisms is to penalize solutions that are too close to each other in some space (i.e., decision variable space, objective function space, or even both). Most MOEAs penalize solutions that are too close to each other in objective function space, because the normal aim is to have solutions that are well distributed along the Pareto front.
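For instance, classic fitness sharing can be sketched as follows; this is a simplified illustration assuming fitness maximization, with distances computed in objective space and arbitrary parameter settings:

```python
def shared_fitness(raw_fitness, positions, sigma_share=1.0, alpha=1.0):
    """Classic fitness sharing: each individual's raw fitness is divided
    by its niche count, penalizing solutions that crowd the same region
    (assumes maximization, so a larger niche count lowers fitness)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    def sh(d):
        # Sharing function: 1 at distance 0, decaying to 0 at sigma_share.
        return max(0.0, 1.0 - (d / sigma_share) ** alpha)
    out = []
    for i, pos in enumerate(positions):
        niche_count = sum(sh(dist(pos, q)) for q in positions)  # includes itself
        out.append(raw_fitness[i] / niche_count)
    return out
```

With equal raw fitness, an isolated solution keeps its full fitness (niche count of 1), while two solutions closer than `sigma_share` to each other each receive a reduced shared fitness.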

Additionally, some researchers have proposed the use of mating restriction schemes (which impose rules on the individuals that can be recombined) [188, 230]. Furthermore, the use of relaxed forms of Pareto dominance has also become relatively popular in recent years, mainly as an archiving technique which encourages diversity, while allowing the archive to regulate convergence (the most popular of such mechanisms is, without doubt, ε-dominance [124], which has been adopted in some approaches such as ε-MOEA [57]).
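A rough sketch of the box-based bookkeeping behind ε-dominance archiving might look as follows; this is a strong simplification of the published procedure in [124], and the helper names are our own:

```python
import math

def eps_box(point, eps=0.1):
    """Map an objective vector to its epsilon-box (additive version)."""
    return tuple(math.floor(f / eps) for f in point)

def eps_archive_insert(archive, point, eps=0.1):
    """Simplified epsilon-dominance archiving: at most one solution per
    epsilon-box, and a new point is accepted only if its box is neither
    occupied nor dominated by an archived box.  Illustrative sketch only;
    a full implementation would also compare solutions within a box."""
    def box_dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    new_box = eps_box(point, eps)
    boxes = {eps_box(p, eps): p for p in archive}
    if any(box_dominates(b, new_box) or b == new_box for b in boxes):
        return archive  # rejected: keep the archive unchanged
    # Accept the newcomer and drop archived points whose boxes it dominates.
    kept = [p for b, p in boxes.items() if not box_dominates(new_box, b)]
    return kept + [point]
```

Because acceptance is decided at the granularity of ε-boxes, the archive size stays bounded while a spread of solutions is preserved, which is precisely the diversity/convergence trade-off mentioned above.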

A third component of modern MOEAs is elitism, which normally consists of using an external archive (also called “secondary population”) that may (or may not) interact in different ways with the main (or “primary”) population of the MOEA during selection. The main purpose of this archive is to store all the nondominated solutions generated throughout the search process, while removing those that become dominated later in the search (called local nondominated solutions). The approximation of the Pareto optimal set produced by a MOEA is thus the final contents of this archive. It is important to emphasize that the use of elitism is not only advisable (the lack of elitism could make us lose nondominated solutions generated during the search) but also required for theoretical reasons (elitism is needed in order to guarantee convergence of a MOEA to the Pareto optimal set, as proved in [174]).

It is worth noticing that, in practice, external archives are normally bounded to a certain maximum number of solutions. This was originally done in some MOEAs that used the external archive during the selection stage (see, e.g., [230]). In such a case, allowing the size of the archive to grow too much dilutes the selection pressure, which has a negative effect on the performance of the MOEA. However, most modern MOEAs bound the size of the external archive even if the archive is not used during the selection process, mainly for practical reasons (this makes it easier to compare results with respect to other MOEAs).

An important remark is that the use of a plus (+) selection is another possible elitist mechanism. Under this sort of selection scheme, the population of parents competes with the population of offspring (both populations are of the same size), and then only the best half is kept. This sort of selection scheme has been relatively popular in single-objective optimization and has also been adopted in some modern MOEAs (see, e.g., [56]), but it is less popular than the use of external archives.
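In the single-objective case, a plus selection of this kind reduces to keeping the best half of the combined population; a minimal sketch, assuming minimization:

```python
def plus_selection(parents, offspring, fitness):
    """(mu + lambda) selection: parents and offspring compete together
    and only the best half survives (minimization)."""
    combined = parents + offspring
    combined.sort(key=fitness)
    return combined[:len(parents)]

survivors = plus_selection([3.0, 9.0], [1.0, 7.0], fitness=lambda x: x)
print(survivors)  # [1.0, 3.0]
```

Because a good parent can outlive many generations of offspring, this scheme is elitist by construction, which is why it can substitute for an external archive.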

Multi-objective Evolutionary Algorithms

In spite of the very large number of publications related to MOEAs that can be found in the literature, there is only a handful of algorithms that are actually used by a significant number of researchers and/or practitioners around the world.

1. Strength Pareto Evolutionary Algorithm 2 (SPEA2): This is an updated version of the Strength Pareto Evolutionary Algorithm (SPEA) proposed in the late 1990s [230], whose main features are the following. It adopts an external archive (called the external nondominated set), which stores the nondominated solutions previously generated and participates in the selection process (together with the main population). For each individual in this archive, a strength value is computed. This strength value is proportional to the number of solutions that a certain individual dominates. In SPEA, the fitness of each member of the current population is computed according to the strengths of all external nondominated solutions that dominate it. If the size of the external nondominated set grows too much, the selection pressure is significantly reduced and the search slows down. In order to avoid this, SPEA adopts a clustering technique that prunes the contents of the external nondominated set so that its size remains below a certain (predefined) threshold. SPEA standardized the use of external archives as the elitist mechanism of a MOEA, although this sort of mechanism had been used before by other researchers (see, e.g., [107]). SPEA2 has three main differences with respect to the original SPEA [231]: (1) it incorporates a fine-grained fitness assignment strategy which takes into account, for each individual, both the number of individuals that dominate it and the number of individuals by which it is dominated, (2) it uses a nearest neighbor density estimation technique which guides the search more efficiently (i.e., a more efficient clustering algorithm is adopted), and (3) it uses an enhanced archive truncation method that guarantees the preservation of boundary solutions (this fixes a flaw in the original SPEA).

2. Pareto Archived Evolution Strategy (PAES): This is perhaps the simplest MOEA that can possibly be designed. It was proposed by Knowles and Corne [119], and it consists of a (1+1) evolution strategy (i.e., a single parent that generates a single offspring through the application of mutation to the parent) in combination with a historical archive that stores the nondominated solutions previously found. This archive is used as a reference set against which each mutated individual is compared. Such an (external) archive adopts a crowding procedure that divides objective function space in a recursive manner. Then, each solution is placed in a certain grid location based on the values of its objectives (which are used as its “coordinates” or “geographical location”). A map of such a grid is maintained, indicating the number of solutions that reside in each grid location. When a new nondominated solution is ready to be stored in the archive, but there is no room for it (the size of the external archive is bounded), a check is made on the grid location to which the solution would belong. If this grid location is less densely populated than the most densely populated grid location, then a solution (randomly chosen) from that heavily populated grid location is deleted to allow the storage of the newcomer. This aims to redistribute solutions, favoring the less densely populated regions of the Pareto front. Since the procedure is adaptive, no extra parameters are required (except for the number of divisions of the objective space).

3. Nondominated Sorting Genetic Algorithm II (NSGA-II): This is a heavily revised version of the Nondominated Sorting Genetic Algorithm (NSGA), which was originally proposed in the mid-1990s [193] as a straightforward implementation of the Pareto ranking algorithm described by Goldberg in his book [89]. NSGA was, however, slow and produced poorer results than other non-elitist MOEAs available in the mid-1990s, such as MOGA [77] and NPGA [103]. NSGA-II adopts a more efficient ranking procedure than its predecessor. Additionally, it estimates the density of solutions surrounding a particular solution in the population by computing the average distance of the two points on either side of this solution along each of the objectives of the problem. This value is the so-called crowding distance. During selection, NSGA-II uses a crowded-comparison operator which takes into consideration both the nondomination rank of an individual in the population and its crowding distance (i.e., nondominated solutions are preferred over dominated solutions, but between two solutions with the same nondomination rank, the one that resides in the less crowded region is preferred). Unlike most modern MOEAs, NSGA-II does not use an external archive. Instead, the elitist mechanism of NSGA-II consists of combining the best parents with the best offspring obtained (i.e., a (μ + λ)-selection, as used in evolution strategies). Due to these mechanisms, NSGA-II is much more efficient (computationally speaking) than its predecessor, and its performance is so good that it has become very popular, triggering a significant number of applications and becoming, for more than 10 years, a sort of benchmark against which new MOEAs had to be compared in order to merit publication. More recently, the Nondominated Sorting Genetic Algorithm III (NSGA-III) [54] was introduced. NSGA-III still adopts the same NSGA-II framework (i.e., it still performs a classification of the population into nondominated levels). However, its mechanism to maintain diversity is based on the use of a number of well-spread reference points which are adaptively updated. NSGA-III does not require any additional parameters (other than those associated with the genetic algorithm that is used as its search engine), and it is designed to deal with many-objective optimization problems (i.e., problems having four or more objectives).

4. Pareto Envelope-based Selection Algorithm (PESA): This algorithm was proposed by Corne et al. [44] and uses a small internal population and a larger external (or secondary) population. PESA adopts the same adaptive grid from PAES to maintain diversity. However, its selection mechanism is based on the crowding measure. This same crowding measure is used to decide which solutions to introduce into the external population (i.e., the archive of nondominated vectors found along the evolutionary process). Therefore, in PESA, the external memory plays a crucial role in the algorithm, since it determines not only the diversity scheme but also the selection performed by the method. There is also a revised version of this algorithm, called PESA-II [45], which is identical to PESA except for the fact that a region-based selection is used in this case. In region-based selection, the unit of selection is a hyperbox rather than an individual. The procedure consists of selecting (using any of the traditional selection techniques [90]) a hyperbox and then randomly selecting an individual within such a hyperbox. The main motivation of this approach is to reduce the computational costs associated with traditional MOEAs (i.e., those based on Pareto ranking).

5. Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D): This approach was proposed by Zhang and Li [221]. The main idea of this algorithm is to decompose a multi-objective optimization problem into several scalar optimization subproblems which are simultaneously optimized. The decomposition process requires the use of weights, but the authors provide a method to generate them. During the optimization of each subproblem, only information from the neighboring subproblems is used, which allows this algorithm to be both effective and efficient. MOEA/D is generally considered one of the most powerful MOEAs currently available, as has been evidenced by several comparative studies.
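As an example of the density estimators used by these algorithms, NSGA-II's crowding distance can be sketched as follows; this is a simplified illustration assuming minimization and a front given as a list of objective vectors:

```python
def crowding_distances(front):
    """NSGA-II crowding distance: for each solution, sum over the
    objectives the normalized distance between its two neighbors when the
    front is sorted by that objective; boundary solutions get infinity."""
    n = len(front)
    dist = [0.0] * n
    num_obj = len(front[0])
    for m in range(num_obj):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary solutions
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        if f_max == f_min:
            continue  # degenerate objective: no spread to normalize by
        for k in range(1, n - 1):
            dist[order[k]] += ((front[order[k + 1]][m] - front[order[k - 1]][m])
                               / (f_max - f_min))
    return dist

print(crowding_distances([(1, 5), (2, 3), (4, 1)]))  # [inf, 2.0, inf]
```

During the crowded comparison, solutions with larger crowding distance (i.e., those in sparser regions) are preferred among solutions of equal nondomination rank.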

Recent Algorithmic Trends

Many other MOEAs have been proposed in the specialized literature (see, e.g., [39, 57, 200, 202, 219]), but they will not be discussed here due to obvious space limitations. A more interesting issue, however, is to try to predict which sort of MOEA will become predominant in the next few years.

Efficiency is, for example, a concern nowadays, and several approaches have been developed in order to improve the efficiency of MOEAs (see, e.g., [113]). Also, the use of fitness approximation, fitness inheritance, surrogates, and other similar techniques has become more common in recent years, which is a clear indication of the more frequent use of MOEAs for the solution of computationally expensive problems (see, e.g., [118, 157, 166, 169, 177, 217, 222]).

However, the most promising research line within algorithmic design seems to be the use of a performance measure in the selection mechanism of a MOEA. This research trend formally started with the Indicator-Based Evolutionary Algorithm (IBEA) [229], although this idea had already been formulated and used (in different ways) by other authors (see, e.g., [106, 120]). Nevertheless, the most representative MOEA within this family is perhaps the S Metric Selection Evolutionary Multi-objective Optimization Algorithm (SMS-EMOA) [19, 67]. SMS-EMOA was originally proposed by Emmerich et al. [67] and is based on NSGA-II. SMS-EMOA creates an initial population and then generates only one solution per iteration, using the operators (crossover and mutation) of NSGA-II. After that, it applies Pareto ranking. When the last front has more than one solution, SMS-EMOA uses the hypervolume contribution to decide which solution will be removed. The hypervolume (also known as the S metric or the Lebesgue measure) of a set of solutions measures the size of the portion of objective space that is dominated by those solutions collectively. This is the only unary performance indicator which is known to be Pareto compliant [232]. Beume et al. [19] proposed not to use the hypervolume contribution when Pareto ranking yields more than one front. In that case, they proposed to use instead the number of solutions that dominate a given solution (the solution dominated by the largest number of solutions is removed). The authors argued that the motivation for using the hypervolume indicator is to improve the distribution of solutions in the nondominated front, and that it is therefore not needed in fronts other than the nondominated front.
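For two objectives, the hypervolume of a nondominated front can be computed with a simple sweep; this is a minimal sketch assuming minimization and a user-supplied reference point (efficient computation in higher dimensions requires far more sophisticated algorithms):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (S metric) of a set of mutually nondominated
    2-objective points with respect to reference point `ref`
    (minimization): the area dominated by the front and bounded by ref,
    computed by sweeping the points sorted on the first objective."""
    pts = sorted(front)  # ascending f1 implies descending f2 on a front
    area = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        area += (ref[0] - f1) * (prev_f2 - f2)  # one rectangular slab
        prev_f2 = f2
    return area

print(hypervolume_2d([(1, 5), (2, 3), (4, 1)], ref=(6, 6)))  # 17.0
```

In an indicator-based MOEA such as SMS-EMOA, each solution's hypervolume contribution is the drop in this value when that solution is removed, and the solution with the smallest contribution is discarded.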


The hypervolume indicator has attracted a lot of attention from researchers due to its interesting theoretical properties. For example, it has been proven that the maximization of the hypervolume is equivalent to finding the Pareto optimal set [73]. Empirical studies have shown that (for a certain number of points previously determined) the maximization of the hypervolume does indeed produce subsets of the Pareto front which are well distributed [67, 120].

However, there are also practical reasons for being interested in indicator-based selection. The main one is that MOEAs such as SMS-EMOA seem to continue working as usual as we increase the number of objectives, as opposed to Pareto-based selection mechanisms, which are known to degrade quickly in problems having more than three objectives (this research area is known as many-objective optimization). Although the reasons for the poor scalability of Pareto-based MOEAs require further study (see, e.g., [182]), the need for scalable selection mechanisms has triggered an important amount of research around indicator-based MOEAs.

The main drawback of adopting the hypervolume in the selection mechanism of a MOEA is its extremely high computational cost. One possible alternative for dealing with this high computational cost is to estimate the hypervolume contribution. However, MOEAs designed around this idea (e.g., HypE [7]) seem to have a poor performance with respect to those that adopt exact hypervolume contributions. An alternative is to rely on other performance indicators. In this regard, several researchers have proposed the design of selection mechanisms based on performance measures such as Δp [168, 183] and R2 [24, 94, 99, 154]. The use of the maximin fitness function [10, 132] is another intriguing alternative, as this expression seems to be equivalent to the use of the ε-indicator [233].
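The degradation of Pareto-based selection with increasing dimensionality can be illustrated directly: in a random population, the fraction of mutually nondominated solutions grows quickly with the number of objectives. A small, purely illustrative experiment (population size and seed are arbitrary choices, not taken from the cited studies):

```python
# Illustration of why Pareto-based selection degrades in many-objective
# problems: the fraction of nondominated points in a random population
# grows rapidly with the number of objectives.
import random

def dominates(a, b):
    """True if a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(pop):
    nd = [p for p in pop if not any(dominates(q, p) for q in pop)]
    return len(nd) / len(pop)

random.seed(1)
for m in (2, 5, 10):                                 # number of objectives
    pop = [tuple(random.random() for _ in range(m)) for _ in range(100)]
    print(m, round(nondominated_fraction(pop), 2))   # fraction typically grows with m
```

With ten objectives, nearly every individual in such a population is nondominated, so Pareto ranking alone can no longer discriminate among solutions.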

Use of Other Metaheuristics

A wide variety of other bio-inspired metaheuristics have become popular in the last few years for solving optimization problems [43]. Multi-objective extensions of many of these metaheuristics already exist [41], but few efforts have been made to actually exploit the main specific features of each of them. There have also been few efforts to understand the types of problems for which each of these metaheuristics is most suitable.

Next, we briefly review three popular bio-inspired metaheuristics that are good candidates for use as multi-objective optimizers, although many other choices also exist (see, e.g., [41]).

Artificial Immune Systems (AIS)

Our natural immune system has provided a fascinating metaphor for developing a new bio-inspired metaheuristic. Indeed, from a computational point of view, our immune system can be considered a highly parallel intelligent system that is able to learn and retrieve previously acquired knowledge (i.e., it has “memory”) when solving highly complex recognition and classification tasks. This motivated the development of the so-called artificial immune systems (AISs) during the early 1990s [50, 51].

AISs were identified in the early 1990s as a useful mechanism to maintain diversity in the context of multimodal optimization [78, 191]. Smith et al. [192] showed that fitness sharing can emerge when their emulation of the immune system is used. Furthermore, the approach that they proposed turns out to be more efficient (computationally speaking) than traditional fitness sharing [53], and it does not require additional information regarding the number of niches to be formed.

Over the years, a wide variety of multi-objective extensions of AISs have been proposed (see, e.g., [27, 32, 38, 79, 130, 155]). However, most of the algorithmic trends in MOEAs have had a delayed arrival in multi-objective AISs. Also, the high potential of multi-objective AISs for pattern recognition and classification tasks has been scarcely exploited.

For more information on multi-objective AISs, the interested reader is referred to [28, 80].

Particle Swarm Optimization (PSO)

This metaheuristic is inspired by the movements of a flock of birds seeking food, and it was originally proposed in the mid-1990s [116]. In the particle swarm optimization algorithm, the behavior of each particle (i.e., individual) is affected by either the best local (within a certain neighborhood) or the best global (i.e., with respect to the entire swarm or population) individual. PSO allows particles to benefit from their past experiences (a mechanism that does not exist in traditional evolutionary algorithms) and uses neighborhood structures that can regulate the behavior of the algorithm.
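The canonical velocity and position update underlying PSO can be sketched as follows (the inertia and learning coefficients are common textbook settings, not values prescribed in this chapter):

```python
# Canonical PSO velocity/position update (single-objective sketch).
# w, c1, c2 are typical textbook settings; multi-objective PSO variants
# mainly differ in how the global leader (gbest) is chosen, e.g., from an
# external archive of nondominated solutions.
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for a single particle (position x, velocity v)."""
    r1, r2 = random.random(), random.random()
    new_v = [w * vi
             + c1 * r1 * (pb - xi)      # attraction toward personal best
             + c2 * r2 * (gb - xi)      # attraction toward swarm/neighborhood best
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

# Minimizing f(x) = sum(x_i^2): the particle drifts toward the best positions found.
random.seed(0)
x, v = [3.0, -2.0], [0.0, 0.0]
x, v = pso_step(x, v, pbest=[1.0, -1.0], gbest=[0.0, 0.0])
```

The `pbest` term encodes the particle's past experience mentioned above, while the `gbest` term implements the social (neighborhood or swarm) influence.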

The similarity of PSO to EAs has made possible the quick development of an important number of multi-objective variants of this metaheuristic (see, e.g., [40, 139, 142, 153, 165, 216]). Unlike AISs, most algorithmic trends adopted with MOEAs have quickly been incorporated into multi-objective particle swarm optimizers (MOPSOs). Nevertheless, few PSO models have been used for multi-objective optimization, and the study of the specific features that could make MOPSOs advantageous over other metaheuristics in some specific classes of problems is still a pending task.

For more information on MOPSOs, the interested reader is referred to [11, 37, 41, 61, 167].

Ant Colony Optimization (ACO)

This metaheuristic was inspired by the behavior observed in colonies of real ants seeking food. Ants deposit a chemical substance on the ground, called pheromone [60], which influences their behavior: they tend to take those paths on which there is a larger amount of pheromone. Therefore, pheromone trails can be seen as an indirect communication mechanism used by the ants (which can be seen as agents that interact to solve complex tasks). This interesting behavior of ants gave rise to a metaheuristic called the ant system, which was originally applied to the traveling salesperson problem. Nowadays, the several variations of this algorithm that have been developed over the years are collectively denominated ant colony optimization (ACO), and they have been applied to a wide variety of domains, including continuous optimization problems.
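The pheromone-biased path choice together with the evaporation and reinforcement cycle at the core of the ant system can be sketched as follows (all parameter values are illustrative assumptions):

```python
# Core mechanics of the ant system: probabilistic path choice biased by
# pheromone, followed by evaporation and reinforcement. Parameter values
# (rho, path qualities) are illustrative assumptions, not from the chapter.
import random

def choose_path(pheromone):
    """Pick a path index with probability proportional to its pheromone level."""
    total = sum(pheromone)
    r, acc = random.random() * total, 0.0
    for i, tau in enumerate(pheromone):
        acc += tau
        if r <= acc:
            return i
    return len(pheromone) - 1

def update_pheromone(pheromone, chosen, quality, rho=0.1):
    """Evaporate all trails, then reinforce the chosen path by its quality."""
    return [(1 - rho) * tau + (quality if i == chosen else 0.0)
            for i, tau in enumerate(pheromone)]

random.seed(4)
pheromone = [1.0, 1.0, 1.0]           # three alternative paths, initially equal
for _ in range(50):
    i = choose_path(pheromone)
    quality = 1.0 if i == 0 else 0.2  # path 0 is "shorter" (better)
    pheromone = update_pheromone(pheromone, i, quality)
# Positive feedback: the better path tends to accumulate the most pheromone.
```

The evaporation factor `rho` is what prevents unbounded accumulation and allows the colony to forget poor early choices.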

Several multi-objective versions of ACO are currently available (see, e.g., [2, 4, 70, 111, 137, 140, 170]). However, the algorithmic trends developed in MOEAs have been slow to be incorporated into multi-objective ACO algorithms. Additionally, the use of alternative ACO models for multi-objective optimization has been relatively scarce.

For more information on multi-objective ACO, the interested reader is referred to [4, 41, 82].

Other Metaheuristics

Many other metaheuristics have been extended to deal with multi-objective optimization problems, including simulated annealing [48, 65, 189, 190, 194], differential evolution [134, 138, 197, 199, 204, 207, 208], tabu search [30, 86, 95, 149], scatter search [12, 15, 16, 158, 160, 211], and artificial bee colony [33, 58, 127, 159], among many others. However, their discussion was omitted due to obvious space constraints.

Some Applications

Multi-objective metaheuristics have been extensively applied to a wide variety of domains. Next, we provide a short list of sample applications classified into three large groups: (1) engineering, (2) industrial, and (3) scientific. Specific areas within each of these large groups are also identified.

By far, engineering applications are the most popular in the current literature on multi-objective metaheuristics. This is not surprising if we consider that engineering disciplines normally have problems with better-understood mathematical models. A representative sample of engineering applications is the following:

• Electrical engineering [161, 176]
• Hydraulic engineering [186, 196]
• Structural engineering [218, 224]
• Aeronautical engineering [6, 172]
• Robotics [115, 220]
• Control [123, 131]
• Telecommunications [105, 141]
• Civil engineering [145, 214]
• Transport engineering [8, 223]

Industrial applications are the second most popular in the literature on multi-objective metaheuristics. A representative sample of industrial applications of multi-objective metaheuristics is the following:

• Design and manufacture [9, 17, 228]
• Scheduling [26, 29]
• Management [34, 69]

Finally, there are several publications devoted to scientific applications. For obvious reasons, computer science applications are the most popular in the literature on multi-objective metaheuristics. A representative sample of scientific applications is the following:

• Chemistry [13, 71, 126, 227]
• Physics [14, 171]
• Medicine [97, 122]
• Computer science [146, 175]

This sample of applications should give at least a rough idea of the increasing interest of researchers in adopting multi-objective metaheuristics in practically all kinds of disciplines.

Some Current Challenges

The existence of challenging, but solvable, problems is a key issue to preserve interest in a research discipline. Although multi-objective optimization using metaheuristics is a discipline in which a very important amount of research has been conducted, mainly within the last 15 years, several interesting problems still remain open. Additionally, the research conducted so far has also led to new and intriguing topics. The following is a small sample of open problems that currently attract a significant amount of research within this area:

• Scalability: Although multi-objective metaheuristics have been commonly used for a wide variety of applications, they have certain limitations. For example, as indicated before, selection mechanisms based on Pareto optimality are known to degrade quickly as we increase the number of objectives. The reason is that, as we increase the number of objectives, unless there is a significant increase in the population size, all the individuals quickly become nondominated, which causes stagnation (i.e., no individual is better than the others, which makes the selection mechanism totally useless). In fact, there is experimental evidence indicating that, when dealing with ten objectives, random sampling performs better than Pareto-based selection [121]. There are alternative ranking techniques that can be used to deal with problems having four or more objectives [83, 129], but it is also possible to use relaxed forms of Pareto dominance [59, 72], dimensionality reduction techniques [23, 128], or indicator-based selection mechanisms [19, 229]. It is worth indicating that scalability in decision variable space (i.e., the capability of multi-objective metaheuristics for dealing with large-scale problems) has been scarcely studied in the specialized literature (see, e.g., [5, 62]).

• Incorporation of user’s preferences: It is normally the case that the user does not need the entire Pareto front of a problem, but only a certain portion of it. For example, solutions lying at the extreme parts of the Pareto front are normally unnecessary, since they represent the best value for one objective but the worst for the others. Thus, if the user has at least a rough idea of the sort of trade-offs that he/she aims to find, it is desirable to be able to explore in more detail only the nondominated solutions within the neighborhood of such trade-offs. This is possible if we use, for example, biased versions of Pareto ranking [47] or some multi-criteria decision-making technique from the many developed in Operations Research [22, 35, 150]. In spite of the importance of preference incorporation in real-world applications, the use of these schemes in multi-objective metaheuristics is still relatively scarce [31, 68, 81, 110].

• Dealing with expensive problems: There is an important number of real-world multi-objective problems for which a single evaluation of an objective function may take several minutes, hours, or even days [6]. Evidently, such problems require special techniques to be solvable using MOEAs [177]. The main approaches that have been developed in this area can be roughly divided into three main groups:

1. Use of parallelism: This is the most obvious approach, given the current access to low-cost parallel architectures (e.g., GPUs [18, 187]). It is worth noting, however, that in spite of the existence of interesting proposals (see, e.g., [3, 136]), basic research in this area has remained scarce, since most publications involving parallel MOEAs focus on specific applications or on parallel extensions of specific MOEAs, but rarely involve basic research.

2. Surrogates: In this case, knowledge of past evaluations of a MOEA is used to build an empirical model that approximates the fitness functions to be optimized. This approximation can then be used to predict promising new solutions at a smaller evaluation cost than that of the original problem [114, 118]. Current functional approximation models include polynomials (response surface methodologies [162]), neural networks (e.g., multilayer perceptrons (MLPs) [102, 108, 156]), radial-basis function (RBF) networks [147, 205, 213], support vector machines (SVMs) [1, 20], Gaussian processes [25, 206], and Kriging [66, 163] models. The use of statistical models developed in the mathematical programming literature in combination with MOEAs is also an interesting alternative (see, e.g., the multi-objective P-algorithm [151, 226]).

3. Fitness approximation: The idea of these approaches is to predict the fitness value of an individual without performing its actual evaluation. Of the several techniques available, fitness inheritance has been the most popular in multi-objective optimization [164], but research in this area has remained scarce.

• Theoretical foundations: Although an important effort has been made in recent years to provide a solid theoretical foundation for this field, a lot of work is still required, and the work done in the Operations Research community can provide some useful hints (see, e.g., [151, 225]). The most relevant theoretical work in this area includes topics such as convergence [174, 209], archiving [180, 181], algorithm complexity [143, 144], and run-time analysis [87, 125].

• Constraint-handling: The use of multi-objective optimization concepts to handle constraints in single-objective evolutionary algorithms has been a relatively popular research area (see, e.g., [84, 98, 133, 195, 210, 215]). In contrast, the design of constraint-handling mechanisms for MOEAs has not been that popular (see, e.g., [92, 96, 104, 112, 148, 212]). This is remarkable, given the importance of constraints in real-world applications, but it may be due to the fact that some modern MOEAs (e.g., NSGA-II) already incorporate a constraint-handling mechanism. Additionally, the use of benchmarks containing constrained test problems (e.g., [55]) has not been very popular in the specialized literature. Most of the current research in this area has focused on extending the Pareto optimality relation in order to incorporate constraints (e.g., giving preference to feasibility over dominance, such that an infeasible solution is discarded even if it is nondominated). Also, penalty functions that “punish” a solution for not being feasible are easy to incorporate into a MOEA [36]. However, topics such as the design of constraint-handling mechanisms for dealing with equality constraints, the design of scalable test functions that incorporate constraints of different types (linear, nonlinear, equality, inequality), and the study of mechanisms that allow an efficient exploration of constrained search spaces in MOPs remain practically unexplored.
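As an illustration, the feasibility-over-dominance rule described above (essentially the constrained-domination principle popularized by NSGA-II) can be sketched as follows, where a solution is represented by a hypothetical (objectives, violation) pair:

```python
# Sketch of the feasibility-over-dominance comparison described above
# (the constrained-domination principle used, e.g., in NSGA-II).
# A solution is an (objectives, constraint_violation) pair; violation 0 = feasible.

def dominates(a, b):
    """Unconstrained Pareto dominance (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_dominates(a, b):
    """a constrained-dominates b if:
    1. a is feasible and b is not, or
    2. both are infeasible and a violates less, or
    3. both are feasible and a Pareto-dominates b."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb > 0:
        return True
    if va > 0 and vb > 0:
        return va < vb
    if va == 0 and vb == 0:
        return dominates(fa, fb)
    return False

feasible = ((2.0, 3.0), 0.0)
infeasible = ((1.0, 1.0), 0.5)   # better objectives, but violates a constraint
assert constrained_dominates(feasible, infeasible)   # feasibility wins
```

Under this rule an infeasible solution is discarded even when its objective vector is nondominated, which is exactly the behavior criticized above for problems where promising regions lie near (or across) constraint boundaries.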

Conclusions

In this chapter, we have provided a short overview of multi-objective optimization using metaheuristics. This overview has included basic concepts related to multi-objective optimization in general, as well as some algorithmic details, with a particular emphasis on multi-objective evolutionary algorithms.

Some of the recent algorithmic trends have also been discussed, and some sample applications have been addressed. In general, breadth has been favored over depth, but a significant number of bibliographic references are provided for those interested in gaining in-depth knowledge of any of the topics discussed herein.


The information provided in this chapter aims to serve as a general overview of the field and to motivate the interest of the reader in pursuing research in this area. As could be seen, several research opportunities are still available for newcomers.

Cross-References

• Ant Systems
• Evolutionary Algorithms
• General Concepts of Metaheuristics
• Genetic Algorithms
• Implementation of Metaheuristics
• Metaheuristics
• Particle Swarm Methods
• Theoretical Analysis of Stochastic Search Algorithms

Acknowledgements The author acknowledges support from CONACYT project no. 221551.

References

1. Abboud K, Schoenauer M (2002) Surrogate deterministic mutation. In: Collet P, Fonlupt C, Hao J-K, Lutton E, Schoenauer M (eds) Artificial evolution, 5th international conference, evolution artificielle, EA 2001. Lecture notes in computer science, vol 2310. Springer, Le Creusot, pp 103–115

2. Akay B (2013) Synchronous and asynchronous Pareto-based multi-objective Artificial Bee Colony algorithms. J Glob Optim 57(2):415–445

3. Alba E, Luque G, Nesmachnow S (2013) Parallel metaheuristics: recent advances and new trends. Int Trans Oper Res 20(1):1–48

4. Angus D, Woodward C (2009) Multiple objective ant colony optimisation. Swarm Intell 3(1):69–85

5. Antonio LM, Coello Coello CA (2013) Use of cooperative coevolution for solving large scale multiobjective optimization problems. In: 2013 IEEE congress on evolutionary computation (CEC’2013), Cancún. IEEE Press, pp 2758–2765. ISBN:978-1-4799-0454-9

6. Arias-Montaño A, Coello Coello CA, Mezura-Montes E (2012) Multi-objective evolutionary algorithms in aeronautical and aerospace engineering. IEEE Trans Evol Comput 16(5):662–694

7. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76. Spring

8. Bai Q, Labi S, Sinha KC (2012) Trade-off analysis for multiobjective optimization in transportation asset management by generating Pareto frontiers using extreme points non-dominated sorting genetic algorithm II. J Trans Eng-ASCE 138(6):798–808

9. Balesdent M, Berend N, Depince P, Chriette A (2012) A survey of multidisciplinary design optimization methods in launch vehicle design. Struct Multidiscip Optim 45(5):619–642

10. Balling R, Wilson S (2001) The maximin fitness function for multi-objective evolutionary computation: application to city planning. In: Spector L, Goodman ED, Wu A, Langdon WB, Voigt H-M, Gen M, Sen S, Dorigo M, Pezeshk S, Garzon MH, Burke E (eds) Proceedings of the genetic and evolutionary computation conference (GECCO’2001), San Francisco. Morgan Kaufmann Publishers, pp 1079–1084


11. Banks A, Vincent J, Anyakoha C (2008) A review of particle swarm optimization. II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications. Nat Comput 7(1):109–124. Unconventional Computation 2006, Selected Papers

12. Baños R, Gil C, Reca J, Martínez J (2009) Implementation of scatter search for multi-objective optimization: a comparative study. Comput Optim Appl 42(3):421–441

13. Baronas R, Žilinskas A, Litvinas L (2016) Optimal design of amperometric biosensors applying multi-objective optimization and decision visualization. Electrochim Acta 211:586–594

14. Bartolini R, Apollonio M, Martin IPS (2012) Multi-objective genetic algorithm optimization of the beam dynamics in linac drivers for free electron lasers. Phys Rev Spec Top Accel Beams 15(3). Article number:030701

15. Beausoleil RP (2006) “MOSS” multiobjective scatter search applied to non-linear multiple criteria optimization. Eur J Oper Res 169(2):426–449

16. Beausoleil RP (2008) “MOSS-II” Tabu/Scatter search for nonlinear multiobjective optimization. In: Siarry P, Michalewicz Z (eds) Advances in metaheuristic methods for hard optimization. Springer, Berlin, pp 39–67. ISBN:978-3-540-72959-4

17. Benyoucef L, Xie X (2011) Supply chain design using simulation-based NSGA-II approach. In: Wang L, Ng AHC, Deb K (eds) Multi-objective evolutionary optimisation for product design and manufacturing. Springer, London, pp 455–491. ISBN:978-0-85729-617-7. Chapter 17

18. Bernardes de Oliveira F, Davendra D, Gadelha Guimarães F (2013) Multi-objective differential evolution on the GPU with C-CUDA. In: Snášel V, Abraham A, Corchado ES (eds) Soft computing models in industrial and environmental applications, 7th international conference (SOCO’12). Advances in intelligent systems and computing, vol 188. Springer, Ostrava, pp 123–132

19. Beume N, Naujoks B, Emmerich M (2007) SMS-EMOA: multiobjective selection based on dominated hypervolume. Eur J Oper Res 181(3):1653–1669

20. Bhattacharya M, Lu G (2003) A dynamic approximate fitness based hybrid EA for optimization problems. In: Proceedings of IEEE congress on evolutionary computation, pp 1879–1886

21. Branke J (2002) Evolutionary optimization in dynamic environments. Kluwer Academic Publishers, Boston. ISBN:0-7923-7631-5

22. Branke J (2008) Consideration of partial user preferences in evolutionary multiobjective optimization. In: Branke J, Deb K, Miettinen K, Slowinski R (eds) Multiobjective optimization. Interactive and evolutionary approaches. Lecture notes in computer science, vol 5252. Springer, Berlin, pp 157–178

23. Brockhoff D, Friedrich T, Hebbinghaus N, Klein C, Neumann F, Zitzler E (2007) Do additional objectives make a problem harder? In: Thierens D (ed) 2007 genetic and evolutionary computation conference (GECCO’2007), vol 1. ACM Press, London, pp 765–772

24. Brockhoff D, Wagner T, Trautmann H (2012) On the properties of the R2 indicator. In: 2012 genetic and evolutionary computation conference (GECCO’2012). ACM Press, Philadelphia, pp 465–472. ISBN:978-1-4503-1177-9

25. Bueche D, Schraudolph NN, Koumoutsakos P (2005) Accelerating evolutionary algorithms with Gaussian process fitness function models. IEEE Trans Syst Man Cybern Part C 35(2):183–194

26. Burke EK, Li J, Qu R (2012) A Pareto-based search methodology for multi-objective nurse scheduling. Ann Oper Res 196(1):91–109

27. Campelo F, Guimarães FG, Saldanha RR, Igarashi H, Noguchi S, Lowther DA, Ramirez JA (2004) A novel multiobjective immune algorithm using nondominated sorting. In: 11th international IGTE symposium on numerical field calculation in electrical engineering, Seggauberg

28. Campelo F, Guimarães FG, Igarashi H (2007) Overview of artificial immune systems for multi-objective optimization. In: Obayashi S, Deb K, Poloni C, Hiroyasu T, Murata T (eds) Evolutionary multi-criterion optimization, 4th international conference (EMO 2007), Matshushima. Lecture notes in computer science, vol 4403. Springer, pp 937–951


29. Campos SC, Arroyo JEC (2014) NSGA-II with iterated greedy for a bi-objective three-stage assembly flowshop scheduling problem. In: 2014 genetic and evolutionary computation conference (GECCO 2014), Vancouver. ACM Press, pp 429–436. ISBN:978-1-4503-2662-9

30. Carcangiu S, Fanni A, Montisci A (2008) Multiobjective Tabu search algorithms for optimal design of electromagnetic devices. IEEE Trans Magn 44(6):970–973

31. Carrese R, Winarto H, Li X, Sobester A, Ebenezer S (2012) A comprehensive preference-based optimization framework with application to high-lift aerodynamic design. Eng Optim 44(10):1209–1227

32. Chang Y-C (2012) Multi-objective optimal SVC installation for power system loading margin improvement. IEEE Trans Power Syst 27(2):984–992

33. Chaves-Gonzalez JM, Vega-Rodriguez MA, Granado-Criado JM (2013) A multiobjective swarm intelligence approach based on artificial bee colony for reliable DNA sequence design. Eng Appl Artif Intel 26(9):2045–2057

34. Chikumbo O, Goodman E, Deb K (2012) Approximating a multi-dimensional Pareto front for a land use management problem: a modified MOEA with an epigenetic silencing metaphor. In: 2012 IEEE congress on evolutionary computation (CEC’2012), Brisbane. IEEE Press, pp 480–488

35. Coello Coello CA (2000) Constraint-handling using an evolutionary multiobjective optimization technique. Civ Eng Environ Syst 17:319–346

36. Coello Coello CA (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191(11–12):1245–1287

37. Coello Coello CA (2011) An introduction to multi-objective particle swarm optimizers. In: Gaspar-Cunha A, Takahashi R, Schaefer G, Costa L (eds) Soft computing in industrial applications. Advances in intelligent and soft computing series, vol 96. Springer, Berlin, pp 3–12. ISBN:978-3-642-20504-0

38. Coello Coello CA, Cruz Cortés N (2005) Solving multiobjective optimization problems using an artificial immune system. Genet Program Evolvable Mach 6(2):163–190

39. Coello Coello CA, Toscano Pulido G (2001) Multiobjective optimization using a micro-genetic algorithm. In: Spector L, Goodman ED, Wu A, Langdon WB, Voigt H-M, Gen M, Sen S, Dorigo M, Pezeshk S, Garzon MH, Burke E (eds) Proceedings of the genetic and evolutionary computation conference (GECCO’2001), San Francisco. Morgan Kaufmann Publishers, pp 274–282

40. Coello Coello CA, Toscano Pulido G, Salazar Lechuga M (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8(3):256–279

41. Coello Coello CA, Lamont GB, Van Veldhuizen DA (2007) Evolutionary algorithms for solving multi-objective problems, 2nd edn. Springer, New York. ISBN:978-0-387-33254-3

42. Collette Y, Siarry P (2003) Multiobjective optimization. Principles and case studies. Springer, Berlin. ISBN:3-540-40182-2

43. Corne D, Glover F, Dorigo M (eds) (1999) New ideas in optimization. McGraw-Hill, Berkshire. ISBN:007-709506-5

44. Corne DW, Knowles JD, Oates MJ (2000) The Pareto envelope-based selection algorithm for multiobjective optimization. In: Schoenauer M, Deb K, Rudolph G, Yao X, Lutton E, Merelo JJ, Schwefel H-P (eds) Proceedings of the parallel problem solving from nature VI conference, Paris. Lecture notes in computer science, vol 1917. Springer, pp 839–848

45. Corne DW, Jerram NR, Knowles JD, Oates MJ (2001) PESA-II: region-based selection in evolutionary multiobjective optimization. In: Spector L, Goodman ED, Wu A, Langdon WB, Voigt H-M, Gen M, Sen S, Dorigo M, Pezeshk S, Garzon MH, Burke E (eds) Proceedings of the genetic and evolutionary computation conference (GECCO’2001), San Francisco. Morgan Kaufmann Publishers, pp 283–290

46. Cui X, Li M, Fang T (2001) Study of population diversity of multiobjective evolutionary algorithm based on immune and entropy principles. In: Proceedings of the congress on evolutionary computation 2001 (CEC’2001), Piscataway, vol 2. IEEE Service Center, pp 1316–1321


47. Cvetkovic D, Parmee IC (2002) Preferences and their application in evolutionary multiobjective optimisation. IEEE Trans Evol Comput 6(1):42–57

48. Czyzak P, Jaszkiewicz A (1998) Pareto simulated annealing—a metaheuristic technique for multiple-objective combinatorial optimization. J Multi-Criteria Decis Anal 7:34–47

49. Das I, Dennis J (1997) A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Struct Optim 14(1):63–69

50. Dasgupta D (ed) (1999) Artificial immune systems and their applications. Springer, Berlin

51. de Castro LN, Timmis J (2002) An introduction to artificial immune systems: a new computational intelligence paradigm. Springer, London. ISBN:1-85233-594-7

52. Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, Chichester. ISBN:0-471-87339-X

53. Deb K, Goldberg DE (1989) An investigation of niche and species formation in genetic function optimization. In: Schaffer JD (ed) Proceedings of the third international conference on genetic algorithms, San Mateo. George Mason University, Morgan Kaufmann Publishers, pp 42–50

54. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601

55. Deb K, Pratap A, Meyarivan T (2001) Constrained test problems for multi-objective evolutionary optimization. In: Zitzler E, Deb K, Thiele L, Coello Coello CA, Corne D (eds) First international conference on evolutionary multi-criterion optimization. Lecture notes in computer science, vol 1993. Springer, pp 284–298

56. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197

57. Deb K, Mohan M, Mishra S (2005) Evaluating the ε-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evol Comput 13(4):501–525. Winter

58. Dhouib S, Dhouib S, Chabchoub H (2013) Artificial bee colony metaheuristic to find Pareto optimal solutions set for engineering design problems. In: 2013 5th international conference on modeling, simulation and applied optimization (ICMSAO), Hammamet. IEEE Press. ISBN:978-1-4673-5812-5

59. di Pierro F, Khu S-T, Savic DA (2007) An investigation on preference order ranking scheme for multiobjective evolutionary optimization. IEEE Trans Evol Comput 11(1):17–45

60. Dorigo M, Stützle T (2004) Ant colony optimization. MIT Press, Cambridge. ISBN:0-262-04219-3

61. Durillo JJ, García-Nieto J, Nebro AJ, Coello Coello CA, Luna F, Alba E (2009) Multi-objective particle swarm optimizers: an experimental comparison. In: Ehrgott M, Fonseca CM, Gandibleux X, Hao J-K, Sevaux M (eds) Evolutionary multi-criterion optimization. 5th international conference (EMO 2009). Lecture notes in computer science, vol 5467. Springer, Nantes, pp 495–509

62. Durillo JJ, Nebro AJ, Coello Coello CA, Garcia-Nieto J, Luna F, Alba E (2010) A study of multiobjective metaheuristics when solving parameter scalable problems. IEEE Trans Evol Comput 14(4):618–635

63. Edgeworth FY (1881) Mathematical psychics. P. Keagan, London

64. Eiben AE, Smith JE (2003) Introduction to evolutionary computing. Springer, Berlin. ISBN:3-540-40184-9

65. Ekbal A, Saha S (2013) Combining feature selection and classifier ensemble using a multiobjective simulated annealing approach: application to named entity recognition. Soft Comput 17(1):1–16

66. Emmerich M, Giotis A, Özdemir M, Bäck T, Giannakoglou K (2002) Metamodel-assisted evolution strategies. In: Merelo Guervós JJ, Adamidis P, Beyer H-G, Fernández-Villacañas J-L, Schwefel H-P (eds) Parallel problem solving from nature—PPSN VII, Granada. Lecture notes in computer science, vol 2439. Springer, pp 371–380

67. Emmerich M, Beume N, Naujoks B (2005) An EMO algorithm using the hypervolume measure as selection criterion. In: Coello Coello CA, Hernández Aguirre A, Zitzler E (eds) Evolutionary multi-criterion optimization. Third international conference (EMO 2005), Guanajuato. Lecture notes in computer science, vol 3410. Springer, pp 62–76

68. Eppe S, López-Ibáñez M, Stützle T, De Smet Y (2011) An experimental study of preference model integration into multi-objective optimization heuristics. In: 2011 IEEE congress on evolutionary computation (CEC'2011), New Orleans. IEEE Service Center, pp 2751–2758

69. Esparcia-Alcazar AI, Martínez-García A, García-Sánchez P, Merelo JJ, Mora AM (2013) Towards a multiobjective evolutionary approach to inventory and routing management in a retail chain. In: 2013 IEEE congress on evolutionary computation (CEC'2013), Cancún. IEEE Press, pp 3166–3173. ISBN:978-1-4799-0454-9

70. Falcon-Cardona JG, Coello Coello CA (2017) A new indicator-based many-objective ant colony optimizer for continuous search spaces. Swarm Intell 11(1):71–100

71. Fang G, Xue M, Su M, Hu D, Li Y, Xiong B, Ma L, Meng T, Chen Y, Li J, Li J, Shen J (2012) CCLab-a multi-objective genetic algorithm based combinatorial library design software and an application for histone deacetylase inhibitor design. Bioorg Med Chem Lett 22(14):4540–4545

72. Farina M, Amato P (2004) A fuzzy definition of "optimality" for many-criteria optimization problems. IEEE Trans Syst Man Cybern Part A Syst Hum 34(3):315–326

73. Fleischer M (2003) The measure of Pareto optima. Applications to multi-objective metaheuristics. In: Fonseca CM, Fleming PJ, Zitzler E, Deb K, Thiele L (eds) Evolutionary multi-criterion optimization. Second international conference (EMO 2003), Faro. Lecture notes in computer science, vol 2632. Springer, pp 519–533

74. Fogel LJ (1966) Artificial intelligence through simulated evolution. John Wiley, New York

75. Fogel DB (1995) Evolutionary computation. Toward a new philosophy of machine intelligence. The Institute of Electrical and Electronic Engineers, New York

76. Fogel LJ (1999) Artificial intelligence through simulated evolution. Forty years of evolutionary programming. Wiley, New York

77. Fonseca CM, Fleming PJ (1993) Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Forrest S (ed) Proceedings of the fifth international conference on genetic algorithms, San Mateo. University of Illinois at Urbana-Champaign, Morgan Kaufmann Publishers, pp 416–423

78. Forrest S, Perelson AS (1991) Genetic algorithms and the immune system. In: Schwefel H-P, Männer R (eds) Parallel problem solving from nature. Lecture notes in computer science. Springer, Berlin, pp 320–325

79. Freschi F, Repetto M (2006) VIS: an artificial immune network for multi-objective optimization. Eng Optim 38(8):975–996

80. Freschi F, Coello Coello CA, Repetto M (2009) Multiobjective optimization and artificial immune systems: a review. In: Mo H (ed) Handbook of research on artificial immune systems and natural computing: applying complex adaptive technologies. Medical Information Science Reference, Hershey/New York, pp 1–21. ISBN:978-1-60566-310-4

81. Friedrich T, Kroeger T, Neumann F (2011) Weighted preferences in evolutionary multi-objective optimization. In: Wang D, Reynolds M (eds) AI 2011: advances in artificial intelligence, 24th Australasian joint conference, Perth. Lecture notes in computer science, vol 7106. Springer, pp 291–300

82. García-Martínez C, Cordón O, Herrera F (2007) A taxonomy and an empirical analysis of multiple objective ant colony optimization algorithms for the bi-criteria TSP. Eur J Oper Res 180(1):116–148

83. Garza Fabre M, Toscano Pulido G, Coello Coello CA (2009) Ranking methods for many-objective problems. In: Aguirre AH, Borja RM, García CAR (eds) MICAI 2009: advances in artificial intelligence. 8th Mexican international conference on artificial intelligence, Guanajuato. Lecture notes in artificial intelligence, vol 5845. Springer, pp 633–645

84. Garza-Fabre M, Rodriguez-Tello E, Toscano-Pulido G (2015) Constraint-handling through multi-objective optimization: the hydrophobic-polar model for protein structure prediction. Comput Oper Res 53:128–153

85. Gen M, Cheng R (2000) Genetic algorithms and engineering optimization. Wiley series in engineering design and automation. Wiley, New York

86. Ghisu T, Parks GT, Jaeggi DM, Jarrett JP, Clarkson PJ (2010) The benefits of adaptive parametrization in multi-objective Tabu search optimization. Eng Optim 42(10):959–981

87. Giel O (2003) Expected runtimes of a simple multi-objective evolutionary algorithm. In: Proceedings of the 2003 congress on evolutionary computation (CEC'2003), vol 3, Canberra. IEEE Press, pp 1918–1925

88. Glover F, Kochenberger GA (eds) (2003) Handbook of metaheuristics. Kluwer Academic Publishers, Boston. ISBN:1-4020-7263-5

89. Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning. Addison-Wesley Publishing Company, Reading

90. Goldberg DE, Deb K (1991) A comparison of selection schemes used in genetic algorithms. In: Rawlins GJE (ed) Foundations of genetic algorithms. Morgan Kaufmann, San Mateo, pp 69–93

91. Goldberg DE, Richardson J (1987) Genetic algorithms with sharing for multimodal function optimization. In: Grefenstette JJ (ed) Genetic algorithms and their applications: proceedings of the second international conference on genetic algorithms, Hillsdale. Lawrence Erlbaum, pp 41–49

92. Gupta H, Deb K (2005) Handling constraints in robust multi-objective optimization. In: 2005 IEEE congress on evolutionary computation (CEC'2005), vol 1, Edinburgh. IEEE Service Center, pp 25–32

93. Hajela P, Lin CY (1992) Genetic search strategies in multicriterion optimal design. Struct Optim 4:99–107

94. Hansen MP (1998) Metaheuristics for multiple objective combinatorial optimization. PhD thesis, Institute of Mathematical Modelling, Technical University of Denmark

95. Hansen MP (2000) Tabu search for multiobjective combinatorial optimization: TAMOCO. Control Cybern 29(3):799–818

96. Harada K, Sakuma J, Ono I, Kobayashi S (2007) Constraint-handling method for multi-objective function optimization: Pareto descent repair operator. In: Obayashi S, Deb K, Poloni C, Hiroyasu T, Murata T (eds) Evolutionary multi-criterion optimization, 4th international conference (EMO 2007), Matsushima. Lecture notes in computer science, vol 4403. Springer, pp 156–170

97. Heris SMK, Khaloozadeh H (2011) Open- and closed-loop multiobjective optimal strategies for HIV therapy using NSGA-II. IEEE Trans Biomed Eng 58(6):1678–1685

98. Hernández Aguirre A, Botello Rionda S, Lizárraga Lizárraga G, Coello Coello C (2004) IS-PAES: multiobjective optimization with efficient constraint handling. In: Burczynski T, Osyczka A (eds) IUTAM symposium on evolutionary methods in mechanics. Kluwer Academic Publishers, Dordrecht/Boston/London, pp 111–120. ISBN:1-4020-2266-2

99. Hernández Gómez R, Coello Coello CA (2013) MOMBI: a new metaheuristic for many-objective optimization based on the R2 indicator. In: 2013 IEEE congress on evolutionary computation (CEC'2013), Cancún. IEEE Press, pp 2488–2495. ISBN:978-1-4799-0454-9

100. Hernández Gómez R, Coello Coello CA, Alba Torres E (2016) A multi-objective evolutionary algorithm based on parallel coordinates. In: 2016 genetic and evolutionary computation conference (GECCO'2016), Denver. ACM Press, pp 565–572. ISBN:978-1-4503-4206-3

101. Holland JH (1962) Concerning efficient adaptive systems. In: Yovits MC, Jacobi GT, Goldstein GD (eds) Self-organizing systems—1962. Spartan Books, Washington, DC, pp 215–230

102. Hong Y-S, Lee H, Tahk M-J (2003) Acceleration of the convergence speed of evolutionary algorithms using multi-layer neural networks. Eng Optim 35(1):91–102

103. Horn J, Nafpliotis N, Goldberg DE (1994) A niched Pareto genetic algorithm for multiobjective optimization. In: Proceedings of the first IEEE conference on evolutionary computation, IEEE world congress on computational intelligence, Piscataway, vol 1. IEEE Service Center, pp 82–87

104. Hsieh M-N, Chiang T-C, Fu L-C (2011) A hybrid constraint handling mechanism with differential evolution for constrained multiobjective optimization. In: 2011 IEEE congress on evolutionary computation (CEC'2011), New Orleans. IEEE Service Center, pp 1785–1792

105. Huang B, Buckley B, Kechadi TM (2010) Multi-objective feature selection by using NSGA-II for customer churn prediction in telecommunications. Expert Syst Appl 37(5):3638–3646

106. Huband S, Hingston P, White L, Barone L (2003) An evolution strategy with probabilistic mutation for multi-objective optimisation. In: Proceedings of the 2003 congress on evolutionary computation (CEC'2003), Canberra, vol 3. IEEE Press, pp 2284–2291

107. Husbands P (1994) Distributed coevolutionary genetic algorithms for multi-criteria and multi-constraint optimisation. In: Fogarty TC (ed) Evolutionary computing. AISB workshop. Selected papers. Lecture notes in computer science, vol 865. Springer, pp 150–165

108. Hüsken M, Jin Y, Sendhoff B (2005) Structure optimization of neural networks for aerodynamic optimization. Soft Comput 9(1):21–28

109. Ibaraki T, Nonobe K, Yagiura M (eds) (2005) Metaheuristics. Progress as real problem solvers. Springer, New York. ISBN:978-0-387-25382-4

110. Iordache R, Iordache S, Moldoveanu F (2014) A framework for the study of preference incorporation in multiobjective evolutionary algorithms. In: 2014 genetic and evolutionary computation conference (GECCO 2014), Vancouver. ACM Press, pp 621–628. ISBN:978-1-4503-2662-9

111. Iredi S, Merkle D, Middendorf M (2001) Bi-criterion optimization with multi colony ant algorithms. In: Zitzler E, Deb K, Thiele L, Coello Coello CA, Corne D (eds) First international conference on evolutionary multi-criterion optimization. Lecture notes in computer science, vol 1993. Springer, pp 359–372

112. Jain H, Deb K (2014) An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE Trans Evol Comput 18(4):602–622

113. Jensen MT (2003) Reducing the run-time complexity of multiobjective EAs: the NSGA-II and other algorithms. IEEE Trans Evol Comput 7(5):503–515

114. Jin Y, Sendhoff B, Körner E (2005) Evolutionary multi-objective optimization for simultaneous generation of signal-type and symbol-type representations. In: Coello Coello CA, Hernández Aguirre A, Zitzler E (eds) Evolutionary multi-criterion optimization. Third international conference, EMO 2005, Guanajuato. Lecture notes in computer science, vol 3410. Springer, pp 752–766

115. Kelaiaia R, Zaatri A, Company O (2012) Multiobjective optimization of 6-dof UPS parallel manipulators. Adv Robot 26(16):1885–1913

116. Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kaufmann Publishers, San Francisco

117. Kita H, Yabumoto Y, Mori N, Nishikawa Y (1996) Multi-objective optimization by means of the thermodynamical genetic algorithm. In: Voigt H-M, Ebeling W, Rechenberg I, Schwefel H-P (eds) Parallel problem solving from nature—PPSN IV. Lecture notes in computer science. Springer, Berlin, pp 504–512

118. Knowles J (2006) ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Trans Evol Comput 10(1):50–66

119. Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8(2):149–172

120. Knowles J, Corne D (2003) Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Trans Evol Comput 7(2):100–116

121. Knowles J, Corne D (2007) Quantifying the effects of objective space dimension in evolutionary multiobjective optimization. In: Obayashi S, Deb K, Poloni C, Hiroyasu T, Murata T (eds) Evolutionary multi-criterion optimization, 4th international conference (EMO 2007), Matsushima. Lecture notes in computer science, vol 4403. Springer, pp 757–771

122. Lahsasna A, Ainon RN, Zainuddin R, Bulgiba A (2012) Design of a fuzzy-based decision support system for coronary heart disease diagnosis. J Med Syst 36(5):3293–3306

123. Larzabal E, Cubillos JA, Larrea M, Irigoyen E, Valera JJ (2012) Soft computing testing in real industrial platforms for process intelligent control. In: Snášel V, Abraham A, Corchado ES (eds) Soft computing models in industrial and environmental applications, 7th international conference (SOCO'12), Ostrava. Advances in intelligent systems and computing, vol 188. Springer, pp 221–230

124. Laumanns M, Thiele L, Deb K, Zitzler E (2002) Combining convergence and diversity in evolutionary multi-objective optimization. Evol Comput 10(3):263–282. Fall

125. Laumanns M, Thiele L, Zitzler E (2004) Running time analysis of multiobjective evolutionary algorithms on pseudo-Boolean functions. IEEE Trans Evol Comput 8(2):170–182

126. Levene C, Correa E, Blanch EW, Goodacre R (2012) Enhancing surface-enhanced Raman scattering (SERS) detection of propranolol with multiobjective evolutionary optimization. Anal Chem 84(18):7899–7905

127. Li J-Q, Pan Q-K, Gao K-Z (2011) Pareto-based discrete artificial bee colony algorithm for multi-objective flexible job shop scheduling problems. Int J Adv Manuf Technol 55(9–12):1159–1169

128. López Jaimes A, Coello Coello CA, Chakraborty D (2008) Objective reduction using a feature selection technique. In: 2008 genetic and evolutionary computation conference (GECCO'2008), Atlanta. ACM Press, pp 674–680. ISBN:978-1-60558-131-6

129. López Jaimes A, Santana Quintero LV, Coello Coello CA (2009) Ranking methods in many-objective evolutionary algorithms. In: Chiong R (ed) Nature-inspired algorithms for optimisation. Springer, Berlin, pp 413–434. ISBN:978-3-642-00266-3

130. Luh G-C, Chueh C-H, Liu W-W (2003) MOIA: multi-objective immune algorithm. Eng Optim 35(2):143–164

131. Mahmoodabadi MJ, Arabani Mostaghim S, Bagheri A, Nariman-zadeh N (2013) Pareto optimal design of the decoupled sliding mode controller for an inverted pendulum system and its stability simulation via Java programming. Math Comput Model 57(5–6):1070–1082

132. Menchaca-Mendez A, Coello Coello CA (2013) Selection operators based on maximin fitness function for multi-objective evolutionary algorithms. In: Purshouse RC, Fleming PJ, Fonseca CM, Greco S, Shaw J (eds) Evolutionary multi-criterion optimization, 7th international conference (EMO 2013), Sheffield. Lecture notes in computer science, vol 7811. Springer, pp 215–229

133. Mezura-Montes E, Coello Coello CA (2008) Constrained optimization via multiobjective evolutionary algorithms. In: Knowles J, Corne D, Deb K (eds) Multi-objective problem solving from nature: from concepts to applications. Springer, Berlin, pp 53–75. ISBN:978-3-540-72963-1

134. Mezura-Montes E, Reyes-Sierra M, Coello Coello CA (2008) Multi-objective optimization using differential evolution: a survey of the state-of-the-art. In: Chakraborty UK (ed) Advances in differential evolution. Springer, Berlin, pp 173–196. ISBN:978-3-540-68827-3

135. Miettinen KM (1999) Nonlinear multiobjective optimization. Kluwer Academic Publishers, Boston

136. Mishra BSP, Dehuri S, Mall R, Ghosh A (2011) Parallel single and multiple objectives genetic algorithms: a survey. Int J Appl Evol Comput 2(2):21–57

137. Moncayo-Martinez LA, Zhang DZ (2011) Multi-objective ant colony optimisation: a meta-heuristic approach to supply chain design. Int J Prod Econ 131(1):407–420

138. Montaño AA, Coello Coello CA, Mezura-Montes E (2010) MODE-LD+SS: a novel differential evolution algorithm incorporating local dominance and scalar selection mechanisms for multi-objective optimization. In: 2010 IEEE congress on evolutionary computation (CEC'2010), Barcelona. IEEE Press, pp 3284–3291

139. Moore J, Chapman R, Dozier G (2000) Multiobjective particle swarm optimization. In: Turner AJ (ed) Proceedings of the 38th annual southeast regional conference, Clemson. ACM Press, pp 56–57

140. Mora AM, Garcia-Sanchez P, Merelo JJ, Castillo PA (2013) Pareto-based multi-colony multi-objective ant colony optimization algorithms: an island model proposal. Soft Comput 17(7):1175–1207

141. Narayanan L, Subramanian B, Arokiaswami A, Iruthayarajan MW (2012) Optimal placement of mobile antenna in an urban area using evolutionary multiobjective optimization. Microw Opt Technol Lett 54(3):737–743

142. Nebro AJ, Durillo JJ, Garcia-Nieto J, Coello Coello CA, Luna F, Alba E (2009) SMPSO: a new PSO-based metaheuristic for multi-objective optimization. In: 2009 IEEE symposium on computational intelligence in multi-criteria decision-making (MCDM'2009), Nashville. IEEE Press, pp 66–73. ISBN:978-1-4244-2764-2

143. Neumann F (2007) Expected runtimes of a simple evolutionary algorithm for the multi-objective minimum spanning tree problem. Eur J Oper Res 181(3):1620–1629

144. Neumann F (2012) Computational complexity analysis of multi-objective genetic programming. In: 2012 genetic and evolutionary computation conference (GECCO'2012), Philadelphia. ACM Press, pp 799–806. ISBN:978-1-4503-1177-9

145. Ning X, Lam KC (2013) Cost-safety trade-off in unequal-area construction site layout planning. Autom Constr 32:96–103

146. Olmo JL, Romero JR, Ventura S (2012) Classification rule mining using ant programming guided by grammar with multiple Pareto fronts. Soft Comput 16(12):2143–2163

147. Ong YS, Nair PB, Keane AJ, Wong KW (2004) Surrogate-assisted evolutionary optimization frameworks for high-fidelity engineering design problems. In: Jin Y (ed) Knowledge incorporation in evolutionary computation. Studies in fuzziness and soft computing. Springer, Berlin, pp 307–332

148. Oyama A, Shimoyama K, Fujii K (2007) New constraint-handling method for multi-objective and multi-constraint evolutionary optimization. Trans Jpn Soc Aeronaut Space Sci 50(167):56–62

149. Pacheco J, Marti R (2006) Tabu search for a multi-objective routing problem. J Oper Res Soc 57(1):29–37

150. Pardalos PM, Siskos Y, Zopounidis C (eds) (1995) Advances in multicriteria analysis. Springer Science+Business Media, B.V. ISBN:978-1-4419-4748-2

151. Pardalos PM, Žilinskas A, Žilinskas J (2017) Non-convex multi-objective optimization. Springer, Cham. ISBN:978-3-319-61005-4

152. Pareto V (1896) Cours D'Economie Politique, vol I and II. F. Rouge, Lausanne

153. Parsopoulos KE, Tasoulis DK, Pavlidis NG, Plagianakos VP, Vrahatis MN (2004) Vector evaluated differential evolution for multiobjective optimization. In: 2004 congress on evolutionary computation (CEC'2004), Portland, vol 1. IEEE Service Center, pp 204–211

154. Phan DH, Suzuki J (2013) R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization. In: 2013 IEEE congress on evolutionary computation (CEC'2013), Cancún. IEEE Press, pp 1836–1845. ISBN:978-1-4799-0454-9

155. Pierrard T, Coello Coello CA (2012) A multi-objective artificial immune system based on hypervolume. In: Coello Coello CA, Greensmith J, Krasnogor N, Liò P, Nicosia G, Pavone M (eds) Artificial immune systems, 11th international conference (ICARIS 2012), Taormina. Lecture notes in computer science, vol 7597. Springer, pp 14–27. ISBN:978-3-642-33756-7

156. Pierret S (1999) Turbomachinery blade design using a Navier-Stokes solver and artificial neural network. ASME J Turbomach 121(3):326–332

157. Pilato C, Loiacono D, Tumeo A, Ferrandi F, Lanzi PL, Sciuto D (2010) Speeding-up expensive evaluations in high-level synthesis using solution modeling and fitness inheritance. In: Tenne Y, Goh C-K (eds) Computational intelligence in expensive optimization problems. Springer, Berlin, pp 701–723. ISBN:978-3-642-10700-9

158. Rahimi-Vahed AR, Javadi B, Rabbani M, Tavakkoli-Moghaddam R (2008) A multi-objective scatter search for a bi-criteria no-wait flow shop scheduling problem. Eng Optim 40(4):331–346

159. Rakshit P, Konar A, Nagar AK (2014) Artificial bee colony induced multi-objective optimization in presence of noise. In: 2014 IEEE congress on evolutionary computation (CEC'2014), Beijing. IEEE Press, pp 3176–3183. ISBN:978-1-4799-1488-3

160. Rao ARM, Lakshmi K (2008) Multi-objective scatter search algorithm for combinatorial optimisation. In: Thulasiram R (ed) ADCOM: 2008 16th international conference on advanced computing and communications, Chennai. IEEE Press, pp 303–308. ISBN:978-1-4244-2962-2

161. Rao BS, Vaisakh K (2013) Multi-objective adaptive clonal selection algorithm for solving environmental/economic dispatch and OPF problems with load uncertainty. Int J Electr Power Energy Syst 53:390–408

162. Rasheed K, Ni X, Vattam S (2005) Comparison of methods for developing dynamic reduced models for design optimization. Soft Comput 9(1):29–37

163. Ratle A (1998) Accelerating the convergence of evolutionary algorithms by fitness landscape approximation. In: Eiben AE, Bäck T, Schoenauer M, Schwefel H-P (eds) Parallel problem solving from nature—PPSN V, 5th international conference, Amsterdam. Lecture notes in computer science, vol 1498. Springer, pp 87–96

164. Reyes Sierra M, Coello Coello CA (2005) Fitness inheritance in multi-objective particle swarm optimization. In: 2005 IEEE swarm intelligence symposium (SIS'05), Pasadena. IEEE Press, pp 116–123

165. Reyes Sierra M, Coello Coello CA (2005) Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In: Coello Coello CA, Hernández Aguirre A, Zitzler E (eds) Evolutionary multi-criterion optimization. Third international conference (EMO 2005), Guanajuato. Lecture notes in computer science, vol 3410. Springer, pp 505–519

166. Reyes Sierra M, Coello Coello CA (2005) A study of fitness inheritance and approximation techniques for multi-objective particle swarm optimization. In: 2005 IEEE congress on evolutionary computation (CEC'2005), Edinburgh, vol 1. IEEE Service Center, pp 65–72

167. Reyes-Sierra M, Coello Coello CA (2006) Multi-objective particle swarm optimizers: a survey of the state-of-the-art. Int J Comput Intell Res 2(3):287–308

168. Rodríguez Villalobos CA, Coello Coello CA (2012) A new multi-objective evolutionary algorithm based on a performance assessment indicator. In: 2012 genetic and evolutionary computation conference (GECCO'2012), Philadelphia. ACM Press, pp 505–512. ISBN:978-1-4503-1177-9

169. Rohling G (2008) Methods for decreasing the number of objective evaluations for independent computationally expensive objective problems. In: 2008 congress on evolutionary computation (CEC'2008), Hong Kong. IEEE Service Center, pp 3304–3309

170. Romero CEM, Manzanares EM (1999) MOAQ an ant-Q algorithm for multiple objective optimization problems. In: Banzhaf W, Daida J, Eiben AE, Garzon MH, Honavar V, Jakiela M, Smith RE (eds) Genetic and evolutionary computing conference (GECCO'99), San Francisco, vol 1. Morgan Kaufmann, pp 894–901

171. Romero-Garcia V, Sanchez-Perez JV, Garcia-Raffi LM (2012) Molding the acoustic attenuation in quasi-ordered structures: experimental realization. Appl Phys Express 5(8). Article number 087301

172. Ronco CCD, Ponza R, Benini E (2014) Aerodynamic shape optimization in aeronautics: a fast and effective multi-objective approach. Arch Comput Methods Eng 21(3):189–271

173. Rosenberg R (1967) Simulation of genetic populations with biochemical properties. PhD thesis, Department of Communication Sciences, University of Michigan, Ann Arbor

174. Rudolph G, Agapie A (2000) Convergence properties of some multi-objective evolutionary algorithms. In: Proceedings of the 2000 conference on evolutionary computation, Piscataway, vol 2. IEEE Press, pp 1010–1016

175. Saha I, Maulik U, Bandyopadhyay S, Plewczynski D (2011) Unsupervised and supervised learning approaches together for microarray analysis. Fundamenta Informaticae 106(1):45–73

176. Sahoo NC, Ganguly S, Das D (2012) Fuzzy-Pareto-dominance driven possibilistic model based planning of electrical distribution systems using multi-objective particle swarm optimization. Expert Syst Appl 39(1):881–893

177. Santana-Quintero LV, Arias Montaño A, Coello Coello CA (2010) A review of techniques for handling expensive functions in evolutionary multi-objective optimization. In: Tenne Y, Goh C-K (eds) Computational intelligence in expensive optimization problems. Springer, Berlin, pp 29–59. ISBN:978-3-642-10700-9

178. Schaffer JD (1984) Multiple objective optimization with vector evaluated genetic algorithms. PhD thesis, Vanderbilt University, Nashville

179. Schaffer JD (1985) Multiple objective optimization with vector evaluated genetic algorithms. In: Genetic algorithms and their applications: proceedings of the first international conference on genetic algorithms. Lawrence Erlbaum, pp 93–100

180. Schuetze O, Laumanns M, Tantar E, Coello Coello CA, Talbi E (2007) Convergence of stochastic search algorithms to gap-free Pareto front approximations. In: Thierens D (ed) 2007 genetic and evolutionary computation conference (GECCO'2007), London, vol 1. ACM Press, pp 892–899

181. Schuetze O, Laumanns M, Tantar E, Coello Coello CA, Talbi E (2010) Computing gap free Pareto front approximations with stochastic search algorithms. Evol Comput 18(1):65–96. Spring

182. Schütze O, Lara A, Coello Coello CA (2011) On the influence of the number of objectives on the hardness of a multiobjective optimization problem. IEEE Trans Evol Comput 15(4):444–455

183. Schütze O, Esquivel X, Lara A, Coello Coello CA (2012) Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization. IEEE Trans Evol Comput 16(4):504–522

184. Schwefel H-P (1965) Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik. Dipl.-Ing. thesis (in German)

185. Schwefel H-P (1981) Numerical optimization of computer models. Wiley, Chichester

186. Sharifi S, Massoudieh A (2012) A novel hybrid mechanistic-data-driven model identification framework using NSGA-II. J Hydroinf 14(3):697–715

187. Sharma D, Collet P (2013) Implementation techniques for massively parallel multi-objective optimization. In: Tsutsui S, Collet P (eds) Massively parallel evolutionary computation on GPGPUs. Springer, pp 267–286. ISBN:978-3-642-37958-1

188. Shaw KJ, Fleming PJ (1996) Initial study of practical multi-objective genetic algorithms for scheduling the production of chilled ready meals. In: Proceedings of Mendel'96, the 2nd international Mendel conference on genetic algorithms, Brno

189. Singh HK, Isaacs A, Ray T, Smith W (2008) A simulated annealing algorithm for constrained multi-objective optimization. In: 2008 congress on evolutionary computation (CEC'2008), Hong Kong. IEEE Service Center, pp 1655–1662

190. Smith KI (2006) A study of simulated annealing techniques for multi-objective optimisation. PhD thesis, University of Exeter

191. Smith RE, Forrest S, Perelson AS (1992) Searching for diverse, cooperative populations with genetic algorithms. Technical report TCGA No. 92002, University of Alabama, Tuscaloosa

192. Smith RE, Forrest S, Perelson AS (1993) Population diversity in an immune system model: implications for genetic search. In: Whitley LD (ed) Foundations of genetic algorithms 2. Morgan Kaufmann Publishers, San Mateo, pp 153–165

193. Srinivas N, Deb K (1994) Multiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 2(3):221–248. Fall

194. Suman B, Kumar P (2006) A survey of simulated annealing as a tool for single and multiobjective optimization. J Oper Res Soc 57(10):1143–1160

195. Surry PD, Radcliffe NJ (1997) The COMOGA method: constrained optimisation by multiobjective genetic algorithms. Control Cybern 26(3):391–412

196. Sweetapple C, Fu G, Butler D (2014) Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions. Water Res 55:52–62

197. Tagawa K, Shimizu H, Nakamura H (2011) Indicator-based differential evolution using exclusive hypervolume approximation and parallelization for multi-core processors. In: 2011 genetic and evolutionary computation conference (GECCO'2011), Dublin. ACM Press, pp 657–664

198. Talbi E-G (ed) (2009) Metaheuristics. From design to implementation. Wiley, New Jersey. ISBN:978-0-470-27858-1

199. Talukder AKMKA, Kirley M, Buyya R (2009) Multiobjective differential evolution for scheduling workflow applications on global Grids. Concurrency Comput Pract Exp 21(13):1742–1756

200. Tan KC, Lee TH, Khor EF (2001) Evolutionary algorithms with dynamic population size and local exploration for multiobjective optimization. IEEE Trans Evol Comput 5(6):565–588

201. Tan KC, Khor EF, Lee TH (2005) Multiobjective evolutionary algorithms and applications. Springer, London. ISBN:1-85233-836-9

202. Toscano Pulido G, Coello Coello CA (2003) The micro genetic algorithm 2: towards online adaptation in evolutionary multiobjective optimization. In: Fonseca CM, Fleming PJ, Zitzler E, Deb K, Thiele L (eds) Evolutionary multi-criterion optimization. Second international conference (EMO 2003), Faro. Lecture notes in computer science, vol 2632. Springer, pp 252–266

203. Toscano Pulido G, Coello Coello CA (2004) Using clustering techniques to improve the performance of a particle swarm optimizer. In: Deb K et al (eds) Genetic and evolutionary computation—GECCO 2004. Proceedings of the genetic and evolutionary computation conference, Part I, Seattle. Lecture notes in computer science, vol 3102. Springer, pp 225–237

204. Tušar T, Filipic B (2007) Differential evolution versus genetic algorithms in multiobjective optimization. In: Obayashi S, Deb K, Poloni C, Hiroyasu T, Murata T (eds) Evolutionary multi-criterion optimization, 4th international conference (EMO 2007), Matsushima. Lecture notes in computer science, vol 4403. Springer, pp 257–271

205. Ulmer H, Streichert F, Zell A (2003) Model-assisted steady-state evolution strategies. In: Cantú-Paz E et al (eds) Genetic and evolutionary computation—GECCO 2003. Proceedings, Part I. Lecture notes in computer science, vol 2723. Springer, pp 610–621

206. Ulmer H, Streichert F, Zell A (2003) Evolution strategies assisted by Gaussian processes with improved pre-selection criterion. In: Proceedings of the 2003 IEEE congress on evolutionary computation (CEC'2003), Canberra, vol 1. IEEE Press, pp 692–699

207. Vargas DEC, Lemonge ACC, Barbosa HJC, Bernardino HS (2013) Differential evolution with the adaptive penalty method for constrained multiobjective optimization. In: 2013 IEEE congress on evolutionary computation (CEC'2013), Cancún. IEEE Press, pp 1342–1349. ISBN:978-1-4799-0454-9

208. Venske SM, Goncalves RA, Delgado MR (2014) ADEMO/D: multiobjective optimization by an adaptive differential evolution algorithm. Neurocomputing 127:65–77

209. Villalobos-Arias M, Coello Coello CA, Hernández-Lerma O (2006) Asymptotic convergence of metaheuristics for multiobjective optimization problems. Soft Comput 10(11):1001–1005

210. Wang J, Terpenny JP (2005) Interactive preference incorporation in evolutionary engineering design. In: Jin Y (ed) Knowledge incorporation in evolutionary computation. Springer, Berlin/Heidelberg, pp 525–543. ISBN:3-540-22902-7

211. Wang X, Tang J, Yung K (2009) Optimization of the multi-objective dynamic cell formation problem using a scatter search approach. Int J Adv Manuf Technol 44(3–4):318–329

212. Woldesenbet YG, Tessema BG, Yen GG (2007) Constraint handling in multi-objective evolutionary optimization. In: 2007 IEEE congress on evolutionary computation (CEC'2007), Singapore. IEEE Press, pp 3077–3084

213. Won KS, Ray T (2004) Performance of kriging and cokriging based surrogate models within the unified framework for surrogate assisted optimization. In: 2004 congress on evolutionary computation (CEC'2004), Portland, vol 2. IEEE Service Center, pp 1577–1585

214. Xu J, Li Z (2012) Multi-objective dynamic construction site layout planning in fuzzy random environment. Autom Constr 27:155–169

215. Yong W, Zixing C (2005) A constrained optimization evolutionary algorithm based on multiobjective optimization techniques. In: 2005 IEEE congress on evolutionary computation (CEC'2005), Edinburgh, vol 2. IEEE Service Center, pp 1081–1087

216. Zapotecas Martínez S, Coello Coello CA (2011) A multi-objective particle swarm optimizer based on decomposition. In: 2011 genetic and evolutionary computation conference (GECCO'2011), Dublin. ACM Press, pp 69–76


217. Zapotecas Martínez S, Coello Coello CA (2013) Combining surrogate models and local search for dealing with expensive multi-objective optimization problems. In: 2013 IEEE congress on evolutionary computation (CEC'2013), Cancún. IEEE Press, pp 2572–2579. ISBN:978-1-4799-0454-9

218. Zavala GR, Nebro AJ, Luna F, Coello Coello CA (2014) A survey of multi-objective metaheuristics applied to structural optimization. Struct Multidiscip Optim 49(4):537–558

219. Zeng SY, Kang LS, Ding LX (2004) An orthogonal multi-objective evolutionary algorithm for multi-objective optimization problems with constraints. Evol Comput 12(1):77–98

220. Zhang D, Gao Z (2012) Forward kinematics, performance analysis, and multi-objective optimization of a bio-inspired parallel manipulator. Robot Comput Integr Manuf 28(4):484–492

221. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731

222. Zhang Q, Liu W, Tsang E, Virginas B (2010) Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Trans Evol Comput 14(3):456–474

223. Zheng Y-J, Chen S-Y (2013) Cooperative particle swarm optimization for multiobjective transportation planning. Appl Intell 39(1):202–216

224. Zhu J, Cai X, Pan P, Gu R (2014) Multi-objective structural optimization design of horizontal-axis wind turbine blades using the non-dominated sorting genetic algorithm II and finite element method. Energies 7(2):988–1002

225. Žilinskas A (2013) On the worst-case optimal multi-objective global optimization. Optim Lett 7:1921–1928

226. Žilinskas A (2014) A statistical model-based algorithm for 'black-box' multi-objective optimisation. Int J Syst Sci 45(1):82–93

227. Žilinskas A, Fraga ES, Mackuté A (2006) Data analysis and visualisation for robust multi-criteria process optimisation. Comput Chem Eng 30:1061–1071

228. Žilinskas J, Goldengorin B, Pardalos PM (2015) Pareto-optimal front of cell formation problem in group technology. J Glob Optim 61:91–108

229. Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: Yao X et al (eds) Parallel problem solving from nature – PPSN VIII, Birmingham. Lecture notes in computer science, vol 3242. Springer, pp 832–842

230. Zitzler E, Deb K, Thiele L (1999) Comparison of multiobjective evolutionary algorithms on test functions of different difficulty. In: Wu AS (ed) Proceedings of the 1999 genetic and evolutionary computation conference. Workshop program, Orlando, pp 121–122

231. Zitzler E, Laumanns M, Thiele L (2002) SPEA2: improving the strength Pareto evolutionary algorithm. In: Giannakoglou K, Tsahalis D, Periaux J, Papailou P, Fogarty T (eds) EUROGEN 2001. Evolutionary methods for design, optimization and control with applications to industrial problems, Athens, pp 95–100

232. Zitzler E, Thiele L, Laumanns M, Fonseca CM, da Fonseca VG (2003) Performance assessment of multiobjective optimizers: an analysis and review. IEEE Trans Evol Comput 7(2):117–132

233. Zitzler E, Laumanns M, Bleuler S (2004) A tutorial on evolutionary multiobjective optimization. In: Gandibleux X, Sevaux M, Sörensen K, T'kindt V (eds) Metaheuristics for multiobjective optimisation, Berlin. Lecture notes in economics and mathematical systems, vol 535. Springer, pp 3–37