
A genetic algorithm for unconstrained multi-objective optimization

Qiang Long a, Changzhi Wu b,*, Tingwen Huang c, Xiangyu Wang b,d

a School of Science, Southwest University of Science and Technology, Mianyang, China
b Australasian Joint Research Centre for Building Information Modelling, School of Built Environment, Curtin University, Australia
c Texas A&M University at Qatar, Doha, P.O. Box 23874, Qatar
d Department of Housing and Interior Design, Kyung Hee University, Seoul, South Korea
* Corresponding author. E-mail address: [email protected] (C. Wu).

Article info

Article history:
Received 29 August 2014
Received in revised form 18 November 2014
Accepted 19 January 2015
Available online 28 January 2015

Keywords:
Genetic algorithm
Optimal sequence method
Multi-objective optimization
Numerical performance evaluation

Abstract

In this paper, we propose a genetic algorithm for unconstrained multi-objective optimization. The multi-objective genetic algorithm (MOGA) is a direct method for multi-objective optimization problems. Compared to traditional multi-objective optimization methods, whose aim is to find a single Pareto solution, MOGA tends to find a representation of the whole Pareto frontier. During the process of solving multi-objective optimization problems using a genetic algorithm, one needs to synthetically consider the fitness, diversity and elitism of solutions. In this paper, more specifically, the optimal sequence method is altered to evaluate the fitness; cell-based density and Pareto-based ranking are combined to achieve diversity; and the elitism of solutions is maintained by greedy selection. To compare the proposed method with others, a numerical performance evaluation system is developed. We test the proposed method on some well-known multi-objective benchmarks and compare its results with other MOGAs'; the results show that the proposed method is robust and efficient.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

In this paper, we consider the following multi-objective optimization problem:

$$
(\mathrm{MOP})\qquad
\begin{cases}
\text{Minimize} & F(x) \\
\text{Subject to} & x \in X,
\end{cases}
\tag{1}
$$

where $F(x) = (f_1(x), f_2(x), \ldots, f_p(x))^T$ is a vector-valued function, $X = \{x \in \mathbb{R}^n : l_b \le x \le u_b\} \subseteq \mathbb{R}^n$ is a box set, and $l_b$ and $u_b$ are the lower and upper bounds, respectively. We suppose that $f_i(x),\ i = 1, 2, \ldots, p$, are Lipschitz continuous but not necessarily differentiable.

Multi-objective optimization has extensive applications in engineering and management [2,28,29]. Most optimization problems appearing in real-world applications have multiple objectives; they can be modeled as multi-objective optimization problems. However, due to theoretical and computational challenges, it is not easy to solve multi-objective optimization problems. Therefore, multi-objective optimization has attracted a great deal of research over the last few decades.

So far, there are two types of methods for solving multi-objective optimization problems: indirect and direct methods. An indirect method converts the multiple objectives into a single one. One strategy is to combine the multiple objective functions using utility theory or the weighted sum method. The difficulties for such methods are the selection of a utility function or proper weights that satisfy the decision-maker's preferences; furthermore, the greatest deficiency of the (linear) weighted sum method is that it cannot obtain the concave part of the Pareto frontier. Another indirect method is to formulate all the objectives except one as constraints. However, it is not easy to determine the upper bounds of these objectives: on the one hand, small upper bounds could exclude some Pareto solutions; on the other hand, large upper bounds could enlarge the objective function value space, which leads to sub-Pareto solutions. Additionally, an indirect method can only obtain a single Pareto solution in each run, whereas in practical applications decision-makers often prefer a number of Pareto solutions from which they can choose a strategy according to their preferences.

Direct methods devote themselves to exploring the entire set of Pareto solutions or a representative subset. However, it is extremely hard or impossible to obtain the entire set of Pareto solutions for most multi-objective optimization problems, except in some simple cases. Therefore, stepping back to a representative subset is preferred. The genetic algorithm (GA), as a population-based algorithm, is a good choice to achieve this goal. The generic single-objective genetic algorithm can be modified to find a set of multiple non-dominated solutions in a single run. The ability of the genetic algorithm to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems. The crossover operator of the genetic


algorithm can exploit structures of good solutions with respect to different objectives, which, in return, creates new non-dominated solutions in unexplored parts of the Pareto frontier. In addition, a multi-objective genetic algorithm does not require the user to prioritize, scale, or weight objectives. Therefore, the genetic algorithm is one of the most popular metaheuristic approaches for solving multi-objective optimization problems [18,24,36].

The first multi-objective optimization method based on the genetic algorithm, called the vector evaluated GA (VEGA), was proposed by Schaffer [35]. Afterwards, several multi-objective evolutionary algorithms were developed, such as the Multi-objective Genetic Algorithm (MOGA) [6], the Niched Pareto Genetic Algorithm (NPGA) [15], the Weight-based Genetic Algorithm (WBGA) [13], the Random Weighted Genetic Algorithm (RWGA) [31], the Nondominated Sorting Genetic Algorithm (NSGA) [38], the Strength Pareto Evolutionary Algorithm (SPEA) [52], the improved SPEA (SPEA2) [51], the Pareto-Archived Evolution Strategy (PAES) [21], the Pareto Envelope-based Selection Algorithm (PESA) [3], the Nondominated Sorting Genetic Algorithm-II (NSGA-II) [5], the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D) [1,46,47] and the Indicator-Based Evolutionary Algorithm (IBEA) [34].

There are three basic issues [50] in solving multi-objective optimization problems using the genetic algorithm:

1. Fitness: Solutions whose objective values are close to the real Pareto frontier should be selected as parents of the next generation. This gives rise to the task of ranking candidate solutions in each generation.

2. Diversity: The obtained subset of Pareto solutions should distribute uniformly over the real Pareto frontier. This reveals the true tradeoff of the multi-objective optimization problem for decision-makers.

3. Elitism: The best candidate solution is always kept to the next generation in solving single-objective optimization problems using the genetic algorithm. We extend this idea to the multi-objective case. However, this extension is not straightforward, since a large number of candidate solutions are involved in a multi-objective genetic algorithm.

The aim of this paper is to introduce new techniques to tackle the issues mentioned above. The rest of the paper is organized as follows. In Section 2, we review some basic definitions of multi-objective optimization and the process of the genetic algorithm. In Section 3, we propose an improved genetic algorithm for multi-objective optimization problems. In Section 4, an evaluation system for the numerical performance of multi-objective genetic algorithms is developed. Some simulation studies are carried out in Section 5. Section 6 concludes the paper.

2. Preliminaries

In this section, we first review some definitions and theorems in multi-objective optimization, and then introduce the general procedure of the genetic algorithm.

2.1. Definitions in multi-objective optimization

First of all, we present the following notation, which is often used in vector optimization. Given two vectors
$$x = (x_1, x_2, \ldots, x_n)^T \quad \text{and} \quad y = (y_1, y_2, \ldots, y_n)^T \in \mathbb{R}^n,$$
then

- $x = y \Leftrightarrow x_i = y_i$ for all $i = 1, 2, \ldots, n$;
- $x < y \Leftrightarrow x_i < y_i$ for all $i = 1, 2, \ldots, n$;
- $x \le y \Leftrightarrow x_i \le y_i$ for all $i = 1, 2, \ldots, n$, and there is at least one $i \in \{1, 2, \ldots, n\}$ such that $x_i < y_i$, i.e., $x \ne y$;
- $x \leqq y \Leftrightarrow x_i \le y_i$ for all $i = 1, 2, \ldots, n$.

"$>$", "$\ge$" and "$\geqq$" can be defined similarly. In this paper, when $x \le y$ we say that $x$ dominates $y$, or that $y$ is dominated by $x$.

Definition 2.1. Suppose that $X \subseteq \mathbb{R}^n$ and $x^* \in X$. If $x^* \leqq x$ for any $x \in X$, then $x^*$ is called an absolute optimal point of $X$.

An absolute optimal point is an ideal point, but it may not exist.

Definition 2.2. Let $X \subseteq \mathbb{R}^n$ and $x^* \in X$. If there is no $x \in X$ such that
$$x \le x^* \quad (\text{or } x < x^*),$$
then $x^*$ is called an efficient point (or weakly efficient point) of $X$.

The sets of absolute optimal points, efficient points and weakly efficient points of $X$ are denoted by $X_{ab}$, $X_{ep}$ and $X_{wp}$, respectively. For the problem (MOP), $X \subseteq \mathbb{R}^n$ is called the decision variable space and its image set $F(X) = \{y \in \mathbb{R}^p \mid y = F(x),\ x \in X\} \subseteq \mathbb{R}^p$ is called the objective function value space.

Definition 2.3. Suppose that $x^* \in X$. If
$$F(x^*) \leqq F(x)$$
for any $x \in X$, then $x^*$ is called an absolute optimal solution of the problem (MOP). The set of absolute optimal solutions is denoted by $S_{as}$.

The concept of the absolute optimal solution is a direct extension of that for single-objective optimization. It is the ideal solution, but it may not exist in most cases.

Definition 2.4. Suppose that $x^* \in X$. If there is no $x \in X$ such that
$$F(x) \le F(x^*) \quad (\text{or } F(x) < F(x^*)),$$
i.e., $F(x^*)$ is an efficient point (or weakly efficient point) of the objective function value space $F(X)$, then $x^*$ is called an efficient solution (or weakly efficient solution) of the problem (MOP). The sets of efficient solutions and weakly efficient solutions are denoted by $S_{es}$ and $S_{ws}$, respectively.

Another name for an efficient solution is Pareto solution, a concept introduced by T.C. Koopmans in 1951 [22]. The meaning of a Pareto solution is that, if $x^* \in S_{es}$, then there is no feasible solution $x \in X$ whose objective vector $F(x)$ dominates $F(x^*)$; in other words, $x^*$ is a best solution in the sense of "$\le$". Another intuitive interpretation is that a Pareto solution cannot be improved with respect to any objective without worsening at least one of the other objectives. The set of Pareto solutions is denoted by $P^*$; its image set $F(P^*)$ is called the Pareto frontier, denoted by $PF^*$.
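To make the dominance relation concrete, the following minimal Python sketch (our own illustration; the function name `dominates` is not from the paper) tests the relation of Definition 2.4 between two objective vectors, assuming minimization:

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx dominates fy in the sense used above:
    fx is no worse than fy in every objective and strictly better in at
    least one (minimization)."""
    fx, fy = np.asarray(fx, dtype=float), np.asarray(fy, dtype=float)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

# Example: (1, 2) dominates (2, 2), but (1, 2) and (2, 1) are incomparable.
assert dominates([1, 2], [2, 2])
assert not dominates([1, 2], [2, 1]) and not dominates([2, 1], [1, 2])
```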

2.2. Genetic algorithm

Genetic algorithm is one of the most important evolutionary algorithms. It was introduced by John Holland in the 1960s and then developed by his students and colleagues at the University of Michigan between the 1960s and 1970s [14]. Over the last two decades, the genetic algorithm has been enriched by a large body of literature, such as [9,10,12,19]. Nowadays, various genetic algorithms are applied in different areas, for example, mathematical programming, combinatorial optimization, automatic control and image processing.

Suppose that P(t) and O(t) represent the parents and offspring of the t-th generation, respectively. Then the general structure of the genetic algorithm can be described by the following pseudocode.

General structure of genetic algorithm.


1 Initialization
  1.1 Generate the initial population P(0),
  1.2 Set crossover rate, mutation rate and maximal generation number,
  1.3 Let t ← 0.
2 While the maximal generation number is not reached, do
  2.1 Crossover operator: generate O1(t),
  2.2 Mutation operator: generate O2(t),
  2.3 Evaluate O1(t) and O2(t): compute the fitness function,
  2.4 Selection operator: build the next population,
  2.5 t ← t + 1, go to 2.1.
end

From the pseudocode, we can see that there are three important operators in a general genetic algorithm: the crossover operator, the mutation operator and the selection operator. Usually, a different encoding leads to different operators.
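As an illustration only (not the authors' code), the pseudocode above can be rendered as a short Python skeleton in which the three operators are supplied as functions:

```python
def genetic_algorithm(init_pop, crossover, mutate, fitness, select,
                      max_generations=100):
    """Minimal sketch of the general GA structure described above.
    `init_pop`, `crossover`, `mutate`, `fitness` and `select` are
    user-supplied functions; the chosen encoding determines their internals."""
    population = init_pop()                   # 1.1 initial population P(0)
    for t in range(max_generations):          # 2 main loop
        o1 = crossover(population)            # 2.1 crossover offspring O1(t)
        o2 = mutate(population)               # 2.2 mutation offspring O2(t)
        pool = population + o1 + o2
        scores = [fitness(x) for x in pool]   # 2.3 evaluate the pool
        population = select(pool, scores, len(population))  # 2.4 next generation
    return population
```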

3. A multi-objective genetic algorithm

In this section, we present a genetic algorithm to solve problem (1). We present this method through the three aspects illustrated in Section 1, i.e., the fitness, diversity and elitism of solutions.

3.1. Fitness

For a single-valued function, the fitness is normally assigned as its function value. However, assigning the fitness of a multi-objective function is not straightforward. So far, there are three typical approaches. The first one is the weighted sum approach [31,32], which converts the multiple objectives into a single objective using normalized weights $\lambda_i$ with $\sum_{i=1}^{p} \lambda_i = 1,\ \lambda_i > 0,\ i = 1, 2, \ldots, p$. The choice of the weight parameters is not an easy task for this approach. The second one is altering objective functions [35,51], which randomly divides the current population into $p$ equal sub-populations $P_1, P_2, \ldots, P_p$; each solution in sub-population $P_i$ is then assigned a fitness value based on objective function $f_i$. In fact, this approach is a straightforward extension of the single-objective genetic algorithm. The last one is the Pareto-ranking approach [5,8,38], which is a direct application of the definition of Pareto solutions. More specifically, the population is ranked according to a dominance rule, and each solution is assigned a fitness value based on its rank in the population rather than its actual objective function value. In the following, we present a new ranking strategy based on the optimal sequence method [25].

Definition 3.1. Let $P = \{1, 2, \ldots, p\}$ and $N = \{1, 2, \ldots, n\}$. For any $x_i, x_j \in X$, define
$$a_{ij}^{l} = \begin{cases} 1, & f_l(x_i) < f_l(x_j), \\ 0.5, & f_l(x_i) = f_l(x_j), \\ 0, & f_l(x_i) > f_l(x_j) \text{ or } i = j. \end{cases} \tag{2}$$
Then
$$a_{ij} = \sum_{l \in P} a_{ij}^{l}, \quad i, j \in N,$$
is called the optimal number of $x_i$ corresponding to $x_j$ for all objectives. Furthermore, $K_i = \sum_{j \in N} a_{ij}$ is defined as the total optimal number of $x_i$ corresponding to all the other solutions for all objectives.

Obviously, for a minimization problem, a bigger optimal number reflects a better point. Therefore, optimal numbers can be considered as a basis for ranking the current population. Based on this observation, we propose the following algorithm to rank the current population.

Algorithm 3.1. Optimal Sequence Method (OSM).

Step 1: Input: the current population and their objective function values.
Step 2: Compute optimal numbers: compute all the optimal numbers and total optimal numbers in Table 1 according to Eq. (2).
Step 3: Rank the solutions: re-arrange the order of the solutions according to the decreasing order of the total optimal numbers $K_i$. More precisely, denote
$$K_{\alpha_1} = \max_{i \in N} \{K_i\}, \qquad K_{\alpha_2} = \max_{i \in N \setminus \{\alpha_1\}} \{K_i\},$$
and so on. Then the solutions $x_{\alpha_1}$, $x_{\alpha_2}$, etc., corresponding to $K_{\alpha_1}$, $K_{\alpha_2}$, etc., are called the best solution, the second best solution, etc. The new order is called the optimal order.

It is worth mentioning that the optimal sequence method does not necessarily rank Pareto solutions in the foremost positions; rather, it ranks the more reasonable solutions in the foremost positions. This is different from the Pareto-ranking approach.
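A direct Python sketch of Algorithm 3.1 follows (our own illustration with a hypothetical function name); it computes the optimal numbers of Definition 3.1 by brute force and sorts by decreasing $K_i$:

```python
import numpy as np

def optimal_sequence_rank(F):
    """Rank a population by the optimal sequence method (Algorithm 3.1).

    F is an (N, p) array whose i-th row is the objective vector of
    solution x_i. Returns the solution indices sorted into the optimal
    order (decreasing total optimal number K_i)."""
    N = F.shape[0]
    K = np.zeros(N)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue                     # a_ii = 0 by Definition 3.1
            # a_ij = sum over objectives l of a_ij^l:
            # 1 if f_l(x_i) < f_l(x_j), 0.5 if equal, 0 otherwise
            K[i] += np.sum(F[i] < F[j]) + 0.5 * np.sum(F[i] == F[j])
    return np.argsort(-K)                    # best solution first
```

For example, with `F = np.array([[1, 2], [2, 1], [3, 3]])` the third point gets $K_3 = 0$ and is ranked last, since both other points dominate it.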

Table 1. Table of optimal numbers.

| $x_i \backslash x_j$ | $x_1$ | $x_2$ | ⋯ | $x_j$ | ⋯ | $x_n$ | $K_i$ |
|---|---|---|---|---|---|---|---|
| $x_1$ | 0 | $a_{12}$ | ⋯ | $a_{1j}$ | ⋯ | $a_{1n}$ | $K_1$ |
| $x_2$ | $a_{21}$ | 0 | ⋯ | $a_{2j}$ | ⋯ | $a_{2n}$ | $K_2$ |
| ⋮ | ⋮ | ⋮ |  | ⋮ |  | ⋮ | ⋮ |
| $x_i$ | $a_{i1}$ | $a_{i2}$ | ⋯ | $a_{ij}$ | ⋯ | $a_{in}$ | $K_i$ |
| ⋮ | ⋮ | ⋮ |  | ⋮ |  | ⋮ | ⋮ |
| $x_n$ | $a_{n1}$ | $a_{n2}$ | ⋯ | $a_{nj}$ | ⋯ | 0 | $K_n$ |

Lemma 3.1. Suppose that $x_i, x_j \in X$. If $F(x_i) \le F(x_j)$, then $a_{ij} > a_{ji}$.

Proof. Let
$$A = \{l \in P \mid f_l(x_i) < f_l(x_j)\}, \quad a = |A|;$$
$$B = \{l \in P \mid f_l(x_i) = f_l(x_j)\}, \quad b = |B|,$$
where $|\cdot|$ denotes the number of elements of a set. Obviously, we have $a > 0$, $b \ge 0$ and $a + b = p$. Then
$$a_{ij}^{l} = \begin{cases} 1, & l \in A, \\ 0.5, & l \in B, \end{cases} \qquad\text{and}\qquad a_{ji}^{l} = \begin{cases} 0, & l \in A, \\ 0.5, & l \in B. \end{cases}$$
Therefore,
$$a_{ij} = \sum_{l \in A} a_{ij}^{l} + \sum_{l \in B} a_{ij}^{l} = a + 0.5b$$
and
$$a_{ji} = \sum_{l \in A} a_{ji}^{l} + \sum_{l \in B} a_{ji}^{l} = 0.5b.$$
Since $a > 0$, $a_{ij} > a_{ji}$. □

Lemma 3.2. Suppose that $x_i, x_j \in X$. If $F(x_i) \le F(x_j)$, then, for any $x_k\ (k \ne i, j,\ k \in N)$, $a_{ik} \ge a_{jk}$.

Proof. For any $x_k\ (k \ne i, j,\ k \in N)$, let
$$A = \{l \in P \mid f_l(x_k) < f_l(x_i)\}, \quad a = |A|;$$
$$B = \{l \in P \mid f_l(x_k) > f_l(x_i)\}, \quad b = |B|;$$
$$C = \{l \in P \mid f_l(x_k) = f_l(x_i)\}, \quad c = |C|.$$
Then we have $a, b, c \ge 0$ and $a + b + c = p$.

- When $l \in A$: since $F(x_i) \le F(x_j)$, we have $f_l(x_k) < f_l(x_i)$ and $f_l(x_k) < f_l(x_j)$. Therefore,
$$a_{ik}^{1} = \sum_{l \in A} a_{ik}^{l} = 0 \cdot a = 0, \tag{3}$$
$$a_{jk}^{1} = \sum_{l \in A} a_{jk}^{l} = 0 \cdot a = 0. \tag{4}$$
- When $l \in B$: we have $f_l(x_k) > f_l(x_i)$, and thus
$$a_{ik}^{2} = \sum_{l \in B} a_{ik}^{l} = 1 \cdot b = b. \tag{5}$$
On the other hand, let
$$M_1 = \{l \in B \mid f_l(x_k) < f_l(x_j)\}, \quad m_1 = |M_1|;$$
$$M_2 = \{l \in B \mid f_l(x_k) = f_l(x_j)\}, \quad m_2 = |M_2|;$$
$$M_3 = \{l \in B \mid f_l(x_k) > f_l(x_j)\}, \quad m_3 = |M_3|.$$
Obviously, $m_1, m_2, m_3 \ge 0$ and $m_1 + m_2 + m_3 = b$. Therefore,
$$a_{jk}^{2} = m_1 \cdot 0 + m_2 \cdot 0.5 + m_3 \cdot 1 \le b. \tag{6}$$
- When $l \in C$: we have
$$a_{ik}^{3} = \sum_{l \in C} a_{ik}^{l} = 0.5c. \tag{7}$$
On the other hand, let
$$M_1 = \{l \in C \mid f_l(x_k) < f_l(x_j)\}, \quad m_1 = |M_1|;$$
$$M_2 = \{l \in C \mid f_l(x_k) = f_l(x_j)\}, \quad m_2 = |M_2|.$$
Note that it is not possible to have $f_l(x_k) > f_l(x_j)$; otherwise we would have $f_l(x_i) > f_l(x_j)$, which contradicts $F(x_i) \le F(x_j)$. Again, we have $m_1, m_2 \ge 0$ and $m_1 + m_2 = c$. Thus,
$$a_{jk}^{3} = \sum_{l \in C} a_{jk}^{l} = m_1 \cdot 0 + m_2 \cdot 0.5 \le 0.5c. \tag{8}$$

In light of Eqs. (3)-(8), we have
$$a_{ik} = \sum_{\alpha = 1}^{3} a_{ik}^{\alpha} \ge a_{jk} = \sum_{\alpha = 1}^{3} a_{jk}^{\alpha}. \qquad \square$$

Theorem 3.1. If $K_e = \max_{i \in N} \{K_i\}$, then the solution $x_e$ corresponding to $K_e$ must be an efficient solution (Pareto solution).

Proof. If $x_e$ is not an efficient solution, then there exists an $x_s \in X$ such that
$$F(x_s) \le F(x_e).$$
Therefore, according to Lemma 3.1,
$$a_{es} < a_{se}. \tag{9}$$
Due to Lemma 3.2, we have, for any $x_j\ (j \in N,\ j \ne e, s)$,
$$a_{ej} \le a_{sj}. \tag{10}$$
Based on (9), (10) and $a_{ee} = a_{ss} = 0$, we have
$$K_e = \sum_{j \in N} a_{ej} < \sum_{j \in N} a_{sj} = K_s,$$
which contradicts the assumption $K_e = \max_{i \in N} \{K_i\}$. The proof is completed. □

Based on Theorem 3.1, the following results are obvious.

Corollary 3.1. If $x_a$ is an absolute optimal solution, then
$$K_a = \max_{j \in N} \{K_j\}.$$

Corollary 3.2. If $x_e$ corresponds to $K_e = \max_{j \in N} \{K_j\}$ and $x_s$ corresponds to $K_s = \max_{j \in N \setminus \{e\}} \{K_j\}$, then $x_s$ is an efficient solution of the population without $x_e$.

Theorem 3.1 and Corollary 3.2 reveal the rationality of the optimal sequence method: the best solution must be an efficient solution, and the second best solution, although possibly not an efficient solution itself, is an efficient solution of the population without the best one. This procedure can be continued until the last solution is obtained. It is therefore reasonable to rank the population based on the total optimal numbers and assign fitness according to the ranking index.

3.2. Diversity

In order to obtain uniformly distributed solutions, it is important to maintain the diversity of populations. Without diversity-maintaining measures, the population tends to form relatively few clusters in a multi-objective genetic algorithm; this phenomenon is known as genetic drift. Several approaches have been devised to prevent it. Fitness sharing [6,11] encourages the search in unexplored sections of a Pareto frontier by artificially reducing the fitness of solutions in densely populated areas. To achieve this goal, densely populated areas are identified and a penalty method is incorporated so as to spread out the solutions located in these areas. The difficulty of this method is how to identify the dense areas and measure their density. The crowding distance approach [5] aims to obtain a uniform spread of solutions along the best-known Pareto frontier. The advantage of this approach is that the population density around a solution can be computed without requiring a user-defined parameter. The cell-based density approach [20,21,30,44] divides the objective space into $p$-dimensional cells. The number of solutions in each cell is defined as the density of the cell, and the density of a solution is equal to the density of the cell in which it is located. An efficient approach to dynamically divide the objective function space into cells was proposed by Lu and Yen [30,44]. The main advantage of this approach is that a global density map of the objective function space is obtained as a by-product of the density calculation, while keeping computational efficiency similar to other approaches.

In this paper, we design a new selection operator based on the combination of the cell-based density approach and the Pareto-ranking approach. This selection operator not only maintains the


density of solutions but also guarantees that all the Pareto solutions are reserved to the next generation.

Algorithm 3.2. Cell-based Pareto selection operator.

Step 1: Input the current ranked population P (solutions are first sorted by Algorithm 3.1) and their corresponding objective function values F(P). Set α as the number of sections for each dimension.
Step 2: Separate each dimension of the objective function space into α sections of width
$$w_k = \frac{z_k^{\max} - z_k^{\min}}{\alpha}, \quad k = 1, 2, \ldots, p,$$
where $z_k^{\max}$ and $z_k^{\min}$ are the maximum and minimum values of objective function $k$ in the current population, respectively.
Step 3: Classify the objective function values F(P) into cells, and measure the density of each cell and the density of each point in F(P).
Step 4: Select solutions one by one according to the following substeps until the population is fully filled.
Step 4.1: Successively check each cell; if the density of a cell is not zero, then choose the first best solution in the cell (the order has already been sorted by the optimal sequence method). Let the set E be the collection of all the solutions selected.
Step 4.2: Identify the Pareto frontier of the set E and collect its points in a set denoted by PE.
Step 4.3: Maintain the solutions corresponding to points in PE (i.e., $S = \{x \in E \mid F(x) = y,\ y \in PE\}$) to the next generation.
Step 4.4: Meanwhile, for each point in PE, reduce the density of the cell in which it is located by 1, marking this point as having been selected (which means that it cannot be considered as a candidate again when the loop goes back to Step 4.1).
Step 4.5: Check the number of solutions kept for the next generation. If the number reaches the population size, then stop and return; otherwise, go to Step 4.1.

Remark 3.1. There is only one user-determined parameter, α, which determines the size of the cells in this selection operator. Evidently, a large α leads to small cells; thus, enlarging α will increase the diversity of the population, but will lead to a heavier computational burden. For simplicity, we fix the number of sections for all dimensions of the objective function value space, instead of applying different numbers of sections for each dimension [30,44].

Remark 3.2. In Step 4.1, the solutions are not chosen strictly according to their optimal numbers, since solutions with high optimal numbers may be located in the same cell; only the best solution in that cell can be selected. However, we can guarantee that each solution chosen in the corresponding cell is the most reasonable one.

Remark 3.3. In Step 4.2, only the efficient points (Pareto points) of E are maintained to the next generation. During the selection process, we not only guarantee the diversity of the next generation but also ensure that the solutions selected to be parents of the next generation are better ones.

To illustrate the procedure of Algorithm 3.2, let us investigate a simple example. Fig. 1 shows a population of solutions and their optimal sequence orders assigned by Algorithm 3.1. The solutions are classified into two-dimensional cells. Clearly, some cells include many solutions, such as cells A, B and C, while others include few solutions, such as D, E and F. Then we select the best point from each cell and collect them into a candidate set E, which is depicted in Fig. 2. Note that for those cells which contain more than one solution, only the one with the best optimal number can be selected; for instance, solutions 1, 3 and 6 are selected from cells A, B and C, respectively. For those cells which contain only one solution, the solution is selected directly, such as solutions 22, 29 and 30 from cells D, E and F, respectively. Finally, the Pareto frontier points of the set E are identified and maintained to the next generation (see Fig. 3). We can see from the figure that the obtained solutions are not only better solutions but also well distributed.

Fig. 1. Optimal numbers and cell-based classifying of the population.
Fig. 2. Choose the best point of each cell and collect them as a candidate set E.
Fig. 3. Identify the Pareto frontier PE of the candidate set E.

Suppose that the dimension of the vector function $F$ is $p$ and the population size is $N$. Then, for each component function of $F$, one needs $\frac{1}{2}N^2$ comparisons, so the computational complexity of Algorithm 3.1 is $O(pN^2)$. In Algorithm 3.2, in order to allocate a cell for one solution, one needs $\alpha p$ comparisons; this is because, for each dimension of $F$, we need $\alpha$ comparisons in the worst case, while the total number of dimensions of $F$ is $p$. Since there are $N$ individuals in a population, the computational complexity of Algorithm 3.2 is $O(\alpha p N)$.
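The cell bookkeeping of Steps 2 and 3 can be sketched in a few lines of Python (again our own illustration; the names are hypothetical). Each objective vector is mapped to integer cell coordinates, from which the cell densities follow:

```python
import numpy as np

def cell_indices(FP, alpha):
    """Map each objective vector to integer cell coordinates
    (Steps 2-3 of Algorithm 3.2). FP is an (N, p) array of objective
    values; each dimension k is split into alpha sections of width
    (z_k^max - z_k^min) / alpha."""
    z_min, z_max = FP.min(axis=0), FP.max(axis=0)
    width = (z_max - z_min) / alpha
    width[width == 0] = 1.0                   # guard against a flat objective
    idx = np.floor((FP - z_min) / width).astype(int)
    return np.clip(idx, 0, alpha - 1)         # boundary points into the last cell

def cell_density(FP, alpha):
    """Density of each non-empty cell = number of solutions located in it."""
    idx = cell_indices(FP, alpha)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(c): int(n) for c, n in zip(cells, counts)}
```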

3.3. Elitism

Elitism in the single-objective genetic algorithm means that the best individual in a population always survives to the next generation. In the process of solving multi-objective optimization problems using the genetic algorithm, all the nondominated solutions should be considered as elite solutions, so the number of possible elite solutions becomes very large. The early multi-objective genetic algorithms did not use elitism at all. Since genetic algorithms with elitist strategies outperform those without, elitism has been incorporated in recent multi-objective genetic algorithms and their variations [4,42,52]. For these methods, two strategies are introduced to implement elitism [17]: (i) maintaining elitist solutions in the population, and (ii) storing elitist solutions in an external secondary list and reintroducing them to the population by a certain strategy.

Our strategy to implement elitism is simple. We use an extra set, say B, to store the best population achieved so far. At each iteration, we combine the population obtained from the last generation with B to construct a new set, say C. Then the optimal sequence method (Algorithm 3.1) is applied to rank all the solutions in C. Finally, we update the best population B by choosing the first population-size best solutions in the ranked C. This elitism strategy is summarized in the following algorithm.

Algorithm 3.3. Optimal sequence elitism.

Step 1: Input the current population P obtained by the selection operator (Algorithm 3.2), the current best population B, and the population size.
Step 2: Let C = B ∪ P. Apply the optimal sequence method (Algorithm 3.1) to the set C. Denote the ranked set by C′.
Step 3: Update B by assigning it the first population-size best solutions in C′.

Remark 3.4. The initial best population B is equal to the initial population.
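Under the same assumptions as the earlier sketches (and reusing the `optimal_sequence_rank` function from above), Algorithm 3.3 reduces to a few lines of Python; the data layout here is our own choice, not the paper's:

```python
import numpy as np

def update_best_population(B, P, pop_size):
    """Optimal sequence elitism (Algorithm 3.3). B and P are lists of
    (solution, objective_vector) pairs; they are pooled and re-ranked."""
    C = list(B) + list(P)                     # Step 2: C = B ∪ P
    F = np.array([obj for (_, obj) in C])
    order = optimal_sequence_rank(F)          # rank C by Algorithm 3.1
    return [C[i] for i in order[:pop_size]]   # Step 3: first pop_size best
```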

3.4. Optimal sequence multi-objective GA (OSMGA)

Based on the algorithms provided above, we are ready to present the improved multi-objective GA:

Algorithm 3.4. Optimal sequence multi-objective GA (OSMGA).

1 Initialization
  1.1 Generate the initial population P(0),
  1.2 Initialize the best population B = P(0),
  1.3 Set crossover rate, mutation rate and maximal generation number,
  1.4 Let t ← 0.
2 While the maximal generation number is not reached, do
  2.1 Crossover operator: generate O1(t),
  2.2 Mutation operator: generate O2(t),
  2.3 Evaluate objective function values: construct the selection pool
      S(t) = O1(t) ∪ O2(t)
      and compute the multi-objective function values
      F_S(t) = {F(x) | x ∈ S(t)}.
  2.4 Rank solutions in the selection pool: apply the optimal sequence method (Algorithm 3.1) to rank the solutions in S(t).
  2.5 Selection operator: apply the cell-based Pareto selection operator (Algorithm 3.2) to construct the parent population P(t+1).
  2.6 Elitism: update the best population B by applying optimal sequence elitism (Algorithm 3.3).
  2.7 t ← t + 1, go to 2.1.
end
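Putting the pieces together, a compact Python sketch of the OSMGA loop might look as follows. This is an illustration only: for brevity, Step 2.5's cell-based Pareto selection is replaced by plain truncation of the optimal order, whereas the paper's operator also enforces cell diversity. It reuses the `optimal_sequence_rank` and `update_best_population` sketches above:

```python
import numpy as np

def osmga(init_pop, crossover, mutate, objective, pop_size, max_gen):
    """Sketch of the OSMGA main loop (Algorithm 3.4)."""
    P = init_pop(pop_size)                        # 1.1 initial population P(0)
    B = [(x, objective(x)) for x in P]            # 1.2 best population B = P(0)
    for t in range(max_gen):                      # 2 main loop
        pool = P + crossover(P) + mutate(P)       # 2.1-2.2 offspring O1(t), O2(t)
        FS = np.array([objective(x) for x in pool])   # 2.3 evaluate S(t)
        order = optimal_sequence_rank(FS)         # 2.4 Algorithm 3.1
        P = [pool[i] for i in order[:pop_size]]   # 2.5 simplified selection
        B = update_best_population(
            B, [(x, objective(x)) for x in P], pop_size)  # 2.6 Algorithm 3.3
    return [x for (x, _) in B]                    # final best population
```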

4. An evaluation system for MOGAs

In order to show the improvement of a new method, it is necessary to compare its numerical performance with others. However, for multi-objective genetic algorithms, numerical performance evaluation is not an evident task, because a multi-objective genetic algorithm gives a population of approximate Pareto solutions, so the assessment has to be based on the performance of all the solutions rather than any single one of them. Therefore, a reasonable evaluation system for multi-objective genetic algorithms is needed. In this section, we develop such an evaluation system to evaluate the numerical performance of different MOGAs.

As pointed out by Deb [4], multi-objective optimization is a mixture of two optimization strategies:

(i) minimizing the distance of the obtained solutions from the true optimal solutions,
(ii) maximizing the spread of solutions in the obtained set of solutions.

The former task is similar to the usual task in single-objective optimization, while the latter is similar to finding multiple optimal solutions in multi-modal optimization. Based on this understanding, we will develop a system to evaluate both the distance between the image set of the obtained solutions and the real Pareto frontier in the objective function value space, and the distributional degree of the obtained solutions.

In the following, we provide two performance metrics corresponding to the closeness and the diversity of the image set of the obtained solutions. For convenience, we use Q to represent the solution set obtained by a multi-objective genetic algorithm.

4.1. Closeness metric

Veldhuizen [41] introduced a metric called the Error Ratio, which simply counts the number of solutions in Q that belong to the Pareto frontier $PF^*$ and then calculates its ratio with respect to the total number of solutions. Mathematically,
$$ER = \frac{\sum_{i=1}^{|Q|} e_i}{|Q|}, \tag{11}$$
where
$$e_i = \begin{cases} 1 & \text{if } x_i \in Q \text{ and } x_i \in PF^*, \\ 0 & \text{if } x_i \in Q \text{ and } x_i \notin PF^*, \end{cases}$$
and $|Q|$ represents the cardinality of the set Q. This error ratio metric is simple and intuitive. However, the requirement that $x_i$ belong exactly to $PF^*$ is too strict: by formula (11), $e_i = 0$ even if $x_i$ is very close to, but not in, $PF^*$, which cannot represent the actual closeness between the real Pareto frontier and the image set of the obtained solutions. Although a threshold δ was introduced in some later literature, the determination of δ is still a problem.

In our modified error ratio, we introduce the following function:
$$e_i = e^{-\alpha d_i}, \quad i = 1, 2, \ldots, |Q|,$$
where $\alpha \in [1, 4]$ is a parameter and $d_i$ is the closest distance between the point and the real Pareto frontier. Mathematically,
$$d_i = \min_{P' \in PF^*} d(x_i, P').$$
The distance function $d(\cdot, \cdot)$ could be the Euclidean distance. Eventually, the modified error ratio is still expressed as
$$MER = \frac{\sum_{i=1}^{|Q|} e_i}{|Q|}.$$
Since $d_i \in [0, +\infty)$, $e_i \in (0, 1]$, which makes $MER \in (0, 1]$. A bigger MER means that the whole solution set is closer to the real Pareto frontier. Note that for some problems, especially those whose Pareto frontier is unknown, the calculation of the distance $d_i$ could be difficult, even impossible. However, for the test problems in this paper, the Pareto frontiers are known.
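A small Python sketch of the MER computation (our own illustration, approximating the real Pareto frontier by a finite sample of points):

```python
import numpy as np

def modified_error_ratio(FQ, pf_samples, alpha=2.0):
    """Modified error ratio MER. FQ is a (|Q|, p) array of objective
    vectors of the obtained solutions; pf_samples is an (M, p) array of
    sample points of the real Pareto frontier; alpha is in [1, 4].
    Computes e_i = exp(-alpha * d_i), with d_i the minimum Euclidean
    distance from the i-th point to the frontier samples."""
    FQ = np.asarray(FQ, dtype=float)
    pf = np.asarray(pf_samples, dtype=float)
    # pairwise distance matrix of shape (|Q|, M); d_i is its row minimum
    d = np.linalg.norm(FQ[:, None, :] - pf[None, :, :], axis=2).min(axis=1)
    return float(np.mean(np.exp(-alpha * d)))
```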

4.2. Diversity metric

For the diversity of solutions, we expect the solutions to spread as uniformly and extensively as possible. Suppose that we uniformly grid the objective function space into many cubes of the same volume. Then all the solutions distribute among different cubes. Depending on the diversity of the population, some cubes may contain no solution, some may contain just one solution, and some may contain two or more. For a cube containing only one solution, we count that solution as an individual one. However, for cubes containing two or more solutions, those solutions are very close to each other, which is not a good distribution; thus we count all of them as just one. Then we calculate the ratio between the number of cubes containing solutions and the total number of solutions, and this ratio is used as an evaluation index of the diversity of solutions. The process of this cell-based diversity metric is introduced in detail as follows:

Cell-based diversity metric:

Step 1: Data: input the upper bound (ub) and the lower bound (lb) of the Pareto frontier; set a parameter β as the number of sections for each dimension.
Step 2: Grid the objective function space: divide each dimension of the objective function space into β sections from lb to ub. This grids the objective function space into a cell space; assign each cell an indicator, initially set to 0.
Step 3: Consecutively check each solution of the population; set the indicator of a cell to 1 if one or more solutions are located in the cell.
Step 4: Count the number of cells whose indicator is 1 and assign this number to M; then the diversity metric is evaluated as
$$R = \frac{M}{|Q|}.$$

Remark 4.1. It is evident that this diversity metric is sensitively influenced by the number of sections β. First of all, the grid of the objective function value space should not be too fine (β chosen as a large number); otherwise, almost every solution can be located in a different cell, which is not good for the evaluation of diversity. On the other hand, the grid should not be too coarse (β chosen as a small number) either, because then many solutions, even if they are uniformly distributed, may be put into the same cell and consequently counted as just one. It is reasonable to set β equal to, or a little larger than, the population size. In this way, the ratio R can be guaranteed to lie between 0 and 1.

For a certain number of solutions, it is obvious that a larger R implies that the solutions are distributed over more distinct cells, which means a better diversity is obtained. Fig. 4 illustrates an example of the cell-based diversity metric for a hypothetical problem with 20 solutions in total. In the figure, two or more solutions are located in each of the cells A, B, C, D and E, so they are counted as just 5 individuals; in addition, there are 8 other cells containing one solution each. Then the diversity metric for this set of solutions is R = 13/20 = 0.65.

Fig. 4. An example for the cell-based diversity metric.
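The metric can be sketched in Python as follows (an illustration with hypothetical names, assuming per-dimension bound vectors lb and ub; the paper's own implementation is in MATLAB):

```python
import numpy as np

def cell_diversity_metric(FQ, lb, ub, beta):
    """Cell-based diversity metric R = M / |Q|. FQ is a (|Q|, p) array of
    objective vectors; lb and ub bound the Pareto frontier per dimension;
    beta is the number of sections per dimension."""
    FQ = np.asarray(FQ, dtype=float)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    width = (ub - lb) / beta
    width[width == 0] = 1.0                  # guard against a degenerate dimension
    idx = np.clip(np.floor((FQ - lb) / width).astype(int), 0, beta - 1)
    M = len({tuple(row) for row in idx})     # number of cells with indicator 1
    return M / len(FQ)
```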

5. Numerical experiments

In this section, we investigate the numerical performance of OSMGA. First, we apply OSMGA to two multi-objective benchmarks quoted from [5]. Then, we compare OSMGA with the methods proposed in CEC'09 [48] using the test problems proposed for CEC'09 [49]. All the numerical experiments are implemented in MATLAB (2010a) on an ACER ASPIRE 4730Z laptop with 2 GB RAM and a 2.16 GHz CPU.



5.1. Numerical performance of OSMGA

The test problems (see Table 2) applied in this subsection are quoted from [5]. Problem SCH is a one-dimensional convex problem; its Pareto solutions are $x \in [0, 2]$. Problem FON is a three-dimensional nonconvex problem; its Pareto solutions satisfy $x_1 = x_2 = x_3$, where $x_i \in [-1/\sqrt{3}, 1/\sqrt{3}],\ i = 1, 2, 3$. Fig. 5 depicts the objective function value spaces of Problems SCH and FON (the red curves represent the Pareto frontiers). From the figure, one can observe that their Pareto frontiers are connected.

We solve the proposed benchmarks by OSMGA and other MOGAs, such as VEGA [35], NSGA [38], NSGAII [5] and NPGA [15]. The results are compared using the evaluation system developed in the previous section. In the implementation of OSMGA, the parameters are adjusted according to the scale of the problem. Empirically, if the dimension of the problem is n, then the population size is 2n-5n and the maximal generation number is 20n-50n; the crossover rate and the mutation rate are 0.4-0.5 and 0.2-0.3, respectively. In Algorithm 3.2, we use 10-15 for the number of sections in each dimension. In the following tables, these notations are used:

- MER: the modified error ratio of the Pareto solutions;
- ave. MER: the average of the modified error ratio;
- std. MER: the standard deviation of the modified error ratio;
- R: the diversity metric of the Pareto solutions;
- ave. R: the average of the diversity metric;
- std. R: the standard deviation of the diversity metric.

For each problem, we run OSMGA independently 100 times. Table 3 demonstrates the statistical results of the experiments. From the table, the average of the modified error ratio is very close to 1 and its standard deviation is almost 0, which reveals that, for the considered benchmarks, the approximate Pareto frontier obtained by OSMGA is steadily located on the real Pareto frontier. The average and the standard deviation of the diversity metric of Problem SCH's solutions are 0.50 and 0.11, respectively, which are not very good; however, for Problem FON they are 0.83 and 0.08, respectively, which is very promising.

Fig. 6 depicts some intermediate results of Problems SCH and FON solved by OSMGA. From the figures, we can observe that OSMGA converges very fast on these two problems. For Problem SCH, the range of the Pareto frontier is located by the third generation and the implementation terminates at the tenth generation. For Problem FON, the range of the Pareto frontier is located by the 19th generation and the simulation terminates at the 30th generation. The obtained approximate Pareto frontiers of both problems are accurate and uniformly distributed.

Table 4 illustrates the numerical results of Problem SCH solved by OSMGA, VEGA, NSGA, NSGAII and NPGA over 10 independent implementations. From the table, one can observe that the approximate Pareto frontiers obtained by the different methods are all very close to the real Pareto frontier.

Table 2. Multi-objective test problems.

| Pro. | n | Variable bounds | Objective functions | Optimal solutions | Comments |
|---|---|---|---|---|---|
| SCH | 1 | $[-5, 10]$ | $f_1(x) = x^2$, $f_2(x) = (x-2)^2$ | $x \in [0, 2]$ | convex |
| FON | 3 | $[-4, 4]$ | $f_1(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i - \frac{1}{\sqrt{3}}\right)^2\right)$, $f_2(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i + \frac{1}{\sqrt{3}}\right)^2\right)$ | $x_1 = x_2 = x_3 \in [-1/\sqrt{3}, 1/\sqrt{3}]$ | nonconvex |
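For reference, the two benchmark objective functions of Table 2 can be written directly in Python (a sketch; the experiments in the paper were run in MATLAB):

```python
import numpy as np

def sch(x):
    """Problem SCH (Table 2): n = 1, x in [-5, 10], Pareto set x in [0, 2]."""
    return np.array([x ** 2, (x - 2.0) ** 2])

def fon(x):
    """Problem FON (Table 2): n = 3, x_i in [-4, 4], Pareto set
    x_1 = x_2 = x_3 in [-1/sqrt(3), 1/sqrt(3)]."""
    x = np.asarray(x, dtype=float)
    c = 1.0 / np.sqrt(3.0)
    return np.array([1.0 - np.exp(-np.sum((x - c) ** 2)),
                     1.0 - np.exp(-np.sum((x + c) ** 2))])
```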


Fig. 5. Objective function value space of Problems (a) SCH and (b) FON.

Table 3. Numerical performance of OSMGA.

| Pro. | ave. MER | std. MER | ave. R | std. R |
|---|---|---|---|---|
| SCH | 0.99 | 0.00 | 0.50 | 0.11 |
| FON | 0.99 | 0.00 | 0.83 | 0.08 |


However, the diversity metric of OSMGA is evidently larger than those of the other methods. It is unsurprising that the diversity metric of VEGA is lower than those of the other four methods: VEGA, as the first multi-objective genetic algorithm, did not take diversity into account. NSGAII, although not very stable, enjoys a modest diversity metric. Fig. 7(a) depicts the best implementation of each method. We can observe that the approximate Pareto solutions obtained by these methods are close to the real Pareto frontier, which fits the data in Table 4. However, the solutions obtained by VEGA, NSGA and NPGA are very concentrated, while the solutions obtained by OSMGA and NSGAII are better distributed.

Problem FON is a nonconvex multi-objective optimization problem whose Pareto solutions satisfy $x_1 = x_2 = x_3 \in [-1/\sqrt{3}, 1/\sqrt{3}]$. Table 5 illustrates 10 independent implementations of OSMGA, VEGA, NSGA, NSGAII and NPGA on Problem FON.

Fig. 6. Intermediate results of Problems (a) SCH and (b) FON solved by OSMGA.

Table 4. Comparison among OSMGA and other algorithms on problem SCH.

| Implementation index | OSMGA MER | OSMGA R | VEGA MER | VEGA R | NSGA MER | NSGA R | NSGAII MER | NSGAII R | NPGA MER | NPGA R |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.99 | 0.53 | 0.99 | 0.06 | 0.99 | 0.33 | 0.99 | 0.23 | 0.99 | 0.33 |
| 2 | 0.99 | 0.43 | 0.99 | 0.13 | 0.99 | 0.33 | 0.99 | 0.33 | 0.99 | 0.33 |
| 3 | 0.99 | 0.60 | 0.99 | 0.06 | 0.99 | 0.33 | 0.99 | 0.06 | 0.99 | 0.33 |
| 4 | 0.99 | 0.56 | 0.99 | 0.13 | 0.99 | 0.33 | 0.99 | 0.43 | 0.99 | 0.33 |
| 5 | 0.99 | 0.46 | 0.99 | 0.13 | 0.99 | 0.33 | 0.98 | 0.13 | 0.99 | 0.33 |
| 6 | 0.99 | 0.46 | 0.99 | 0.13 | 0.99 | 0.33 | 0.98 | 0.06 | 0.99 | 0.33 |
| 7 | 0.99 | 0.40 | 0.99 | 0.13 | 0.67 | 0.33 | 0.99 | 0.06 | 0.99 | 0.33 |
| 8 | 0.99 | 0.54 | 0.99 | 0.13 | 0.99 | 0.33 | 0.99 | 0.30 | 0.99 | 0.33 |
| 9 | 0.99 | 0.70 | 0.99 | 0.06 | 0.99 | 0.33 | 0.99 | 0.03 | 0.99 | 0.33 |
| 10 | 0.99 | 0.30 | 0.99 | 0.06 | 0.41 | 0.33 | 0.98 | 0.43 | 0.99 | 0.33 |


Fig. 7. Numerical comparison of MOGAs on Problems (a) SCH and (b) FON.


From the table, we can observe that the solutions obtained by OSMGA, NSGAII and NPGA are all very close to the real Pareto frontier; some of the solutions obtained by VEGA are not very good, and the solutions obtained by NSGA failed. For the diversity metric, comparing OSMGA, NSGAII and NPGA, OSMGA enjoys the best performance: the solutions obtained by OSMGA are well distributed along the Pareto frontier, whereas most of the solutions obtained by NSGAII are distributed on the boundary, and the solutions obtained by NPGA are distributed in the middle of the real Pareto frontier but very close to each other. Fig. 7(b) shows the best performance of each method among the 10 implementations. We can observe that the figure is consistent with the table data.

5.2. Numerical comparison

In this subsection, we compare the numerical performance of OSMGA with the methods proposed at the special session on performance assessment of unconstrained/bound constrained multi-objective optimization algorithms at CEC'09. There are 13 algorithms submitted to the special session:

1. MOEAD [47];
2. GDE3 [23];
3. MOEADGM [1];
4. MTS [40];
5. LiuLiAlgorithm [26];
6. DMOEADD [27];
7. NSGAIILS [37];
8. OWMOSaDE [16];
9. ClusteringMOEA [43];
10. AMGA [39];
11. MOEP [33];
12. DECMOSA-SQP [45];
13. OMOEAII [7].

The test problems applied in this subsection are quoted from [49]; they were used as benchmarks in CEC'09. Fig. 8 illustrates the objective function value spaces of these test problems (the figure of test problem 2 is omitted here since it is similar to that of test problem 1); the red curve/surface represents their Pareto frontiers. Among these test problems, Problems 1-7 have two objective functions, whereas Problems 8-10 have three. The Pareto solutions of Problems 5, 6 and 9 are disconnected, while the others are connected.

In order to evaluate the numerical performance, we use the performance metric IGD proposed in [49].

Table 5. Comparison among OSMGA, VEGA, NSGA, NSGAII and NPGA on problem FON.

| Implementation index | OSMGA MER | OSMGA R | VEGA MER | VEGA R | NSGA MER | NSGA R | NSGAII MER | NSGAII R | NPGA MER | NPGA R |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.99 | 0.37 | 0.79 | 0.04 | 0.84 | 0.01 | 0.99 | 0.67 | 0.99 | 0.02 |
| 2 | 0.99 | 0.35 | 0.76 | 0.02 | 0.81 | 0.01 | 0.98 | 0.57 | 0.99 | 0.04 |
| 3 | 0.99 | 0.30 | 0.69 | 0.02 | 0.74 | 0.01 | 0.99 | 0.33 | 0.99 | 0.06 |
| 4 | 0.99 | 0.25 | 0.70 | 0.04 | 0.85 | 0.01 | 0.98 | 0.46 | 0.99 | 0.08 |
| 5 | 0.99 | 0.30 | 0.90 | 0.02 | 0.69 | 0.01 | 0.99 | 0.07 | 0.99 | 0.05 |
| 6 | 0.99 | 0.53 | 0.89 | 0.04 | 0.88 | 0.01 | 0.98 | 0.48 | 0.99 | 0.05 |
| 7 | 0.99 | 0.30 | 0.88 | 0.04 | 0.75 | 0.01 | 0.98 | 0.54 | 0.99 | 0.07 |
| 8 | 0.99 | 0.50 | 0.89 | 0.04 | 0.79 | 0.01 | 0.97 | 0.72 | 0.99 | 0.04 |
| 9 | 0.99 | 0.28 | 0.89 | 0.02 | 0.89 | 0.01 | 0.98 | 0.35 | 0.99 | 0.07 |
| 10 | 0.99 | 0.34 | 0.89 | 0.04 | 0.76 | 0.01 | 0.97 | 0.58 | 0.99 | 0.05 |


Fig. 8. The objective function value space for test problems.


Suppose that $P^*$ is a set of uniformly distributed points along the Pareto frontier (in the objective function value space), and let $A$ be a set of solutions obtained by a certain solver. Then the average distance from $P^*$ to $A$ is defined as
$$IGD(A, P^*) = \frac{\sum_{v \in P^*} d(v, A)}{|P^*|},$$
where $d(v, A)$ is the minimum Euclidean distance between $v$ and the points in $A$, i.e.,
$$d(v, A) = \min_{y \in A} \|v - y\|.$$
In fact, $P^*$ represents a sample set of the real Pareto frontier; if $|P^*|$ is large enough to approximate the Pareto frontier very well, then $IGD(A, P^*)$ measures both the diversity and the convergence of $A$ in a sense. A smaller $IGD(A, P^*)$ means that the set $A$ is closer to the real Pareto frontier and has better diversity.
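A minimal Python sketch of the IGD computation (our own illustration, not the CEC'09 reference implementation):

```python
import numpy as np

def igd(FA, pf_samples):
    """Inverted generational distance IGD(A, P*): the average, over the
    frontier samples v in P*, of the minimum Euclidean distance from v
    to the obtained objective vectors FA."""
    FA = np.asarray(FA, dtype=float)
    P = np.asarray(pf_samples, dtype=float)
    d = np.linalg.norm(P[:, None, :] - FA[None, :, :], axis=2).min(axis=1)
    return float(d.mean())
```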

Table 6. The numerical performance evaluated by IGD.

| Rank | UF1 | IGD | UF2 | IGD | UF3 | IGD |
|---|---|---|---|---|---|---|
| 1 | MOEAD | 0.00435 | MTS | 0.00615 | MOEAD | 0.00742 |
| 2 | GDE3 | 0.00534 | MOEADGM | 0.0064 | LiuLiAlgorithm | 0.01497 |
| 3 | MOEADGM | 0.0062 | DMOEADD | 0.00679 | OSMGA | 0.02241 |
| 4 | MTS | 0.00646 | MOEAD | 0.00679 | DMOEADD | 0.03337 |
| 5 | LiuLiAlgorithm | 0.00785 | OWMOSaDE | 0.0081 | MOEADGM | 0.049 |
| 6 | DMOEADD | 0.01038 | GDE3 | 0.01195 | MTS | 0.0531 |
| 7 | NSGAIILS | 0.01153 | LiuLiAlgorithm | 0.0123 | ClusteringMOEA | 0.0549 |
| 8 | OWMOSaDE | 0.0122 | NSGAIILS | 0.01237 | AMGA | 0.06998 |
| 9 | OSMGA | 0.02868 | AMGA | 0.01623 | DECMOSA-SQP | 0.0935 |
| 10 | ClusteringMOEA | 0.0299 | OSMGA | 0.01815 | MOEP | 0.099 |
| 11 | AMGA | 0.03588 | MOEP | 0.0189 | OWMOSaDE | 0.103 |
| 12 | MOEP | 0.0596 | ClusteringMOEA | 0.0228 | NSGAIILS | 0.10603 |
| 13 | DECMOSA-SQP | 0.07702 | DECMOSA-SQP | 0.02834 | GDE3 | 0.10639 |
| 14 | OMOEAII | 0.08564 | OMOEAII | 0.03057 | OMOEAII | 0.27141 |

| Rank | UF4 | IGD | UF5 | IGD | UF6 | IGD |
|---|---|---|---|---|---|---|
| 1 | MTS | 0.02356 | MTS | 0.01489 | MOEAD | 0.00587 |
| 2 | GDE3 | 0.0265 | GDE3 | 0.03928 | MTS | 0.05917 |
| 3 | DECMOSA-SQP | 0.03392 | AMGA | 0.09405 | DMOEADD | 0.06673 |
| 4 | AMGA | 0.04062 | LiuLiAlgorithm | 0.16186 | OMOEAII | 0.07338 |
| 5 | DMOEADD | 0.04268 | DECMOSA-SQP | 0.16713 | ClusteringMOEA | 0.0871 |
| 6 | MOEP | 0.0427 | OMOEAII | 0.1692 | MOEP | 0.1031 |
| 7 | LiuLiAlgorithm | 0.0435 | MOEAD | 0.18071 | DECMOSA-SQP | 0.12604 |
| 8 | OMOEAII | 0.04624 | MOEP | 0.2245 | AMGA | 0.12942 |
| 9 | MOEADGM | 0.0476 | ClusteringMOEA | 0.2473 | OSMGA | 0.14062 |
| 10 | OWMOSaDE | 0.0513 | DMOEADD | 0.31454 | LiuLiAlgorithm | 0.17555 |
| 11 | OSMGA | 0.05392 | OWMOSaDE | 0.4303 | OWMOSaDE | 0.1918 |
| 12 | NSGAIILS | 0.0584 | NSGAIILS | 0.5657 | GDE3 | 0.25091 |
| 13 | ClusteringMOEA | 0.0585 | OSMGA | 0.71407 | NSGAIILS | 0.31032 |
| 14 | MOEAD | 0.06385 | MOEADGM | 1.7919 | MOEADGM | 0.5563 |

| Rank | UF7 | IGD | UF8 | IGD | UF9 | IGD |
|---|---|---|---|---|---|---|
| 1 | MOEAD | 0.00444 | MOEAD | 0.0584 | OSMGA | 0.04762 |
| 2 | LiuLiAlgorithm | 0.0073 | DMOEADD | 0.06841 | DMOEADD | 0.04896 |
| 3 | MOEADGM | 0.0076 | LiuLiAlgorithm | 0.08235 | NSGAIILS | 0.0719 |
| 4 | DMOEADD | 0.01032 | NSGAIILS | 0.0863 | MOEAD | 0.07896 |
| 5 | MOEP | 0.0197 | OWMOSaDE | 0.0945 | GDE3 | 0.08248 |
| 6 | NSGAIILS | 0.02132 | MTS | 0.11251 | LiuLiAlgorithm | 0.09391 |
| 7 | ClusteringMOEA | 0.0223 | AMGA | 0.17125 | OWMOSaDE | 0.0983 |
| 8 | DECMOSA-SQP | 0.02416 | OMOEAII | 0.192 | MTS | 0.11442 |
| 9 | GDE3 | 0.02522 | OSMGA | 0.21083 | DECMOSA-SQP | 0.14111 |
| 10 | OMOEAII | 0.03354 | DECMOSA-SQP | 0.21583 | MOEADGM | 0.1878 |
| 11 | MTS | 0.04079 | ClusteringMOEA | 0.2383 | AMGA | 0.18861 |
| 12 | AMGA | 0.05707 | MOEADGM | 0.2446 | OMOEAII | 0.23179 |
| 13 | OWMOSaDE | 0.0585 | GDE3 | 0.24855 | ClusteringMOEA | 0.2934 |
| 14 | OSMGA | 0.13056 | MOEP | 0.423 | MOEP | 0.342 |

| Rank | UF10 | IGD |
|---|---|---|
| 1 | MTS | 0.15306 |
| 2 | OSMGA | 0.24821 |
| 3 | DMOEADD | 0.32211 |
| 4 | AMGA | 0.32418 |
| 5 | MOEP | 0.3621 |
| 6 | DECMOSA-SQP | 0.36985 |
| 7 | ClusteringMOEA | 0.4111 |
| 8 | GDE3 | 0.43326 |
| 9 | LiuLiAlgorithm | 0.44691 |
| 10 | MOEAD | 0.47415 |
| 11 | MOEADGM | 0.5646 |
| 12 | OMOEAII | 0.62754 |
| 13 | OWMOSaDE | 0.743 |
| 14 | NSGAIILS | 0.84468 |


In order to keep consistent with the final report of CEC'09 [48], in the implementation of OSMGA we set the population size to 100 for problems with two objectives and 150 for problems with three objectives, and the number of function evaluations is less than 300,000. For each test instance, we run OSMGA independently 30 times. The numerical performance evaluated by IGD is illustrated in Table 6.

From Table 6, the proposed method OSMGA performs best in solving Problem 9; its IGD evaluation is 0.04762, better than all the other solvers. In solving Problem 10, OSMGA (whose IGD evaluation is 0.24821) performs worse only than MTS (whose IGD evaluation is 0.15306) and better than all the other solvers. In solving Problem 3, the IGD evaluation of OSMGA (0.02241) ranks third, worse than MOEAD (0.00742) and LiuLiAlgorithm (0.01497). The numerical performance of OSMGA is moderate in solving Problems 1, 2, 4, 6 and 8; the IGD ranks are 9, 10, 11, 9 and 9, respectively. For Problems 5 and 7, the numerical performance of OSMGA is not so good, ranking 13th and 14th, respectively.

It is not uncommon that the numerical performance of the proposed solver OSMGA is unstable across different test problems, because the numerical results are affected not only by the performance of the solver but also by the linearity and structure of the objective functions themselves. Furthermore, the crossover and mutation operators are also affected by the distribution of points in the objective function value space. Generally speaking, if a new point generated by the crossover or mutation operators has a high probability of locating around the Pareto frontier, then the Pareto frontier can be well approximated by the solver, as for Problems 3, 4 and 9. On the contrary, if it is hard for the crossover or mutation operators to generate new points around the Pareto frontier, then the problem is difficult for MOGAs, as for Problems 5, 6 and 7. In fact, this instability also appears in the other solvers, such as MOEAD, which is reported as the best solver in [48]: the IGD evaluation of MOEAD in solving Problems 4, 5 and 10 is not very good, ranking 14, 7 and 10, respectively.

Fig. 9 demonstrates the approximate Pareto frontiers of Problems 1, 2, 4 and 9, respectively. From Fig. 9(a) and (b), we can observe that for Problems 1 and 2 the proposed solver obtained very good representations of their Pareto frontiers. The results for Problem 4 (see Fig. 9(c)) are not very uniformly distributed, which may affect the IGD evaluation. Fig. 9(d) illustrates the approximate Pareto frontier of Problem 9; we can see that its structure is more or less an approximation of the real Pareto frontier.

Fig. 9. The approximated Pareto frontiers of Problems (a) 1, (b) 2, (c) 4 and (d) 9.

6. Conclusion

In this paper, we proposed an improved genetic algorithm for solving multi-objective optimization problems, abbreviated as OSMGA. In each iteration, the population is first ranked by the optimal sequence method, and the fitness of each solution is assigned using this rank. The diversity of a population is measured using a method combining Pareto-based ranking and cell-based density. Then, parents for the next generation are selected according to their fitness values and diversity measurements. In the proposed method, the best population is updated at each iteration such that the image of the solutions is diversely distributed along the real Pareto frontier. In order to compare the numerical performance of different multi-objective genetic algorithms, we developed an evaluation system which takes account of both a closeness metric and a diversity metric. We tested OSMGA on two well-known multi-objective benchmark suites; the results show that OSMGA is promising in solving small-scale convex multi-objective optimization problems. We also applied OSMGA to the test instances proposed in CEC'09 and compared its numerical results with those of the methods presented in CEC'09. The rank of OSMGA shows that it is efficient and robust in solving unconstrained multi-objective optimization problems.
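To make the selection scheme summarized above concrete, here is a minimal, self-contained sketch. It is not the authors' implementation: in particular, it substitutes a plain Pareto-domination count for the optimal sequence method, and all function names (`dominates`, `pareto_rank`, `cell_density`, `select_parents`) are illustrative.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(objs):
    """Number of solutions dominating each point; 0 means nondominated."""
    return [sum(dominates(o2, o1) for o2 in objs) for o1 in objs]

def cell_density(objs, divisions=10):
    """Count how many points share each point's cell in a uniform grid
    laid over the objective space (a simple cell-based density)."""
    dims = len(objs[0])
    lo = [min(o[i] for o in objs) for i in range(dims)]
    hi = [max(o[i] for o in objs) for i in range(dims)]
    def cell(o):
        return tuple(min(divisions - 1, int((v - l) / ((h - l) or 1) * divisions))
                     for v, l, h in zip(o, lo, hi))
    cells = [cell(o) for o in objs]
    return [cells.count(c) for c in cells]

def select_parents(pop, objs, k):
    """Greedy elitist selection: prefer low Pareto rank, and break ties
    by low cell density so that sparse regions of the frontier survive."""
    ranks, dens = pareto_rank(objs), cell_density(objs)
    order = sorted(range(len(pop)), key=lambda i: (ranks[i], dens[i]))
    return [pop[i] for i in order[:k]]

# Toy usage: three nondominated points and one dominated one.
objs = [(0.1, 0.9), (0.4, 0.5), (0.9, 0.1), (0.5, 0.6)]
pop = ["a", "b", "c", "d"]
print(select_parents(pop, objs, 2))  # nondominated, sparse solutions first
```

The key design point this sketch preserves is that fitness (rank) and diversity (cell density) are combined before selection, so elitism never collapses the population onto one region of the Pareto frontier.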

Acknowledgments

Changzhi Wu was partially supported by the Australian Research Council Linkage Program (Grant No. LP140100873), the Natural Science Foundation of China (61473326), and the Natural Science Foundation of Chongqing (cstc2013jcyjA00029 and cstc2013jjB0149).

This publication was made possible by NPRP Grant # NPRP 4-1162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author[s].

References

[1] C.M. Chen, Y.P. Chen, Q.F. Zhang, Enhancing MOEA/D with guided mutation and priority update for multi-objective optimization, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 209–216.

[2] J. Cheng, Q.S. Liu, H.Q. Lu, Y.W. Chen, Supervised kernel locality preserving projections for face recognition, Neurocomputing 67 (8) (2005) 443–449.

[3] D. Corne, J. Knowles, M. Oates, The Pareto envelope-based selection algorithm for multiobjective optimization, in: Parallel Problem Solving from Nature PPSN VI, Springer, 2000, pp. 839–848.

[4] K. Deb, Multi-objective optimization, in: Multi-objective Optimization Using Evolutionary Algorithms, Springer, 2001.

[5] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.

[6] C.M. Fonseca, P.J. Fleming, Multiobjective genetic algorithms, in: IEEE Colloquium on Genetic Algorithms for Control Systems Engineering, IEEE, 1993.

[7] S. Gao, S.Y. Zeng, B. Xiao, L. Zhang, An orthogonal multi-objective evolutionary algorithm with lower-dimensional crossover, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 1959–1964.

[8] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.

[9] D.E. Goldberg, A note on Boltzmann tournament selection for genetic algorithms and population-oriented simulated annealing, Complex Syst. 4 (4) (1990) 445–460.

[10] D.E. Goldberg, B. Korb, K. Deb, Messy genetic algorithms: motivation, analysis, and first results, Complex Syst. 3 (5) (1989) 493–530.

[11] D.E. Goldberg, J. Richardson, Genetic algorithms with sharing for multimodal function optimization, in: Proceedings of the Second International Conference on Genetic Algorithms and Their Application, L. Erlbaum Associates Inc., 1987, pp. 41–49.

[12] J.J. Grefenstette, Optimization of control parameters for genetic algorithms, IEEE Trans. Syst. Man Cybern. 16 (1) (1986) 122–128.

[13] P. Hajela, C.Y. Lin, Genetic search strategies in multicriterion optimal design, Struct. Multidiscip. Optim. 4 (2) (1992) 99–107.

[14] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.

[15] J. Horn, N. Nafpliotis, D.E. Goldberg, A niched Pareto genetic algorithm for multiobjective optimization, in: Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, IEEE, 1994, pp. 82–87.

[16] H.L. Huang, S.Z. Zhao, R. Mallipeddi, P.N. Suganthan, Multi-objective optimization using self-adaptive differential evolution algorithm, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 190–194.

[17] M.T. Jensen, Reducing the run-time complexity of multiobjective EAs: the NSGA-II and other algorithms, IEEE Trans. Evol. Comput. 7 (5) (2003) 503–515.

[18] D.F. Jones, S.K. Mirrazavi, M. Tamiz, Multi-objective meta-heuristics: an overview of the current state-of-the-art, Eur. J. Oper. Res. 137 (1) (2002) 1–9.

[19] H. Kitano, Neurogenetic learning: an integrated method of designing and training neural networks using genetic algorithms, Physica D: Nonlinear Phenom. 75 (1–3) (1994) 225–238.

[20] J. Knowles, D. Corne, The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation, in: Proceedings of the 1999 Congress on Evolutionary Computation, CEC'99, IEEE, 1999.

[21] J.D. Knowles, D.W. Corne, Approximating the nondominated front using the Pareto archived evolution strategy, Evol. Comput. 8 (2) (2000) 149–172.

[22] T.C. Koopmans, S. Reiter, A model of transportation, in: Activity Analysis of Production and Allocation, John Wiley & Sons Inc., New York, 1951.

[23] S. Kukkonen, J. Lampinen, Performance assessment of generalized differential evolution with a given set of problems, in: IEEE Congress on Evolutionary Computation, CEC'07, IEEE, 2007, pp. 3593–3600.

[24] P. Kumar, D. Gospodaric, P. Bauer, Improved genetic algorithm inspired by biological evolution, Soft Comput. 11 (10) (2007) 923–941.

[25] C.Y. Lin, J.L. Dong, Multi-Objective Optimization: Method and Theory (Chinese Edition), Jilin Education Press, 1992.

[26] H.L. Liu, X.Q. Li, The multiobjective evolutionary algorithm based on determined weight and sub-regional search, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 1928–1934.

[27] M.H. Liu, X.F. Zou, Y. Chen, Z.J. Wu, Performance assessment of DMOEA-DD with CEC 2009 MOEA competition test instances, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 2913–2918.

[28] Q.S. Liu, H.L. Jin, X.O. Tang, H.Q. Lu, S.D. Ma, A new extension of kernel feature and its application for visual recognition, Neurocomputing 71 (10–12) (2008) 1850–1856.

[29] Q.S. Liu, Y.C. Huang, D.N. Metaxas, Hypergraph with sampling for image retrieval, Pattern Recognit. 44 (10) (2011) 2255–2262.

[30] H. Lu, G.G. Yen, Rank-density-based multiobjective genetic algorithm and benchmark test function study, IEEE Trans. Evol. Comput. 7 (4) (2003) 325–343.

[31] T. Murata, H. Ishibuchi, MOGA: multi-objective genetic algorithms, in: IEEE International Conference on Evolutionary Computation, IEEE, 1995, p. 289.

[32] T. Murata, H. Ishibuchi, H. Tanaka, Multi-objective genetic algorithm and its applications to flowshop scheduling, Comput. Ind. Eng. 30 (4) (1996) 957–968.

[33] B.Y. Qu, P.N. Suganthan, Multi-objective evolutionary programming without non-domination sorting is up to twenty times faster, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 2934–2939.

[34] R. Allmendinger, J. Handl, J. Knowles, Multiobjective optimization: when objectives exhibit non-uniform latencies, Eur. J. Oper. Res. (2014).

[35] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, in: Proceedings of the First International Conference on Genetic Algorithms, L. Erlbaum Associates Inc., 1985, pp. 93–100.

[36] K. Sindhya, S. Ruuska, T. Haanpää, K. Miettinen, A new hybrid mutation operator for multiobjective optimization with differential evolution, Soft Comput. 15 (10) (2011) 2041–2055.

[37] K. Sindhya, A. Sinha, K. Deb, K. Miettinen, Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 2919–2926.

[38] N. Srinivas, K. Deb, Multiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput. 2 (3) (1994) 221–248.

[39] S. Tiwari, G. Fadel, P. Koch, K. Deb, Performance assessment of the hybrid archive-based micro genetic algorithm (AMGA) on the CEC'09 test problems, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 1935–1942.

[40] L.Y. Tseng, C. Chen, Multiple trajectory search for unconstrained/constrained multi-objective optimization, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 1951–1958.

[41] D.A. Van Veldhuizen, Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations, Technical Report, DTIC Document, 1999.

[42] D.A. Van Veldhuizen, G.B. Lamont, Multiobjective evolutionary algorithms: analyzing the state-of-the-art, Evol. Comput. 8 (2) (2000) 125–147.

[43] Y.P. Wang, C.Y. Dang, H.C. Li, L.X. Han, J.X. Wei, A clustering multi-objective evolutionary algorithm based on orthogonal and uniform design, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 2927–2933.

[44] G.G. Yen, H. Lu, Dynamic multiobjective evolutionary algorithm: adaptive cell-based rank and density estimation, IEEE Trans. Evol. Comput. 7 (3) (2003) 253–274.

[45] A. Zamuda, J. Brest, B. Bošković, V. Žumer, Differential evolution with self-adaptation and local search for constrained multiobjective optimization, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 195–202.

[46] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11 (6) (2007) 712–731.

[47] Q.F. Zhang, W.D. Liu, H. Li, The performance of a new version of MOEA/D on CEC'09 unconstrained MOP test instances, in: IEEE Congress on Evolutionary Computation, CEC'09, IEEE, 2009, pp. 203–208.

[48] Q.F. Zhang, P.N. Suganthan, Final report on CEC'09 MOEA competition, in: Congress on Evolutionary Computation (CEC 2009), 2009.

[49] Q.F. Zhang, A. Zhou, S.Z. Zhao, P.N. Suganthan, W.D. Liu, S. Tiwari, Multi-objective Optimization Test Instances for the CEC 2009 Special Session and Competition, Technical Report CES-487, University of Essex and Nanyang Technological University, 2008.


[50] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173–195.

[51] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: Improving the Strength Pareto Evolutionary Algorithm, Technical Report, 2001.

[52] E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Trans. Evol. Comput. 3 (4) (1999) 257–271.
