
Complex Intell. Syst. (2017) 3:67–81
DOI 10.1007/s40747-017-0039-7

ORIGINAL ARTICLE

A benchmark test suite for evolutionary many-objective optimization

Ran Cheng1 · Miqing Li1 · Ye Tian2 · Xingyi Zhang2 · Shengxiang Yang3 · Yaochu Jin4 · Xin Yao1,5

Received: 22 February 2017 / Accepted: 28 February 2017 / Published online: 23 March 2017
© The Author(s) 2017. This article is an open access publication

Abstract In the real world, it is not uncommon to face an optimization problem with more than three objectives. Such problems, called many-objective optimization problems (MaOPs), pose great challenges to the area of evolutionary computation. The failure of conventional Pareto-based multi-objective evolutionary algorithms in dealing with MaOPs motivates various new approaches. However, in contrast to the rapid development of algorithm design, performance investigation and comparison of algorithms have received little attention. Several test problem suites originally designed for multi-objective optimization are still dominantly used in many-objective optimization. In this paper, we carefully select (or modify) 15 test problems with diverse properties to construct a benchmark test suite, aiming to promote the research of evolutionary many-objective optimization (EMaO) by suggesting a set of test problems with a good representation of various real-world scenarios. Also, an open-source software platform with a user-friendly GUI is provided to facilitate the experimental execution and data observation.

✉ Corresponding author: Ran Cheng, [email protected]

Miqing Li, [email protected]
Ye Tian, [email protected]
Xingyi Zhang, [email protected]
Shengxiang Yang, [email protected]
Yaochu Jin, [email protected]
Xin Yao, [email protected]

1 CERCIA, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
2 School of Computer Science and Technology, Anhui University, Hefei 230039, China
3 School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, UK
4 Department of Computer Science, University of Surrey, Guildford, Surrey GU2 7XH, UK
5 Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China


Keywords Many-objective optimization · Benchmark test suite · Test functions · Software platform

Introduction

The field of evolutionary multi-objective optimization has developed rapidly over the last two decades, but the design of effective algorithms for addressing problems with more than three objectives (called many-objective optimization problems, MaOPs) remains a great challenge. First, the ineffectiveness of the Pareto dominance relation, which is the most important criterion in multi-objective optimization, results in the underperformance of traditional Pareto-based algorithms. Also, the aggravation of the conflict between convergence and diversity, along with increasing time or space requirements as well as parameter sensitivity, has become a key barrier to the design of effective many-objective optimization algorithms. Furthermore, the infeasibility of directly observing solutions can lead to serious difficulties in investigating and comparing the performance of algorithms. All of these suggest the pressing need for new methodologies designed for dealing with MaOPs, new performance metrics, and benchmark functions tailored for experimental and comparative studies of evolutionary many-objective optimization (EMaO) algorithms.



In recent years, a number of new algorithms have been proposed for dealing with MaOPs [1], including convergence enhancement based algorithms such as the grid-dominance-based evolutionary algorithm (GrEA) [2], the knee point-driven evolutionary algorithm (KnEA) [3], and the two-archive algorithm (Two_Arch2) [4]; decomposition-based algorithms such as NSGA-III [5], the evolutionary algorithm based on both dominance and decomposition (MOEA/DD) [6], and the reference vector-guided evolutionary algorithm (RVEA) [7]; and performance indicator-based algorithms such as the fast hypervolume-based evolutionary algorithm (HypE) [8]. In spite of the various algorithms proposed for dealing with MaOPs, the literature still lacks a benchmark test suite for evolutionary many-objective optimization.

Benchmark functions play an important role in understanding the strengths and weaknesses of evolutionary algorithms. In many-objective optimization, several scalable continuous benchmark function suites, such as DTLZ [9] and WFG [10], have been commonly used. Recently, researchers have also designed some problem suites specially for many-objective optimization [11–16]. However, each of these problem suites only represents one or a few aspects of real-world scenarios, and a set of benchmark functions with diverse properties for a systematic study of EMaO algorithms is not yet available in the area. On the other hand, existing benchmark functions typically have a "regular" Pareto front, overemphasize one specific property within a problem suite, or have some properties that appear rarely in real-world problems [17]. For example, the Pareto front of most of the DTLZ and WFG functions is similar to a simplex. This may be preferred by decomposition-based algorithms, which often use a set of uniformly distributed weight vectors in a simplex to guide the search [7,18]. This simplex-like shape of the Pareto front also causes an unusual property that any subset of all objectives of the problem can reach optimality [17,19]. This property can be very problematic in the context of objective reduction, since the Pareto front degenerates into only one point when omitting one objective [19]. Also, among the DTLZ and WFG functions there is no function having a convex Pareto front; however, a convex Pareto front may bring more difficulty (than a concave Pareto front) for decomposition-based algorithms in terms of maintaining the uniformity of solutions [20]. In addition, the DTLZ and WFG functions which are used as MaOPs with a degenerate Pareto front (i.e., DTLZ5, DTLZ6 and WFG3) have a nondegenerate part of the Pareto front when the number of objectives is larger than four [10,21,22]. This naturally affects the performance investigation of evolutionary algorithms on degenerate MaOPs.

This paper carefully selects/designs 15 test problems to construct a benchmark test suite for evolutionary many-objective optimization. The 15 benchmark problems have diverse properties that cover a good representation of various real-world scenarios, such as being multimodal, disconnected, degenerate, and/or nonseparable, and having an irregular Pareto front shape, a complex Pareto set, or a large number of decision variables (as summarized in Table 1). Our aim is to promote the research of evolutionary many-objective optimization via suggesting a set of benchmark functions with a good representation of various real-world scenarios. Also, an open-source software platform with a user-friendly GUI is provided to facilitate the experimental execution and data observation. In the following, Sect. "Function definitions" details the definitions of the 15 benchmark functions, and Sect. "Experimental setup" presents the experimental setup for benchmark studies, including general settings, performance indicators, and the software platform.

Table 1 Main properties of the 15 test functions

Problem | Properties | Note
MaF1 | Linear | No single optimal solution in any subset of objectives
MaF2 | Concave | No single optimal solution in any subset of objectives
MaF3 | Convex, multimodal |
MaF4 | Concave, multimodal | Badly scaled; no single optimal solution in any subset of objectives
MaF5 | Convex, biased | Badly scaled
MaF6 | Concave, degenerate |
MaF7 | Mixed, disconnected, multimodal |
MaF8 | Linear, degenerate |
MaF9 | Linear, degenerate | Pareto optimal solutions are similar to their image in the objective space
MaF10 | Mixed, biased |
MaF11 | Convex, disconnected, nonseparable |
MaF12 | Concave, nonseparable, biased, deceptive |
MaF13 | Concave, unimodal, nonseparable, degenerate | Complex Pareto set
MaF14 | Linear, partially separable, large scale | Non-uniform correlations between decision variables and objective functions
MaF15 | Convex, partially separable, large scale | Non-uniform correlations between decision variables and objective functions


Fig. 1 The Pareto front of MaF1 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 2 The Pareto front of MaF2 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively


Function definitions

The following notation is used throughout the definitions:

• D: number of decision variables
• M: number of objectives
• x = (x_1, x_2, …, x_D): decision vector
• f_i: the ith objective function

MaF1 (modified inverted DTLZ1 [23])

$$
\min \begin{cases}
f_1(\mathbf{x}) = (1 - x_1 x_2 \cdots x_{M-1})(1 + g(\mathbf{x}_M)) \\
f_2(\mathbf{x}) = (1 - x_1 x_2 \cdots x_{M-2}(1 - x_{M-1}))(1 + g(\mathbf{x}_M)) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = (1 - x_1(1 - x_2))(1 + g(\mathbf{x}_M)) \\
f_M(\mathbf{x}) = x_1(1 + g(\mathbf{x}_M))
\end{cases} \tag{1}
$$

with

$$
g(\mathbf{x}_M) = \sum_{i=M}^{D} (x_i - 0.5)^2 \tag{2}
$$

where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). As shown in Fig. 1, this test problem has an inverted PF, while the PS is relatively simple. This test problem is used to assess whether EMaO algorithms are capable of dealing with inverted PFs. Parameter settings of this test problem are: x ∈ [0, 1]^D and K = 10 (Fig. 2).


Fig. 3 The Pareto front of MaF3 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

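For readers who want to cross-check an implementation outside the provided platform, a minimal NumPy sketch of the MaF1 evaluation defined by Eqs. (1)–(2) is given below; the function name maf1 and the vectorized layout are our own choices, not part of the official MATLAB code.

```python
import numpy as np

def maf1(x, M):
    """Evaluate MaF1 (Eqs. (1)-(2)) for a single decision vector x of length D = M + K - 1."""
    x = np.asarray(x, dtype=float)
    g = np.sum((x[M - 1:] - 0.5) ** 2)          # Eq. (2): distance function over x_M
    f = np.empty(M)
    f[M - 1] = x[0] * (1 + g)                   # f_M = x_1 (1 + g)
    for i in range(M - 1):                      # f_1, ..., f_{M-1}
        prod = np.prod(x[: M - 1 - i])          # x_1 * ... * x_{M-1-i}
        if i > 0:
            prod *= 1 - x[M - 1 - i]            # trailing (1 - x_{M-i}) factor
        f[i] = (1 - prod) * (1 + g)
    return f

# Example: a 5-objective instance with K = 10, i.e., D = 14 decision variables.
print(maf1(np.full(14, 0.5), M=5))
```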

MaF2 (DTLZ2BZ [19])

$$
\min \begin{cases}
f_1(\mathbf{x}) = \cos(\theta_1)\cdots\cos(\theta_{M-2})\cos(\theta_{M-1})\,(1 + g_1(\mathbf{x}_M)) \\
f_2(\mathbf{x}) = \cos(\theta_1)\cdots\cos(\theta_{M-2})\sin(\theta_{M-1})\,(1 + g_2(\mathbf{x}_M)) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = \cos(\theta_1)\sin(\theta_2)\,(1 + g_{M-1}(\mathbf{x}_M)) \\
f_M(\mathbf{x}) = \sin(\theta_1)\,(1 + g_M(\mathbf{x}_M))
\end{cases} \tag{3}
$$

with

$$
\begin{aligned}
g_i(\mathbf{x}_M) &= \sum_{j=M+(i-1)\lfloor\frac{D-M+1}{M}\rfloor}^{M+i\lfloor\frac{D-M+1}{M}\rfloor-1}
\left(\left(\frac{x_j}{2}+\frac{1}{4}\right)-0.5\right)^2 \quad \text{for } i = 1,\ldots,M-1 \\
g_M(\mathbf{x}_M) &= \sum_{j=M+(M-1)\lfloor\frac{D-M+1}{M}\rfloor}^{D}
\left(\left(\frac{x_j}{2}+\frac{1}{4}\right)-0.5\right)^2 \\
\theta_i &= \frac{\pi}{2}\left(\frac{x_i}{2}+\frac{1}{4}\right) \quad \text{for } i = 1,\ldots,M-1
\end{aligned} \tag{4}
$$

where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). This test problem is modified from DTLZ2 to increase the difficulty of convergence. In the original DTLZ2, convergence is very likely to be achieved once g(x_M) = 0 is satisfied; by contrast, for this modified version, all the objectives have to be optimized simultaneously to reach the true PF. Therefore, this test problem is used to assess whether an MOEA is able to perform concurrent convergence on different objectives. Parameter settings are: x ∈ [0, 1]^D and K = 10.

MaF3 (convex DTLZ3 [5])

$$
\min \begin{cases}
f_1(\mathbf{x}) = \left[\cos(\tfrac{\pi}{2}x_1)\cdots\cos(\tfrac{\pi}{2}x_{M-2})\cos(\tfrac{\pi}{2}x_{M-1})\,(1+g(\mathbf{x}_M))\right]^4 \\
f_2(\mathbf{x}) = \left[\cos(\tfrac{\pi}{2}x_1)\cdots\cos(\tfrac{\pi}{2}x_{M-2})\sin(\tfrac{\pi}{2}x_{M-1})\,(1+g(\mathbf{x}_M))\right]^4 \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = \left[\cos(\tfrac{\pi}{2}x_1)\sin(\tfrac{\pi}{2}x_2)\,(1+g(\mathbf{x}_M))\right]^4 \\
f_M(\mathbf{x}) = \left[\sin(\tfrac{\pi}{2}x_1)\,(1+g(\mathbf{x}_M))\right]^2
\end{cases} \tag{5}
$$

with

$$
g(\mathbf{x}_M) = 100\left[|\mathbf{x}_M| + \sum_{i=M}^{D}\left((x_i-0.5)^2 - \cos(20\pi(x_i-0.5))\right)\right] \tag{6}
$$

where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). As shown in Fig. 3, this test problem has a convex PF, and there are a large number of local fronts. This test problem is mainly used to assess whether EMaO algorithms are capable of dealing with convex PFs. Parameter settings of this test problem are: x ∈ [0, 1]^D and K = 10 (Fig. 4).

MaF4 (inverted badly scaled DTLZ3)

$$
\min \begin{cases}
f_1(\mathbf{x}) = a \times \left(1 - \cos(\tfrac{\pi}{2}x_1)\cdots\cos(\tfrac{\pi}{2}x_{M-2})\cos(\tfrac{\pi}{2}x_{M-1})\right)(1+g(\mathbf{x}_M)) \\
f_2(\mathbf{x}) = a^2 \times \left(1 - \cos(\tfrac{\pi}{2}x_1)\cdots\cos(\tfrac{\pi}{2}x_{M-2})\sin(\tfrac{\pi}{2}x_{M-1})\right)(1+g(\mathbf{x}_M)) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = a^{M-1} \times \left(1 - \cos(\tfrac{\pi}{2}x_1)\sin(\tfrac{\pi}{2}x_2)\right)(1+g(\mathbf{x}_M)) \\
f_M(\mathbf{x}) = a^M \times \left(1 - \sin(\tfrac{\pi}{2}x_1)\right)(1+g(\mathbf{x}_M))
\end{cases} \tag{7}
$$

with

$$
g(\mathbf{x}_M) = 100\left[|\mathbf{x}_M| + \sum_{i=M}^{D}\left((x_i-0.5)^2 - \cos(20\pi(x_i-0.5))\right)\right] \tag{8}
$$


Fig. 4 The Pareto front of MaF4 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 5 The Pareto front of MaF5 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). The fitness landscape of this test problem is highly multimodal, containing 3^K − 1 local Pareto-optimal fronts. This test problem is used to assess whether EMaO algorithms are capable of dealing with badly scaled PFs, especially when the fitness landscape is highly multimodal. Parameter settings of this test problem are: x ∈ [0, 1]^D, K = 10 and a = 2.

MaF5 (convex badly scaled DTLZ4)

$$
\min \begin{cases}
f_1(\mathbf{x}) = a^M \times \left[\cos(\tfrac{\pi}{2}x_1^\alpha)\cdots\cos(\tfrac{\pi}{2}x_{M-2}^\alpha)\cos(\tfrac{\pi}{2}x_{M-1}^\alpha)\,(1+g(\mathbf{x}_M))\right]^4 \\
f_2(\mathbf{x}) = a^{M-1} \times \left[\cos(\tfrac{\pi}{2}x_1^\alpha)\cdots\cos(\tfrac{\pi}{2}x_{M-2}^\alpha)\sin(\tfrac{\pi}{2}x_{M-1}^\alpha)\,(1+g(\mathbf{x}_M))\right]^4 \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = a^2 \times \left[\cos(\tfrac{\pi}{2}x_1^\alpha)\sin(\tfrac{\pi}{2}x_2^\alpha)\,(1+g(\mathbf{x}_M))\right]^4 \\
f_M(\mathbf{x}) = a \times \left[\sin(\tfrac{\pi}{2}x_1^\alpha)\,(1+g(\mathbf{x}_M))\right]^4
\end{cases} \tag{9}
$$

with

$$
g(\mathbf{x}_M) = \sum_{i=M}^{D}(x_i - 0.5)^2 \tag{10}
$$

where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). As shown in Fig. 5, this test problem has a badly scaled PF, where each objective function is scaled to a substantially different range. Besides, the PS of this test problem has a highly biased distribution, where the majority of Pareto optimal solutions are crowded in a small subregion. This test problem is used to assess whether EMaO algorithms are capable of dealing with badly scaled PFs/PSs. Parameter settings of this test problem are: x ∈ [0, 1]^D, α = 100 and a = 2.
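The extent of the bad scaling in MaF4 and MaF5 follows directly from the factors a^i and a^{M−i+1} in Eqs. (7) and (9): with a = 2, neighbouring objectives differ by a factor of two, and the largest and smallest objectives by a factor of 2^{M−1}. The snippet below is only our illustration of these ranges, not part of the suite.

```python
# Per-objective scale factors on the Pareto fronts of MaF4 and MaF5 (a = 2).
M, a = 10, 2
maf4_scale = [a ** i for i in range(1, M + 1)]            # f_i of MaF4 scaled by a^i
maf5_scale = [a ** (M - i + 1) for i in range(1, M + 1)]  # f_i of MaF5 scaled by a^(M-i+1)
print(maf4_scale)  # [2, 4, 8, ..., 1024]
print(maf5_scale)  # [1024, 512, ..., 2]
```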

MaF6 (DTLZ5(I,M) [24])

$$
\min \begin{cases}
f_1(\mathbf{x}) = \cos(\theta_1)\cdots\cos(\theta_{M-2})\cos(\theta_{M-1})\,(1 + 100g(\mathbf{x}_M)) \\
f_2(\mathbf{x}) = \cos(\theta_1)\cdots\cos(\theta_{M-2})\sin(\theta_{M-1})\,(1 + 100g(\mathbf{x}_M)) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = \cos(\theta_1)\sin(\theta_2)\,(1 + 100g(\mathbf{x}_M)) \\
f_M(\mathbf{x}) = \sin(\theta_1)\,(1 + 100g(\mathbf{x}_M))
\end{cases} \tag{11}
$$

with

$$
\theta_i = \begin{cases}
\frac{\pi}{2}x_i & \text{for } i = 1,2,\ldots,I-1 \\
\frac{\pi}{4(1+g(\mathbf{x}_M))}\left(1 + 2g(\mathbf{x}_M)x_i\right) & \text{for } i = I,\ldots,M-1
\end{cases} \tag{12}
$$

$$
g(\mathbf{x}_M) = \sum_{i=M}^{D}(x_i - 0.5)^2 \tag{13}
$$


Fig. 6 The Pareto front of MaF6 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 7 The Pareto front of MaF7 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively


where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). As shown in Fig. 6, this test problem has a degenerate PF whose dimensionality is defined by the parameter I. In other words, the PF of this test problem is always an I-dimensional manifold regardless of the specific number of objectives. This test problem is used to assess whether EMaO algorithms are capable of dealing with degenerate PFs. Parameter settings are: x ∈ [0, 1]^D, I = 2 and K = 10.
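A compact sketch of the MaF6 evaluation, following the reconstruction of Eqs. (11)–(13) above, may help clarify how the parameter I leaves only I − 1 of the θ angles free; again, the function name and layout are ours.

```python
import numpy as np

def maf6(x, M, I=2):
    """Evaluate MaF6 (Eqs. (11)-(13)); the PF is an I-dimensional manifold."""
    x = np.asarray(x, dtype=float)
    g = np.sum((x[M - 1:] - 0.5) ** 2)                       # Eq. (13)
    theta = np.empty(M - 1)
    theta[: I - 1] = 0.5 * np.pi * x[: I - 1]                # free position angles
    theta[I - 1:] = np.pi / (4 * (1 + g)) * (1 + 2 * g * x[I - 1 : M - 1])
    f = np.empty(M)
    for i in range(M):                                        # f_1, ..., f_M
        f[i] = (1 + 100 * g) * np.prod(np.cos(theta[: M - 1 - i]))
        if i > 0:
            f[i] *= np.sin(theta[M - 1 - i])
    return f
```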

MaF7 (DTLZ7 [9])

$$
\min \begin{cases}
f_1(\mathbf{x}) = x_1 \\
f_2(\mathbf{x}) = x_2 \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = x_{M-1} \\
f_M(\mathbf{x}) = h(f_1, f_2, \ldots, f_{M-1}, g) \times (1 + g(\mathbf{x}_M))
\end{cases} \tag{14}
$$

with

$$
\begin{cases}
g(\mathbf{x}_M) = 1 + \frac{9}{|\mathbf{x}_M|}\sum_{i=M}^{D} x_i \\
h(f_1, f_2, \ldots, f_{M-1}, g) = M - \sum_{i=1}^{M-1}\left[\frac{f_i}{1+g}\left(1 + \sin(3\pi f_i)\right)\right]
\end{cases} \tag{15}
$$

where the number of decision variables is D = M + K − 1, and K denotes the size of x_M, namely K = |x_M|, with x_M = (x_M, …, x_D). As shown in Fig. 7, this test problem has a disconnected PF where the number of disconnected segments is 2^{M−1}. This test problem is used to assess whether EMaO algorithms are capable of dealing with disconnected PFs, especially when the number of disconnected segments is large in a high-dimensional objective space. Parameter settings are: x ∈ [0, 1]^D and K = 20.
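Since the disconnected segments come purely from the sinusoidal h function, MaF7 is short to implement; a hedged NumPy sketch of Eqs. (14)–(15) follows (the naming is ours).

```python
import numpy as np

def maf7(x, M):
    """Evaluate MaF7 (DTLZ7, Eqs. (14)-(15)); x has length D = M + K - 1 with K = 20."""
    x = np.asarray(x, dtype=float)
    xm = x[M - 1:]
    g = 1 + 9 * np.sum(xm) / xm.size              # Eq. (15), first line
    f = np.empty(M)
    f[: M - 1] = x[: M - 1]                       # f_i = x_i for i < M
    h = M - np.sum(f[: M - 1] / (1 + g) * (1 + np.sin(3 * np.pi * f[: M - 1])))
    f[M - 1] = h * (1 + g)                        # f_M = h * (1 + g)
    return f
```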


Fig. 8 The Pareto optimal regions of MaF8 with three and ten objectives, shown in the two-dimensional decision space


MaF8 (multi-point distance minimization problem [11, 12])

This function considers a two-dimensional decision space. As its name suggests, for any point x = (x_1, x_2), MaF8 calculates the Euclidean distance from x to a set of M target points (A_1, A_2, …, A_M) of a given polygon. The goal of the problem is to optimize these M distance values simultaneously. It can be formulated as

$$
\min \begin{cases}
f_1(\mathbf{x}) = d(\mathbf{x}, A_1) \\
f_2(\mathbf{x}) = d(\mathbf{x}, A_2) \\
\quad\vdots \\
f_M(\mathbf{x}) = d(\mathbf{x}, A_M)
\end{cases} \tag{16}
$$

where d(x, A_i) denotes the Euclidean distance from point x to point A_i.

One important characteristic of MaF8 is that its Pareto optimal region in the decision space is typically a 2-D manifold (regardless of the dimensionality of its objective vectors). This naturally allows a direct observation of the search behavior of EMaO algorithms, e.g., the convergence of their population to the Pareto optimal solutions and the coverage of the population over the optimal region.

In this test suite, the regular polygon is used (to unify with MaF9). The center coordinates of the regular polygon (i.e., the Pareto optimal region) are (0, 0), and the radius of the polygon (i.e., the distance of the vertexes to the center) is 1.0. Parameter settings are: x ∈ [−10,000, 10,000]^2. Figure 8 shows the Pareto optimal regions of the three-objective and ten-objective MaF8.
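Because the problem is just a set of point distances, an evaluation sketch is only a few lines. The placement of the polygon vertexes (equally spaced on the unit circle, first vertex on top) is our assumption; the text above fixes only the center (0, 0) and radius 1.0.

```python
import numpy as np

def maf8(x, M):
    """Evaluate MaF8 (Eq. (16)): distances from x = (x1, x2) to the M polygon vertexes."""
    angles = 2 * np.pi * np.arange(M) / M + np.pi / 2      # assumed vertex placement
    vertexes = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.linalg.norm(vertexes - np.asarray(x, dtype=float), axis=1)
```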

MaF9 (multi-line distance minimization problem [25])

This function considers a two-dimensional decision space. For any point x = (x_1, x_2), MaF9 calculates the Euclidean distance from x to a set of M target straight lines, each of which passes through an edge of the given regular polygon with M vertexes (A_1, A_2, …, A_M), where M ≥ 3. The goal of MaF9 is to optimize these M distance values simultaneously. It can be formulated as

$$
\min \begin{cases}
f_1(\mathbf{x}) = d(\mathbf{x}, \overleftrightarrow{A_1A_2}) \\
f_2(\mathbf{x}) = d(\mathbf{x}, \overleftrightarrow{A_2A_3}) \\
\quad\vdots \\
f_M(\mathbf{x}) = d(\mathbf{x}, \overleftrightarrow{A_MA_1})
\end{cases} \tag{17}
$$

where $\overleftrightarrow{A_iA_j}$ is the target line passing through vertexes $A_i$ and $A_j$ of the regular polygon, and $d(\mathbf{x}, \overleftrightarrow{A_iA_j})$ denotes the Euclidean distance from point x to line $\overleftrightarrow{A_iA_j}$.

One key characteristic of MaF9 is that the points in the regular polygon (including the boundaries) and their objective images are similar in the sense of Euclidean geometry [25]. In other words, the ratio of the distance between any two points in the polygon to the distance between their corresponding objective vectors is a constant. This allows a straightforward understanding of the distribution of the objective vector set (e.g., its uniformity and coverage over the Pareto front) via observing the solution set in the two-dimensional decision space. In addition, for MaF9 with an even number of objectives (M = 2k where k ≥ 2), there exist k pairs of parallel target lines. Any point (outside the regular polygon) residing between a pair of parallel target lines is dominated by only a line segment parallel to these two lines. This property can pose a great challenge for EMaO algorithms which use Pareto dominance as the sole selection criterion in terms of convergence, typically leading to their populations being trapped between these parallel lines [14].


Fig. 9 The Pareto optimal regions of MaF9 with three and ten objectives, shown in the two-dimensional decision space


For MaF9, all points inside the polygon are Pareto optimal solutions. However, these points may not be the sole Pareto optimal solutions of the problem. If two target lines intersect outside the regular polygon, there exist some areas whose points are nondominated with respect to the interior points of the polygon. Such areas apparently exist in the problem with five or more objectives, in view of the convexity of the considered polygon. However, the geometric similarity holds only for the points inside the regular polygon, and Pareto optimal solutions located outside the polygon would affect this similarity property. We therefore set some regions of the search space infeasible. Formally, consider an M-objective MaF9 with a regular polygon of vertexes (A_1, A_2, …, A_M). For any two target lines $\overleftrightarrow{A_{i-1}A_i}$ and $\overleftrightarrow{A_nA_{n+1}}$ (without loss of generality, assuming i < n) that intersect at a point O outside the considered regular polygon, we can construct a polygon (denoted as $\Delta_{A_{i-1}A_iA_nA_{n+1}}$) bounded by a set of 2(n − i) + 2 line segments: $A_iA'_n$, $A'_nA'_{n-1}$, …, $A'_{i+1}A'_i$, $A'_iA_n$, $A_nA_{n-1}$, …, $A_{i+1}A_i$, where the points $A'_i, A'_{i+1}, \ldots, A'_{n-1}, A'_n$ are the symmetric points of $A_i, A_{i+1}, \ldots, A_{n-1}, A_n$ with respect to the central point O. We constrain the search space of the problem to lie outside such polygons (not including their boundaries). Now the points inside the regular polygon are the sole Pareto optimal solutions of the problem. In the implementation of the test problem, newly produced individuals located in the constrained areas are simply reproduced within the given search space until they are feasible.

In this test suite, the center coordinates of the regular polygon (i.e., the Pareto optimal region) are (0, 0), and the radius of the polygon (i.e., the distance of the vertexes to the center) is 1.0. Parameter settings are: x ∈ [−10,000, 10,000]^2. Figure 9 shows the Pareto optimal regions of the three-objective and ten-objective MaF9.

MaF10 (WFG1 [10])

$$
\min \begin{cases}
f_1(\mathbf{x}) = y_M + 2\left(1-\cos(\tfrac{\pi}{2}y_1)\right)\cdots\left(1-\cos(\tfrac{\pi}{2}y_{M-2})\right)\left(1-\cos(\tfrac{\pi}{2}y_{M-1})\right) \\
f_2(\mathbf{x}) = y_M + 4\left(1-\cos(\tfrac{\pi}{2}y_1)\right)\cdots\left(1-\cos(\tfrac{\pi}{2}y_{M-2})\right)\left(1-\sin(\tfrac{\pi}{2}y_{M-1})\right) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = y_M + 2(M-1)\left(1-\cos(\tfrac{\pi}{2}y_1)\right)\left(1-\sin(\tfrac{\pi}{2}y_2)\right) \\
f_M(\mathbf{x}) = y_M + 2M\left(1 - y_1 - \frac{\cos(10\pi y_1 + \pi/2)}{10\pi}\right)
\end{cases} \tag{18}
$$

with

$$
z_i = \frac{x_i}{2i} \quad \text{for } i = 1,\ldots,D \tag{19}
$$

$$
t^1_i = \begin{cases}
z_i & \text{if } i = 1,\ldots,K \\
\dfrac{|z_i - 0.35|}{\left|\lfloor 0.35 - z_i\rfloor + 0.35\right|} & \text{if } i = K+1,\ldots,D
\end{cases} \tag{20}
$$

$$
t^2_i = \begin{cases}
t^1_i & \text{if } i = 1,\ldots,K \\
0.8 + \dfrac{0.8\,(0.75 - t^1_i)\min(0, \lfloor t^1_i - 0.75\rfloor)}{0.75} - \dfrac{(1-0.8)\,(t^1_i - 0.85)\min(0, \lfloor 0.85 - t^1_i\rfloor)}{1-0.85} & \text{if } i = K+1,\ldots,D
\end{cases} \tag{21}
$$

$$
t^3_i = \left(t^2_i\right)^{0.02} \quad \text{for } i = 1,\ldots,D \tag{22}
$$

$$
t^4_i = \begin{cases}
\dfrac{\sum_{j=(i-1)K/(M-1)+1}^{iK/(M-1)} 2j\, t^3_j}{\sum_{j=(i-1)K/(M-1)+1}^{iK/(M-1)} 2j} & \text{if } i = 1,\ldots,M-1 \\[2ex]
\dfrac{\sum_{j=K+1}^{D} 2j\, t^3_j}{\sum_{j=K+1}^{D} 2j} & \text{if } i = M
\end{cases} \tag{23}
$$

$$
y_i = \begin{cases}
(t^4_i - 0.5)\max(1, t^4_M) + 0.5 & \text{if } i = 1,\ldots,M-1 \\
t^4_M & \text{if } i = M
\end{cases} \tag{24}
$$


Fig. 10 The Pareto front of MaF10 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively


where the number of decision variables is D = K + L, with K denoting the number of position variables and L denoting the number of distance variables. As shown in Fig. 10, this test problem has a scaled PF containing both convex and concave segments. Besides, there are a number of transformation functions correlating the decision variables and the objective functions. This test problem is used to assess whether EMaO algorithms are capable of dealing with PFs of complicated mixed geometries. Parameter settings are: x ∈ ∏_{i=1}^{D} [0, 2i], K = M − 1, and L = 10.

MaF11 (WFG2 [10])

$$
\min \begin{cases}
f_1(\mathbf{x}) = y_M + 2\left(1-\cos(\tfrac{\pi}{2}y_1)\right)\cdots\left(1-\cos(\tfrac{\pi}{2}y_{M-2})\right)\left(1-\cos(\tfrac{\pi}{2}y_{M-1})\right) \\
f_2(\mathbf{x}) = y_M + 4\left(1-\cos(\tfrac{\pi}{2}y_1)\right)\cdots\left(1-\cos(\tfrac{\pi}{2}y_{M-2})\right)\left(1-\sin(\tfrac{\pi}{2}y_{M-1})\right) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = y_M + 2(M-1)\left(1-\cos(\tfrac{\pi}{2}y_1)\right)\left(1-\sin(\tfrac{\pi}{2}y_2)\right) \\
f_M(\mathbf{x}) = y_M + 2M\left(1 - y_1\cos^2(5\pi y_1)\right)
\end{cases} \tag{25}
$$

with

$$
z_i = \frac{x_i}{2i} \quad \text{for } i = 1,\ldots,D \tag{26}
$$

$$
t^1_i = \begin{cases}
z_i & \text{if } i = 1,\ldots,K \\
\dfrac{|z_i - 0.35|}{\left|\lfloor 0.35 - z_i\rfloor + 0.35\right|} & \text{if } i = K+1,\ldots,D
\end{cases} \tag{27}
$$

$$
t^2_i = \begin{cases}
t^1_i & \text{if } i = 1,\ldots,K \\
t^1_{K+2(i-K)-1} + t^1_{K+2(i-K)} + 2\left|t^1_{K+2(i-K)-1} - t^1_{K+2(i-K)}\right| & \text{if } i = K+1,\ldots,(D+K)/2
\end{cases} \tag{28}
$$

$$
t^3_i = \begin{cases}
\dfrac{\sum_{j=(i-1)K/(M-1)+1}^{iK/(M-1)} t^2_j}{K/(M-1)} & \text{if } i = 1,\ldots,M-1 \\[2ex]
\dfrac{\sum_{j=K+1}^{(D+K)/2} t^2_j}{(D-K)/2} & \text{if } i = M
\end{cases} \tag{29}
$$

$$
y_i = \begin{cases}
(t^3_i - 0.5)\max(1, t^3_M) + 0.5 & \text{if } i = 1,\ldots,M-1 \\
t^3_M & \text{if } i = M
\end{cases} \tag{30}
$$

where the number of decision variables is D = K + L, with K denoting the number of position variables and L denoting the number of distance variables. As shown in Fig. 11, this test problem has a scaled disconnected PF. This test problem is used to assess whether EMaO algorithms are capable of dealing with scaled disconnected PFs. Parameter settings are: x ∈ ∏_{i=1}^{D} [0, 2i], K = M − 1, and L = 10.

MaF12 (WFG9 [10])

$$
\min \begin{cases}
f_1(\mathbf{x}) = y_M + 2\sin(\tfrac{\pi}{2}y_1)\cdots\sin(\tfrac{\pi}{2}y_{M-2})\sin(\tfrac{\pi}{2}y_{M-1}) \\
f_2(\mathbf{x}) = y_M + 4\sin(\tfrac{\pi}{2}y_1)\cdots\sin(\tfrac{\pi}{2}y_{M-2})\cos(\tfrac{\pi}{2}y_{M-1}) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = y_M + 2(M-1)\sin(\tfrac{\pi}{2}y_1)\cos(\tfrac{\pi}{2}y_2) \\
f_M(\mathbf{x}) = y_M + 2M\cos(\tfrac{\pi}{2}y_1)
\end{cases} \tag{31}
$$

with

$$
z_i = \frac{x_i}{2i} \quad \text{for } i = 1,\ldots,D \tag{32}
$$

$$
t^1_i = \begin{cases}
z_i^{\,0.02 + (50-0.02)\left(\frac{0.98}{49.98} - (1-2u_i)\left|\lfloor 0.5 - u_i\rfloor + \frac{0.98}{49.98}\right|\right)} & \text{if } i = 1,\ldots,D-1 \\
z_i & \text{if } i = D
\end{cases},
\qquad u_i = \frac{\sum_{j=i+1}^{D} z_j}{D-i} \tag{33}
$$

$$
t^2_i = \begin{cases}
1 + \left(|t^1_i - 0.35| - 0.001\right)\left(\dfrac{349.95\lfloor t^1_i - 0.349\rfloor}{0.349} + \dfrac{649.95\lfloor 0.351 - t^1_i\rfloor}{0.649} + 1000\right) & \text{if } i = 1,\ldots,K \\[2ex]
\dfrac{1}{97}\left(1 + \cos\left[122\pi\left(0.5 - \dfrac{|t^1_i - 0.35|}{2(\lfloor 0.35 - t^1_i\rfloor + 0.35)}\right)\right] + 380\left(\dfrac{|t^1_i - 0.35|}{2(\lfloor 0.35 - t^1_i\rfloor + 0.35)}\right)^2\right) & \text{if } i = K+1,\ldots,D
\end{cases} \tag{34}
$$

$$
t^3_i = \begin{cases}
\dfrac{\sum_{j=(i-1)K/(M-1)+1}^{iK/(M-1)}\left(t^2_j + \sum_{k=0}^{K/(M-1)-2}\left|t^2_j - t^2_p\right|\right)}{\left\lceil\frac{K/(M-1)}{2}\right\rceil\left(1 + 2\frac{K}{M-1} - 2\left\lceil\frac{K/(M-1)}{2}\right\rceil\right)} & \text{if } i = 1,\ldots,M-1 \\[3ex]
\dfrac{\sum_{j=K+1}^{D}\left(t^2_j + \sum_{k=0}^{D-K-2}\left|t^2_j - t^2_q\right|\right)}{\left\lceil\frac{D-K}{2}\right\rceil\left(1 + 2(D-K) - 2\left\lceil\frac{D-K}{2}\right\rceil\right)} & \text{if } i = M
\end{cases} \tag{35}
$$

$$
y_i = \begin{cases}
(t^3_i - 0.5)\max(1, t^3_M) + 0.5 & \text{if } i = 1,\ldots,M-1 \\
t^3_M & \text{if } i = M
\end{cases} \tag{36}
$$

$$
\begin{cases}
p = (i-1)K/(M-1) + 1 + \left(j - (i-1)K/(M-1) + k\right) \bmod \left(K/(M-1)\right) \\
q = K + 1 + (j - K + k) \bmod (D - K)
\end{cases} \tag{37}
$$


Fig. 11 The Pareto front of MaF11 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively


where the number of decision variables is D = K + L, with K denoting the number of position variables and L denoting the number of distance variables. As shown in Fig. 12, this test problem has a scaled concave PF. Although the PF of this test problem is simple, its decision variables are nonseparably reduced, and its fitness landscape is highly multimodal. This test problem is used to assess whether EMaO algorithms are capable of dealing with scaled concave PFs together with complicated fitness landscapes. Parameter settings are: x ∈ ∏_{i=1}^{D} [0, 2i], K = M − 1, and L = 10.

MaF13 (PF7 [13])

$$
\min \begin{cases}
f_1(\mathbf{x}) = \sin(\tfrac{\pi}{2}x_1) + \frac{2}{|J_1|}\sum_{j\in J_1} y_j^2 \\
f_2(\mathbf{x}) = \cos(\tfrac{\pi}{2}x_1)\sin(\tfrac{\pi}{2}x_2) + \frac{2}{|J_2|}\sum_{j\in J_2} y_j^2 \\
f_3(\mathbf{x}) = \cos(\tfrac{\pi}{2}x_1)\cos(\tfrac{\pi}{2}x_2) + \frac{2}{|J_3|}\sum_{j\in J_3} y_j^2 \\
f_{4,\ldots,M}(\mathbf{x}) = f_1(\mathbf{x})^2 + f_2(\mathbf{x})^{10} + f_3(\mathbf{x})^{10} + \frac{2}{|J_4|}\sum_{j\in J_4} y_j^2
\end{cases} \tag{38}
$$

with

$$
y_i = x_i - 2x_2\sin\left(2\pi x_1 + \frac{i\pi}{D}\right) \quad \text{for } i = 1,\ldots,D \tag{39}
$$

$$
\begin{cases}
J_1 = \{j \mid 3 \le j \le D, \text{ and } j \bmod 3 = 1\} \\
J_2 = \{j \mid 3 \le j \le D, \text{ and } j \bmod 3 = 2\} \\
J_3 = \{j \mid 3 \le j \le D, \text{ and } j \bmod 3 = 0\} \\
J_4 = \{j \mid 4 \le j \le D\}
\end{cases} \tag{40}
$$


Fig. 12 The Pareto front of MaF12 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 13 The Pareto front of MaF13 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively


where the number of decision variables is D = 5. As shown in Fig. 13, this test problem has a concave PF; in fact, the PF of this problem is always a unit sphere regardless of the number of objectives. Although this test problem has a simple PF, its decision variables are nonlinearly linked with the first and second decision variables, thus leading to difficulty in convergence. This test problem is used to assess whether EMaO algorithms are capable of dealing with degenerate PFs and complicated variable linkages. The parameter setting is: x ∈ [0, 1]^2 × [−2, 2]^{D−2}.
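The variable linkage of Eq. (39) is the only subtle part of MaF13; the sketch below (names are ours) makes it explicit.

```python
import numpy as np

def maf13(x, M):
    """Evaluate MaF13 (Eqs. (38)-(40)) with D = 5 decision variables."""
    x = np.asarray(x, dtype=float)
    D = x.size
    j = np.arange(1, D + 1)
    y = x - 2 * x[1] * np.sin(2 * np.pi * x[0] + j * np.pi / D)   # Eq. (39)
    J = [j[(j >= 3) & (j % 3 == 1)], j[(j >= 3) & (j % 3 == 2)],
         j[(j >= 3) & (j % 3 == 0)], j[j >= 4]]                   # Eq. (40)
    corr = [2.0 / Ji.size * np.sum(y[Ji - 1] ** 2) for Ji in J]
    f = np.empty(M)
    f[0] = np.sin(0.5 * np.pi * x[0]) + corr[0]
    f[1] = np.cos(0.5 * np.pi * x[0]) * np.sin(0.5 * np.pi * x[1]) + corr[1]
    f[2] = np.cos(0.5 * np.pi * x[0]) * np.cos(0.5 * np.pi * x[1]) + corr[2]
    f[3:] = f[0] ** 2 + f[1] ** 10 + f[2] ** 10 + corr[3]         # f_4, ..., f_M
    return f
```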

MaF14 (LSMOP3 [16])

$$
\min \begin{cases}
f_1(\mathbf{x}) = x^f_1 \cdots x^f_{M-1}\left(1 + \sum_{j=1}^{M} c_{1,j}\, g_1(\mathbf{x}^s_j)\right) \\
f_2(\mathbf{x}) = x^f_1 \cdots x^f_{M-2}\left(1 - x^f_{M-1}\right)\left(1 + \sum_{j=1}^{M} c_{2,j}\, g_2(\mathbf{x}^s_j)\right) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = x^f_1\left(1 - x^f_2\right)\left(1 + \sum_{j=1}^{M} c_{M-1,j}\, g_{M-1}(\mathbf{x}^s_j)\right) \\
f_M(\mathbf{x}) = \left(1 - x^f_1\right)\left(1 + \sum_{j=1}^{M} c_{M,j}\, g_M(\mathbf{x}^s_j)\right) \\
\mathbf{x} \in [0, 10]^{|\mathbf{x}|}
\end{cases} \tag{41}
$$

with

$$
c_{i,j} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases} \tag{42}
$$

$$
\begin{cases}
g_{2k-1}(\mathbf{x}^s_i) = \frac{1}{N_k}\sum_{j=1}^{N_k}\frac{\eta_1(\mathbf{x}^s_{i,j})}{|\mathbf{x}^s_{i,j}|} \\
g_{2k}(\mathbf{x}^s_i) = \frac{1}{N_k}\sum_{j=1}^{N_k}\frac{\eta_2(\mathbf{x}^s_{i,j})}{|\mathbf{x}^s_{i,j}|}
\end{cases} \quad k = 1,\ldots,\left\lceil\frac{M}{2}\right\rceil \tag{43}
$$

$$
\begin{cases}
\eta_1(\mathbf{x}) = \sum_{i=1}^{|\mathbf{x}|}\left(x_i^2 - 10\cos(2\pi x_i) + 10\right) \\
\eta_2(\mathbf{x}) = \sum_{i=1}^{|\mathbf{x}|-1}\left[100(x_i^2 - x_{i+1})^2 + (x_i - 1)^2\right]
\end{cases} \tag{44}
$$

$$
x^s_i \leftarrow \left(1 + \frac{i}{|\mathbf{x}^s|}\right)(x^s_i - l_i) - x^f_1(u_i - l_i), \quad i = 1,\ldots,|\mathbf{x}^s| \tag{45}
$$


Fig. 14 The Pareto front of MaF14 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 15 The Pareto front of MaF15 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively


where N_k denotes the number of variable subcomponents in each variable group x^s_i with i = 1, …, M, and u_i and l_i are the upper and lower boundaries of the ith decision variable in x^s. Although this test problem has a simple linear PF, its fitness landscape is complicated. First, the decision variables are non-uniformly correlated with different objectives; second, the decision variables have mixed separability, i.e., some of them are separable while others are not. This test problem is mainly used to assess whether EMaO algorithms are capable of dealing with complicated fitness landscapes with mixed variable separability, especially in large-scale cases. Parameter settings are: N_k = 2 and D = 20 × M.


MaF15 (inverted LSMOP8 [16])

$$
\min \begin{cases}
f_1(\mathbf{x}) = \left(1 - \cos(\tfrac{\pi}{2}x^f_1)\cdots\cos(\tfrac{\pi}{2}x^f_{M-2})\cos(\tfrac{\pi}{2}x^f_{M-1})\right)\left(1 + \sum_{j=1}^{M} c_{1,j}\, g_1(\mathbf{x}^s_j)\right) \\
f_2(\mathbf{x}) = \left(1 - \cos(\tfrac{\pi}{2}x^f_1)\cdots\cos(\tfrac{\pi}{2}x^f_{M-2})\sin(\tfrac{\pi}{2}x^f_{M-1})\right)\left(1 + \sum_{j=1}^{M} c_{2,j}\, g_2(\mathbf{x}^s_j)\right) \\
\quad\vdots \\
f_{M-1}(\mathbf{x}) = \left(1 - \cos(\tfrac{\pi}{2}x^f_1)\sin(\tfrac{\pi}{2}x^f_2)\right)\left(1 + \sum_{j=1}^{M} c_{M-1,j}\, g_{M-1}(\mathbf{x}^s_j)\right) \\
f_M(\mathbf{x}) = \left(1 - \sin(\tfrac{\pi}{2}x^f_1)\right)\left(1 + \sum_{j=1}^{M} c_{M,j}\, g_M(\mathbf{x}^s_j)\right) \\
\mathbf{x} \in [0, 1]^{|\mathbf{x}|}
\end{cases} \tag{46}
$$

with

$$
c_{i,j} = \begin{cases} 1 & \text{if } j = i \text{ or } j = i+1 \\ 0 & \text{otherwise} \end{cases} \tag{47}
$$

$$
\begin{cases}
g_{2k-1}(\mathbf{x}^s_i) = \frac{1}{N_k}\sum_{j=1}^{N_k}\frac{\eta_1(\mathbf{x}^s_{i,j})}{|\mathbf{x}^s_{i,j}|} \\
g_{2k}(\mathbf{x}^s_i) = \frac{1}{N_k}\sum_{j=1}^{N_k}\frac{\eta_2(\mathbf{x}^s_{i,j})}{|\mathbf{x}^s_{i,j}|}
\end{cases} \quad k = 1,\ldots,\left\lceil\frac{M}{2}\right\rceil \tag{48}
$$

$$
\begin{cases}
\eta_1(\mathbf{x}) = \sum_{i=1}^{|\mathbf{x}|}\frac{x_i^2}{4000} - \prod_{i=1}^{|\mathbf{x}|}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 \\
\eta_2(\mathbf{x}) = \sum_{i=1}^{|\mathbf{x}|} x_i^2
\end{cases} \tag{49}
$$

$$
x^s_i \leftarrow \left(1 + \cos\left(\frac{0.5\pi i}{|\mathbf{x}^s|}\right)\right)(x^s_i - l_i) - x^f_1(u_i - l_i), \quad i = 1,\ldots,|\mathbf{x}^s| \tag{50}
$$

where N_k denotes the number of variable subcomponents in each variable group x^s_i with i = 1, …, M, and u_i and l_i are the upper and lower boundaries of the ith decision variable in x^s. Although this test problem has a simple convex PF, its fitness landscape is complicated. First, the decision variables are non-uniformly correlated with different objectives; second, the decision variables have mixed separability, i.e., some of them are separable while others are not. Different from MaF14, this test problem has nonlinear (instead of linear) variable linkages on the PS, which further increases the difficulty. This test problem is mainly used to assess whether EMaO algorithms are capable of dealing with complicated fitness landscapes with mixed variable separability, especially in large-scale cases. Parameter settings are: N_k = 2 and D = 20 × M (Figs. 14, 15).

Experimental setup

To conduct benchmark experiments using the proposed test suite, users may follow the experimental setup given below.

General settings

• Number of objectives (M): 5, 10, 15
• Maximum population size¹: 25 × M
• Maximum number of fitness evaluations (FEs)²: max{100,000, 10,000 × D}
• Number of independent runs: 31
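These settings fix the evaluation budget as a function of M and D only; the small helper below (ours, for bookkeeping) makes the numbers concrete.

```python
def suggested_setup(M, D):
    """Population size and evaluation budget implied by the general settings above."""
    population_size = 25 * M
    max_fitness_evaluations = max(100_000, 10_000 * D)
    return population_size, max_fitness_evaluations

# e.g., the 10-objective MaF14/MaF15 instances have D = 20 * M = 200 variables:
print(suggested_setup(10, 200))   # (250, 2000000)
```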

Performance metrics

• Inverted generational distance (IGD): Let P∗ be a set of uniformly distributed points on the Pareto front, and let P be an approximation to the Pareto front. The inverted generational distance between P∗ and P can be defined as

$$
\mathrm{IGD}(P^*, P) = \frac{\sum_{v \in P^*} d(v, P)}{|P^*|}, \tag{51}
$$

where d(v, P) is the minimum Euclidean distance from point v to set P. The IGD metric is able to measure both diversity and convergence of P if |P∗| is large enough, and a smaller IGD value indicates a better performance. In this test suite, we suggest 10,000 uniformly distributed reference points sampled on the true Pareto front³ for each test instance.
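IGD is straightforward to implement once the reference set P∗ is available; the following sketch computes Eq. (51) directly (a naive O(|P∗| · |P|) distance matrix, adequate for the suggested set sizes; the function name is ours).

```python
import numpy as np

def igd(ref_set, approx):
    """Inverted generational distance, Eq. (51): ref_set is P*, approx is P."""
    ref_set, approx = np.asarray(ref_set), np.asarray(approx)
    # pairwise Euclidean distances, shape (|P*|, |P|)
    dist = np.linalg.norm(ref_set[:, None, :] - approx[None, :, :], axis=2)
    return dist.min(axis=1).mean()
```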

• Hypervolume (HV): Let y∗ = (y∗₁, …, y∗_M) be a reference point in the objective space that is dominated by all Pareto optimal solutions, and let P be the approximation to the Pareto front. The HV value of P (with regard to y∗) is the volume of the region which is dominated by P and dominates y∗. In this test suite, the objective vectors in P are normalized using $\bar{f}_i^j = f_i^j / (1.1 \times y_i^{\mathrm{nadir}})$, where $f_i^j$ is the ith dimension of the jth objective vector and $y_i^{\mathrm{nadir}}$ is the ith dimension of the nadir point of the true Pareto front.⁴ Then y∗ = (1, …, 1) is used as the reference point for the normalized objective vectors in the HV calculation.
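Exact hypervolume computation is costly for 10 or 15 objectives; the sketch below applies the normalization just described and then estimates the HV of the normalized set by Monte Carlo sampling. This estimator is our illustration, not the calculation method used by the platform.

```python
import numpy as np

def normalized_hv_estimate(P, nadir, n_samples=10_000, seed=1):
    """Normalize P by 1.1 * nadir, then estimate the hypervolume dominated by P
    and dominating the reference point (1, ..., 1) by uniform sampling of [0, 1]^M."""
    rng = np.random.default_rng(seed)
    Pn = np.asarray(P, dtype=float) / (1.1 * np.asarray(nadir, dtype=float))
    samples = rng.random((n_samples, Pn.shape[1]))
    # a sample is dominated if some solution is no worse in every objective
    dominated = (Pn[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean()    # fraction of the unit cube that is dominated
```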

¹ The size of the final population/archive must be smaller than the given maximum population size; otherwise, a compulsory truncation will be performed in the final statistics for fair comparisons.
² Regardless of the number of objectives, every evaluation of the whole objective set is counted as one FE.
³ The specific number of reference points for IGD calculations can vary a bit due to the different geometries of the Pareto fronts. All reference point sets can be automatically generated using the software platform introduced in Sect. "Software platform".


Fig. 16 The GUI in PlatEMO for this test suite



Software platform

All the benchmark functions have been implemented in MATLAB and embedded in a recently developed software platform, PlatEMO.⁵ PlatEMO is an open-source MATLAB-based platform for evolutionary multi- and many-objective optimization, which currently includes more than 50 representative algorithms and more than 100 benchmark functions, along with a variety of widely used performance indicators. Moreover, PlatEMO provides a user-friendly graphical user interface (GUI), which enables users to easily perform experimental settings and algorithmic configurations, and to obtain statistical experimental results in one click.

⁴ The nadir points can be automatically generated using the software platform introduced in Sect. "Software platform".
⁵ PlatEMO can be downloaded at http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.

In particular, as shown in Fig. 16, we have tailored a new GUI in PlatEMO for this test suite, such that participants are able to directly obtain tables and figures comprising the statistical experimental results for the test suite. To conduct the experiments, the only thing to be done by participants is to write the candidate algorithms in MATLAB and embed them into PlatEMO. A detailed introduction to PlatEMO, including how to embed new algorithms, can be found in the user manual attached to the source code of PlatEMO [26]. Once a new algorithm is embedded in PlatEMO, the user will be able to select it and execute it on the GUI shown in Fig. 16. The statistical results will then be displayed in the figures and tables on the GUI, and the corresponding experimental result (i.e., the final population and its performance indicator values) of each run will be saved to a .mat file.

Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Grant 61590922, and the Engineering and Physical Sciences Research Council of the UK under Grant EP/M017869/1, Grant EP/K001310/1 and Grant EP/K001523/1.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


References

1. Li B, Li J, Tang K, Yao X (2015) Many-objective evolutionary algorithms: a survey. ACM Comput Surv 48(1):13
2. Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17(5):721–736
3. Zhang X, Tian Y, Jin Y (2015) A knee point driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776
4. Wang H, Jiao L, Yao X (2015) Two_Arch2: an improved two-archive algorithm for many-objective optimization. IEEE Trans Evol Comput 19(4):524–541
5. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
6. Li K, Zhang Q, Kwong S (2015) An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans Evol Comput 19(5):694–716
7. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791
8. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
9. Deb K, Thiele L, Laumanns M, Zitzler E (2005) Scalable test problems for evolutionary multiobjective optimization. In: Abraham A, Jain L, Goldberg R (eds) Evolutionary multiobjective optimization. Theoretical advances and applications. Springer, Berlin, pp 105–145
10. Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506
11. Köppen M, Yoshida K (2007) Substitute distance assignments in NSGA-II for handling many-objective optimization problems. In: Evolutionary multi-criterion optimization (EMO), pp 727–741
12. Ishibuchi H, Hitotsuyanagi Y, Tsukamoto N, Nojima Y (2010) Many-objective test problems to visually examine the behavior of multiobjective evolution in a decision space. In: International Conference on Parallel Problem Solving from Nature (PPSN), pp 91–100
13. Saxena D, Zhang Q, Duro J, Tiwari A (2011) Framework for many-objective test problems with both simple and complicated Pareto-set shapes. In: Evolutionary multi-criterion optimization (EMO), pp 197–211
14. Li M, Yang S, Liu X (2014) A test problem for visual investigation of high-dimensional multi-objective search. In: IEEE Congress on Evolutionary Computation (CEC), pp 2140–2147
15. Cheung Y-M, Gu F, Liu H-L (2016) Objective extraction for many-objective optimization problems: algorithm and test problems. IEEE Trans Evol Comput 20(5):755–772
16. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) Test problems for large-scale multiobjective and many-objective optimization. IEEE Trans Cybern (in press)
17. Masuda H, Nojima Y, Ishibuchi H (2016) Common properties of scalable multiobjective problems and a new framework of test problems. In: IEEE Congress on Evolutionary Computation (CEC). IEEE, pp 3011–3018
18. Cheng R, Jin Y, Narukawa K (2015) Adaptive reference vector generation for inverse model based evolutionary multiobjective optimization with degenerate and disconnected Pareto fronts. In: Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization. Springer, New York, pp 127–140
19. Brockhoff D, Zitzler E (2009) Objective reduction in evolutionary multiobjective optimization: theory and applications. Evol Comput 17(2):135–166
20. Li M, Yang S, Liu X (2016) Pareto or non-Pareto: bi-criterion evolution in multi-objective optimization. IEEE Trans Evol Comput 20(5):645–665
21. Saxena D, Duro J, Tiwari A, Deb K, Zhang Q (2013) Objective reduction in many-objective optimization: linear and nonlinear algorithms. IEEE Trans Evol Comput 17(1):77–99
22. Ishibuchi H, Masuda H, Nojima Y (2016) Pareto fronts of many-objective degenerate test problems. IEEE Trans Evol Comput 20(5):807–813
23. Jain H, Deb K (2014) An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE Trans Evol Comput 18(4):602–622
24. Deb K, Saxena DK (2006) Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. In: IEEE Congress on Evolutionary Computation (CEC), pp 3353–3360
25. Li M, Grosan C, Yang S, Liu X, Yao X (2017) Multi-line distance minimization: a visualized many-objective test problem suite. IEEE Trans Evol Comput (in press)
26. Tian Y, Cheng R, Zhang X, Jin Y (2016) PlatEMO: a MATLAB platform for evolutionary multi-objective optimization. IEEE Comput Intell Mag (under review)
