
Page 1

Differential Evolution with

Rank-based Adaptive Strategy Selection

Álvaro Fialho, Marc Schoenauer, Michèle Sebag

Orsay, France

Page 2

Context Credit Assignment Strategy Selection Experiments Conclusion

Differential Evolution

Population-based EA

Each individual is used to generate a new offspring

Mutation: weighted differences between several individuals

Crossover: mix parts of the mutated and the original individual

Mutation is always applied; crossover is applied with rate (1 − CR)

Replacement: one-to-one; the offspring replaces its parent if it is better

User-defined parameters

Population size NP

Mutation scaling factor F

Crossover rate CR

Which mutation strategies to apply?
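The generation loop just described can be sketched as follows. This is a minimal illustration, not the authors' code; the function and parameter names are my own, and the crossover follows the slides' convention that CR = 1.0 means no crossover:

```python
import numpy as np

def de_generation(pop, fit, objective, F=0.5, CR=1.0, rng=None):
    """One DE generation with rand/1 mutation (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    NP, dim = pop.shape
    for i in range(NP):
        # mutation (rand/1): weighted difference between random individuals
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # crossover: each component comes from the parent with prob. (1 - CR),
        # so CR = 1.0 yields offspring = mutant ("no crossover")
        u = np.where(rng.random(dim) < CR, v, pop[i])
        # one-to-one replacement: offspring replaces its parent if better
        fu = objective(u)
        if fu <= fit[i]:
            pop[i], fit[i] = u, fu
    return pop, fit
```

With one-to-one replacement the best fitness in the population is monotone non-increasing over generations.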

Álvaro Fialho, Marc Schoenauer, Michèle Sebag (MSR-INRIA) DE with Rank-based Adaptive Strategy Selection

Page 3

DE Mutation Strategies

Around a dozen well-known strategies exist

As in other EAs, complex and problem-dependent choice

Off-line tuning could be used to find the best one

Based on some statistics over several runs for each strategy

Expensive, and yields only a single, static best strategy

Best strategy depends on the region of the search space

Should be continuously adapted, while solving the problem

=⇒ Adaptive Strategy Selection

Page 4

Adaptive Operator/Strategy Selection

Objective

Autonomously select the operator to be applied among the available ones, based on its impact on the search so far.

Page 5

How to Measure the Impact of an Operator Application?

Very common: Fitness Improvement

Which statistics to use?

Instantaneous value likely to be unstable

Average value over a Window

Extreme value over a Window [Fialho et al., 2008]

But...

Ranges of rewards depend on the problem

Some normalization methods were proposed [Fialho et al., 2009, Gong et al., 2010]

Still problem-dependent as long as raw fitness values are considered...

Consequently, the AOS itself also becomes problem-dependent
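The window-based statistics above can be sketched as follows (an illustrative sketch; the class and method names are mine, not the paper's):

```python
from collections import deque

class WindowCredit:
    """Sliding-window credit assignment for a single operator."""
    def __init__(self, size=50):
        # keep only the last `size` fitness improvements
        self.window = deque(maxlen=size)

    def record(self, delta_f):
        # no reward for deteriorations
        self.window.append(max(delta_f, 0.0))

    def average(self):
        # "Average value over a Window"
        return sum(self.window) / len(self.window) if self.window else 0.0

    def extreme(self):
        # "Extreme value over a Window" [Fialho et al., 2008]
        return max(self.window) if self.window else 0.0
```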

Page 6

Rank-based Rewarding

Area Under ROC Curve (AUC)

In ML: a criterion to compare binary classifiers

In AOS, 1 operator versus others [Fialho et al., 2010]

Position r is assigned the rank-value D^r · (W − r)

Parameter D is the decay factor, fixed at 0.5

Size of the segment = assigned rank-value

Example without decay factor: (+ − + + − − [− − +] + − − +)

Comparison-based Rewarding

Ranks of the fitness values, rather than fitness improvements

Invariant with respect to monotonic transformations of the fitness
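One way to compute the decayed, rank-based AUC reward described above. This is a sketch under the stated scheme (positions ranked best to worst, position r weighted by D^r · (W − r), operator applications as "positives"); the exact bookkeeping in the paper may differ:

```python
def auc_credit(ranked_ops, op, W=None, D=0.5):
    """AUC reward of `op` given the operators that produced the last
    applications, ordered from best to worst fitness rank."""
    if W is None:
        W = len(ranked_ops)
    # rank-value of position r: D^r * (W - r); segment size = rank-value
    w = [D ** r * (W - r) for r in range(len(ranked_ops))]
    pos = sum(wi for wi, o in zip(w, ranked_ops) if o == op)
    neg = sum(wi for wi, o in zip(w, ranked_ops) if o != op)
    if pos == 0 or neg == 0:
        return 0.0
    area, height = 0.0, 0.0
    for wi, o in zip(w, ranked_ops):
        if o == op:
            height += wi          # step up for an application of `op`
        else:
            area += height * wi   # step right for any other operator
    return area / (pos * neg)     # normalized to [0, 1]
```

With D = 1 (no decay), an operator that produced all the best-ranked offspring gets reward 1, and one that produced only the worst gets 0.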

Page 7

Op. Selection: A (kind of) Multi-Armed Bandit problem

Multi-Armed Bandits

Several “arms”; at time t, gambler plays arm j

reward at time t: r_{j,t} = 1 with some probability, 0 otherwise

Goal: maximize cumulated reward

Upper Confidence Bound (UCB)

Asymptotic optimality guarantees [Auer et al., 2002]

At time t, choose arm j maximizing:

q_{j,t} = p_{j,t} + C · sqrt( 2 log(Σ_k n_{k,t}) / n_{j,t} ), where

p_{j,t} : empirical reward estimate of arm j
n_{j,t} : number of times arm j has been chosen

Page 8

Op. Selection with Multi-Armed Bandits: the true story

From the original UCB

q_{j,t} = p_{j,t} + C · sqrt( 2 log(Σ_k n_{k,t}) / n_{j,t} )   (score)

p_{j,t+1} = (n_{j,t} · p_{j,t} + r_{j,t}) / (n_{j,t} + 1)   (empirical estimate)

n_{j,t+1} = n_{j,t} + 1   (# times used)
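The score and its incremental update can be sketched together (illustrative Python, not the authors' code; each unplayed arm is tried once before the UCB formula applies):

```python
import math

class UCB:
    """UCB operator selection with the incremental update above."""
    def __init__(self, n_ops, C=1.0):
        self.C = C
        self.p = [0.0] * n_ops   # empirical reward estimates
        self.n = [0] * n_ops     # times each arm was chosen

    def select(self):
        for j, nj in enumerate(self.n):   # play every arm once first
            if nj == 0:
                return j
        total = sum(self.n)
        scores = [self.p[j] + self.C * math.sqrt(2 * math.log(total) / self.n[j])
                  for j in range(len(self.n))]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, j, r):
        # p_{j,t+1} = (n_{j,t} p_{j,t} + r_{j,t}) / (n_{j,t} + 1)
        self.p[j] = (self.n[j] * self.p[j] + r) / (self.n[j] + 1)
        self.n[j] += 1
```

On a toy two-arm problem where arm 0 always pays 1 and arm 1 always pays 0, the selector concentrates on arm 0 while still occasionally exploring arm 1.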

Dynamics

MAB: reward distribution assumed to be stationary

AOS: depends on the current region of the search space

UCB would take too long to adapt to a new best operator

How to deal with such dynamics?

Page 9

AUC - Multi-Armed Bandit

AUC-MAB

Rank-based rewarding, no problem-dependency

Comparison-based if ranks over fitness values

In the original MAB, p is the average of all received rewards

Takes too long to adapt to changes

AUC is already a continuously up-to-date aggregation

=⇒ directly use AUC in bandit equation

q_{j,t} = AUC_{j,t} + C · sqrt( 2 log(Σ_k n_{k,t}) / n_{j,t} )

AUC incorporates the behavior of all operators (dynamics)
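The resulting score is the same UCB expression with the empirical estimate p replaced directly by the current AUC value (a one-line sketch; argument names are illustrative):

```python
import math

def auc_bandit_score(auc_j, n, j, C=0.5):
    """AUC-Bandit score of operator j: AUC replaces the empirical estimate p.

    auc_j : current AUC reward of operator j
    n     : list with the number of times each operator was applied
    """
    return auc_j + C * math.sqrt(2 * math.log(sum(n)) / n[j])
```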

Page 10

Experimental Settings

Differential Evolution

Population size NP = 10 × DIM

Mutation scaling factor F = 0.5

Crossover rate CR = 1.0, i.e., no crossover

No tuning of these parameters, focus on AOS

Mutation Strategies

1. rand/1: v_i = x_{r1} + F · (x_{r2} − x_{r3})

2. rand/2: v_i = x_{r1} + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5})

3. rand-to-best/2: v_i = x_{r1} + F · (x_{best} − x_{r1}) + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5})

4. current-to-rand/1: v_i = x_i + F · (x_{r1} − x_i) + F · (x_{r2} − x_{r3})
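The four strategies can be written directly from the formulas above (a NumPy sketch; `best` is the index of the current best individual, and the random indices r1..r5 are drawn distinct from i):

```python
import numpy as np

def mutate(pop, i, best, strategy, F=0.5, rng=None):
    """Mutant vector v_i for individual i under the given DE strategy."""
    rng = rng or np.random.default_rng()
    r1, r2, r3, r4, r5 = rng.choice([j for j in range(len(pop)) if j != i],
                                    size=5, replace=False)
    x = pop
    if strategy == "rand/1":
        return x[r1] + F * (x[r2] - x[r3])
    if strategy == "rand/2":
        return x[r1] + F * (x[r2] - x[r3]) + F * (x[r4] - x[r5])
    if strategy == "rand-to-best/2":
        return (x[r1] + F * (x[best] - x[r1])
                + F * (x[r2] - x[r3]) + F * (x[r4] - x[r5]))
    if strategy == "current-to-rand/1":
        return x[i] + F * (x[r1] - x[i]) + F * (x[r2] - x[r3])
    raise ValueError(strategy)
```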

Page 11

Baseline Methods

Dynamic Multi-Armed Bandit (DMAB) [Da Costa et al., 2008]

Original UCB MAB algorithm

Dynamics: Page-Hinkley change-detection test (threshold γ)

Adaptive Pursuit (AP) [Thierens, 2005]

Winner-take-all strategy to update the operator rates

Best operator with rate pmax , others pmin

DMAB and AP rewarded by Extreme values

Probability Matching (PM-AdapSS-DE [Gong et al., 2010])

Operator rates are proportional to their estimated qualities

Average of normalized fitness improvements
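The Probability Matching rule can be sketched as follows (illustrative; p_min = 0.05 is an example value, and `quality` would hold the averaged normalized fitness improvements):

```python
def pm_rates(quality, p_min=0.05):
    """Operator application rates proportional to quality estimates,
    with a minimal rate p_min so no operator is ever starved."""
    K = len(quality)
    total = sum(quality)
    if total == 0:
        return [1.0 / K] * K   # no information yet: uniform rates
    return [p_min + (1 - K * p_min) * q / total for q in quality]
```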

Page 12

Meta-Parameters

Summary

Strategy selection:

PM: minimal probability p_min, learning rate α
AP: minimal probability p_min, learning rate α, adaptation rate β
MAB: scaling factor C
DMAB: scaling factor C, PH threshold γ
AUC-B: scaling factor C, decay factor D (fixed at 0.5)

Credit assignment: sliding window size W and type (Avg, Extreme, AUC)

Tuning

Tuned off-line by F-RACE [Birattari et al., 2002]

1 racing lap = 1 run over all functions
Elimination after each lap, based on a Friedman test at 95% confidence

Comparisons of AOS algorithms using best configurations

Page 13

Pairwise comparisons of the AUC-Bandit with . . .

[Figure: empirical CDFs of log10(FEvals(A1)/FEvals(A0)), AUC-Bandit vs. base techniques (left) and vs. uniform & adaptive techniques (right); legend counts as in the original plots]

separable (f1-5): base DE1 3/3, DE2 3/3, DE3 3/3, DE4 3/0; adaptive unif 3/3, pm 3/3, DMAB 3/3, AP 3/3
moderate (f6-9): base DE1 4/4, DE2 4/3, DE3 4/4, DE4 4/0; adaptive unif 4/4, pm 4/4, DMAB 4/4, AP 4/4
ill-conditioned (f10-14): base DE1 5/5, DE2 5/5, DE3 5/5, DE4 5/0; adaptive unif 5/5, pm 5/5, DMAB 5/5, AP 5/5

Page 14

Pairwise comparisons of the AUC-Bandit with . . .

[Figure: empirical CDFs of log10(FEvals(A1)/FEvals(A0)), AUC-Bandit vs. base techniques (left) and vs. uniform & adaptive techniques (right); legend counts as in the original plots]

multi-modal (f15-19): base DE1 2/2, DE2 2/0, DE3 2/2, DE4 2/0; adaptive unif 2/2, pm 2/2, DMAB 2/2, AP 2/2
weak structure (f20-24): base DE1 1/1, DE2 1/1, DE3 1/1, DE4 1/0; adaptive unif 1/1, pm 1/1, DMAB 1/1, AP 1/1
all functions (f1-24): base DE1 15/15, DE2 15/12, DE3 15/15, DE4 15/0; adaptive unif 15/15, pm 15/15, DMAB 15/15, AP 15/15

Page 15

Summary

MAB Multi-Armed Bandit

provides guarantees for an optimal Exploration vs. Exploitation (EvE) balance in a static setting

DMAB MAB + Page-Hinkley change-detection test

Very strong . . . if the PH threshold γ is well-tuned

C and γ are extremely problem-dependent (reward = fitness improvement).

AUC-MAB: MAB with p = Area Under the Curve

AUC rank-based: much more robust w.r.t. C

One other parameter: window size W (decay factor D ≡ 0.5)

C = 0.5, D = 0.5, W = 50: the best configuration across very different situations

AUC-MAB is comparison-based when ranking fitness values F instead of improvements ∆F

Page 16

Discussion and Perspectives

Discussion

Fixed number of hyper-parameters, while the user-defined parameters grow with the number of operators used

Operator type, application rate, and underlying parameters.

In real problems, optimal behavior is not known

X-MAB better than fixed strategies and known adaptive approaches

Goal: to assess the AOS techniques, rather than to “compete” on BBOB

Better tuning and enhanced versions of DE could be used

Further Work

Further assessment

Use within other meta/hyper-heuristics (GA, DE, ??); SAT, real-world problems, ...

Real-world problems are often multi-modal

Diversity should also be considered for the reward

Page 17

References I

Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002).

Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2/3):235–256.

Birattari, M., Stützle, T., Paquete, L., and Varrentrapp, K. (2002).

A racing algorithm for configuring metaheuristics. In Proc. GECCO’02.

Da Costa, L., Fialho, A., Schoenauer, M., and Sebag, M. (2008).

Adaptive operator selection with dynamic multi-armed bandits. In M. Keijzer et al., editor, Proc. GECCO’08, pages 913–920. ACM Press.

Fialho, A., Da Costa, L., Schoenauer, M., and Sebag, M. (2008).

Extreme value based adaptive operator selection. In Proc. PPSN’08.

Fialho, A., Schoenauer, M., and Sebag, M. (2009).

Analysis of adaptive operator selection techniques on the royal road and long k-path problems. In G. Raidl et al., editor, Proc. Genetic and Evolutionary Computation Conference, pages 779–786. ACM.

Fialho, A., Schoenauer, M., and Sebag, M. (2010).

Toward comparison-based adaptive operator selection. In J. Branke et al., editor, Proc. GECCO. ACM Press.

Gong, W., Fialho, A., and Cai, Z. (2010).

Adaptive strategy selection in differential evolution. In J. Branke et al., editor, Proc. GECCO. ACM Press.

Page 18

References II

Maturana, J., Fialho, A., Saubion, F., Schoenauer, M., and Sebag, M. (2009).

Extreme compass and dynamic multi-armed bandits for adaptive operator selection. In Proc. CEC’09.

Thierens, D. (2005).

An adaptive pursuit strategy for allocating operator probabilities. In Beyer, H.-G., editor, Proc. GECCO’05, pages 1539–1546. ACM Press.

Page 19

Differential Evolution with

Rank-based Adaptive Strategy Selection

Álvaro Fialho, Marc Schoenauer, Michèle Sebag

Orsay, France