TRANSCRIPT
Meta-analysis of metaheuristics: Quantifying the effect of adaptiveness in Adaptive Large Neighborhood Search
Renata Turkeš, Kenneth Sörensen, Lars Magnus Hvattum
ENM research seminar, 21 June 2019, Antwerp
Motivation
Question
Does paracetamol alleviate pain?
Answer
Yes
Question
Does homeopathy work?
Answer
No
Question
Does a variable-size tabu list outperform the one of fixed size?
Answer
Don’t know
Question
Is a stochastic acceptance criterion
better than a deterministic one?
Answer
No idea
Lack of knowledge in metaheuristics literature

- We do not look for it: the focus is on the development of novel algorithms and competition (a horse race), not on generating knowledge.
- We do not have the tools/methodology.
A typical paper in metaheuristics literature

Algorithm 1 (variable neighborhood search): Swap + Insert + 2-opt, reactive tabu list, ejection chain perturbation. Result: 0.4% from best-known.
Algorithm 2 (evolutionary algorithm): hybrid intelligent crossover, roulette wheel selection, mutation using Or-opt. Result: 1.2% from best-known.

? ? ?
How to generate knowledge?

STEP 1: Isolate specific effect(s).
STEP 2: Implement the algorithm for one or more problems, and study the effect.
→ problem- and implementation-dependent
STEP 2': Meta-analysis of the effect.
→ problem- and implementation-independent
Meta-analysis = analysis of analyses
Meta-analysis

Definition [MLTA09]: Meta-analysis refers to a systematic review of the literature, wherein statistical techniques are used to integrate and summarize the results of included studies.

Key benefits:
- synthesizes research
- higher statistical power
- more robust (than any individual study)
- can help to identify patterns among study results
- can help to identify potential reasons for discrepant results
An example [GBS+07] of meta-analysis in clinical research
Adaptive Large Neighborhood Search
Adaptive Large Neighborhood Search (ALNS)

LNS = destroy and repair:
1. Find an initial solution.
2. Destroy a part of the solution.
3. Repair the solution.
4. Update the (best) solution if the acceptance criterion is satisfied.
5. If the stopping criterion is met, stop; otherwise, go back to step 2.

ALNS = several destroy and repair heuristics, chosen in adaptive fashion:
1. Find an initial solution; initialize heuristic weights.
2. Select a destroy and a repair heuristic according to their weights.
3. Destroy a part of the solution.
4. Repair the solution.
5. Update the (best) solution if the acceptance criterion is satisfied.
6. Update heuristic weights according to past performance.
7. If the stopping criterion is met, stop; otherwise, go back to step 2.
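The ALNS loop described above can be sketched in Python as follows. This is a minimal illustration, not the implementation of any of the cited papers: the toy problem, the operators (`destroy_random`, `repair_greedy`) and the greedy acceptance criterion are all hypothetical, and the adaptive weight update is left out, so with these fixed weights the sketch behaves like LNS with several operators.

```python
import random

def alns(initial, destroy_ops, repair_ops, objective, iterations=200, seed=0):
    """Minimal ALNS-style loop (minimization); weights stay fixed in this sketch."""
    rng = random.Random(seed)
    weights = {op: 1.0 for op in destroy_ops + repair_ops}

    def select(ops):
        # roulette-wheel selection proportional to current weights
        total = sum(weights[op] for op in ops)
        pick, acc = rng.uniform(0, total), 0.0
        for op in ops:
            acc += weights[op]
            if acc >= pick:
                return op
        return ops[-1]

    current = best = initial
    for _ in range(iterations):
        destroyed = select(destroy_ops)(current, rng)   # destroy a part of the solution
        candidate = select(repair_ops)(destroyed, rng)  # repair the solution
        if objective(candidate) <= objective(current):  # acceptance criterion
            current = candidate
        if objective(current) < objective(best):        # update best solution
            best = current
        # (an adaptive variant would update the weights here, by past performance)
    return best

# Toy problem: order points on a line to minimize the route length
# (the optimum is simply the sorted order).
def length(route):
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

def destroy_random(route, rng):
    r = route[:]
    removed = [r.pop(rng.randrange(len(r))) for _ in range(2)]
    return r, removed

def repair_greedy(partial, rng):
    r, removed = partial
    for x in removed:
        i = min(range(len(r) + 1), key=lambda i: length(r[:i] + [x] + r[i:]))
        r.insert(i, x)
    return r

route = [3, 1, 4, 1, 5, 9, 2, 6]
best = alns(route, [destroy_random], [repair_greedy], length)
```

Since the best solution is only ever replaced by a strictly better one, `length(best)` can never exceed the initial route length.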
An example [RP06] of ALNS for the Vehicle Routing Problem

Destroy heuristics: Shaw removal, Random removal, Worst removal.
Repair heuristics: Greedy heuristic, Regret heuristic.
The A in ALNS

Adaptive layer:

w_h^(s+1) = (1 − r) · w_h^s + r · π_h / θ_h

- w_h^s: weight of a (destroy or repair) heuristic h in segment s
- δ: score added to a heuristic in each iteration in which it has been called:
  δ1 if the solution is a new global best,
  δ2 if the solution is better than the current one and has not been accepted before,
  δ3 if the solution is worse than the current one but is accepted, and has not been accepted before
- π_h: score of heuristic h accumulated during the last segment
- θ_h: number of times h was used during the last segment
- r: reaction factor

! If r = 0, the weights remain unchanged.
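The weight update can be written down directly. A small sketch; the guard for θ_h = 0 (a heuristic never called in the last segment keeps its weight) is a common convention and an assumption here, not something stated on the slide:

```python
def update_weight(w, pi, theta, r):
    """One segment-end update: w_h^(s+1) = (1 - r) * w_h^s + r * pi_h / theta_h."""
    if theta == 0:          # heuristic unused in the last segment: keep its weight
        return w
    return (1 - r) * w + r * pi / theta

# r = 0 leaves the weights unchanged, i.e. ALNS degenerates into (non-adaptive) LNS:
print(update_weight(10.0, pi=45, theta=9, r=0.0))  # -> 10.0
# r > 0 pulls the weight toward the heuristic's average score per call (45 / 9 = 5):
print(update_weight(10.0, pi=45, theta=9, r=0.2))  # -> 9.0
```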
Research question

How much does the heuristic performance improve with the adaptive layer (compared to fixed heuristic weights)?

- {I_1, I_2, ..., I_N}: set of available (maximization) problem instances
- x*_r(I): the best solution for instance I found by ALNS (r ≠ 0)
- x*_0(I): the best solution found by (¬A)LNS (r = 0)
- f(x*_r(I)), f(x*_0(I)): average objective function values across a number of runs

A = (1/N) · Σ_{I ∈ {I_1, ..., I_N}} [f(x*_r(I)) − f(x*_0(I))] / f(x*_0(I)) = ?
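Per study, A is just the average relative improvement over the instances. A minimal sketch, expressed as a percentage as in the results reported later in the talk; the numbers below are made up:

```python
def effect_A(f_r, f_0):
    """Average % improvement of ALNS (r != 0) over (not-A)LNS (r = 0),
    for a maximization problem: positive A means the adaptive layer helped.
    f_r[i], f_0[i] are the average objective values f(x*_r(I_i)), f(x*_0(I_i))."""
    assert len(f_r) == len(f_0) > 0
    return 100 * sum((fr - f0) / f0 for fr, f0 in zip(f_r, f_0)) / len(f_0)

# Two made-up instances: +5% on the first, no change on the second -> A = 2.5
print(effect_A([105.0, 50.0], [100.0, 50.0]))  # -> 2.5
```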
Meta-analysis of A in ALNS
Identification and selection of studies

Identification:
- N=129 records with "Adaptive Large Neighborhood Search" in the title identified through Google Scholar
- N=5 records identified through e-mail correspondence with researchers

Screening:
- N=134 records screened
- N=29 records excluded:
  - N=4 duplicates
  - N=1 no ALNS in the title
  - N=16 citations only
  - N=4 could not find article
  - N=4 not in English

Eligibility:
- N=105 full-text articles assessed for eligibility
- N=34 full-text articles excluded:
  - N=20 weight adjustment mechanism not described
  - N=14 no numerical parameter which can turn off the adaptive layer
- N=71 articles fit the criteria:
  - N=3 data in the paper
  - N=68 data requested from authors

Included:
- N=13 data available, included in the meta-analysis
Properties of included studies

- [COCK18] Profitable tour problem with simultaneous pickup and delivery services: |D|=6, |R|=4, (σ1, σ2, σ3)=(33, 9, 0), r=0.20, S=−; stopping criterion: 900 000 iterations
- [DCGR16] Multi-period vehicle routing problem: |D|=9, |R|=3, σ=(1, 1, 1, 2), r=0.25, S=40; stopping criterion: 25 000 iterations
- [KC16] Electric vehicle routing problem with time windows: |D|=15, |R|=8, σ=(25, 20, 21), r=0.25, S=125; stopping criterion: 25 000 iterations
- [KHS17] Curriculum-based course timetabling problem: |D|=10, |R|=2, σ=(30, 15, 18), r=0.16; stopping criterion: instance-dependent number of iterations
- [Man16] Multi depot multi period vehicle routing problem with a heterogeneous fleet: |D|=4, |R|=1, σ=−, r=0.01, S=0.5; stopping criterion: 1 100 iterations, or 10 iterations without improvement
Properties of included studies (cont.)

- [MLAL17] Rural postman problem with time windows: |D|=9, |R|=9, σ=(15, 8, 2), r=0.7, S=10; stopping criterion: 25 000 iterations
- [SRH18] Capacitated vehicle routing problem: |D|=3, |R|=1; stopping criterion: 150 000 iterations
- [SRH18] Capacitated minimum spanning tree problem: |D|=2, |R|=2; stopping criterion: 150 000 iterations
- [SRH18] Quadratic assignment problem: |D|=1, |R|=3; stopping criterion: 150 000 iterations
- [SdC18] Design of electronic circuits: |D|=6, |R|=5, σ=(15, 25, 5), r=0.66, S=−; stopping criterion: 0.01 temperature, lower bound on solution value, 100 iterations without improvement, or 800 iterations
Properties of included studies (cont.)

- [San19] Cutwidth minimization problem: |D|=6, |R|=3, σ=(50, 15, 25), r=0.85, S=15; stopping criterion: 0.01 temperature, or 3000 iterations
- [TVL+18] Fleet size and mix dial-a-ride problem with reconfigurable vehicle capacity: |D|=2, |R|=2; stopping criterion: instance-dependent computation time (16, 40 or 100 minutes)
- [TS18] Job shop, quadratic assignment, resource-constrained project scheduling, bin packing, travelling salesman, vehicle routing with time windows, cutstock, graph colouring, lot sizing, and warehouse location problems: |D|=30, |R|=36, σ=Δf/Δt, r=0.05, S=1; stopping criterion: 240 seconds
Bias

Bias in individual studies:
- number of iterations is a common ALNS stopping criterion
- (¬A)LNS with r = 0 < (¬A)LNS without A-components
- equiprobable (¬A)LNS often the worst non-adaptive variant
- ALNS cannot improve over (¬A)LNS for simple instances

Bias across studies:
- publication bias
- search bias
- selection bias
A step-by-step example of meta-analysis (maximization problem)

Study S1:
- Instance I11: runs (f(x*_0), f(x*_r)) = (856.0, 863.0) and (854.0, 870.0); averages 855.00 and 866.50; improvement 100 × [f(x*_r) − f(x*_0)] / f(x*_0) = 1.35
- Instance I12: runs (40.0, 39.0) and (38.0, 39.5); averages 39.00 and 39.25; improvement 0.64
- Effect A1 = 1.00, standard deviation σ1 = 0.50, number of instances N1 = 2, within-study variance V1 = σ1²/N1 = 0.13

Study S2:
- Instance I21: runs (1200.0, 1208.0) and (1200.0, 1205.0); averages 1200.00 and 1206.5; improvement 0.54
- Instance I22: runs (10.0, 10.5) and (11.0, 11.7); averages 10.5 and 11.1; improvement 5.71
- Instance I23: runs (301.0, 299.0) and (300.0, 300.0); averages 300.5 and 299.5; improvement −0.33
- Effect A2 = 1.97, standard deviation σ2 = 3.26, number of instances N2 = 3, within-study variance V2 = σ2²/N2 = 3.55

Between-study variance T² = 0.
Weights W_i = 1 / (V_i + T²): W1 = 7.93, W2 = 0.28.
Normalized weights: 0.97 and 0.03.
Weighted effects (normalized weight × A_i): 0.97 and 0.07.

A = 0.97 + 0.07 = 1.04
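The pooling steps of such an example can be reproduced in a few lines. This sketch follows the columns of the worked example (sample standard deviation, V_i = σ_i²/N_i, inverse-variance weights with between-study variance T²); small differences against slide figures computed from rounded intermediate values are expected:

```python
from statistics import mean, stdev

def pooled_effect(studies, T2=0.0):
    """studies: per-study lists of per-instance % improvements.
    Returns the overall effect A = sum over studies of normalized weight * A_i."""
    effects = [mean(s) for s in studies]                   # A_i
    variances = [stdev(s) ** 2 / len(s) for s in studies]  # V_i = sigma_i^2 / N_i
    weights = [1.0 / (v + T2) for v in variances]          # W_i = 1 / (V_i + T^2)
    total = sum(weights)
    return sum(w / total * a for w, a in zip(weights, effects))

S1 = [1.35, 0.64]          # per-instance % improvements in one study
S2 = [0.54, 5.71, -0.33]   # per-instance % improvements in another study
A = pooled_effect([S1, S2], T2=0.0)   # close to 1.04 when rounded per step
```

Note how the low-variance study S1 dominates the pooled effect, exactly as in the normalized-weight column above.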
Meta-analysis of A in ALNS: results/forest plot

article | effect A (% improvement with the adaptive layer) | norm. weight | % better | % worse
[COCK18] | 2.34 | 0.01378 | 86.32 | 7.69
[DCGR16] | 0.15 | 0.15248 | 95.16 | 1.61
[KC16] | 0.45 | 0.00484 | 25.00 | 8.93
[KHS17] | 6.54 | 0.00031 | 85.71 | 4.76
[Man16] | 1.31 | 0.00945 | 100.00 | 0.00
[MLAL17] | 0.59 | 0.02270 | 8.58 | 2.58
[SRH18] | 0.00 | 0.15697 | 49.06 | 42.14
[SRH18] | 0.01 | 0.15800 | 21.15 | 21.15
[SRH18] | −0.02 | 0.13638 | 21.50 | 18.69
[SdC18] | 0.00 | 0.15873 | 10.93 | 10.45
[San19] | −0.03 | 0.15384 | 15.67 | 15.90
[TVL+18] | −0.50 | 0.03247 | 21.43 | 78.57
[TS18] | 15.46 | 0.00003 | 75.00 | 10.00
average | 0.07 | 1.00000 | − | −

On average, adaptiveness improves the algorithmic performance by 0.07%.
Left to do

- Take a closer look at the ALNS in the included studies.
- Sensitivity or sub-group analyses to identify patterns.
- Meta-regression?
Concluding remarks
To adapt or not to adapt?

The adaptive layer:
- often does not (significantly) improve the ALNS performance!
- cannot compensate for a poor choice of heuristics.
- might be useful if some heuristics target a particular subset of problem instances.
Take-aways

- Focus on understanding and knowledge!
- Meta-analysis is a useful tool to synthesize a body of research and obtain more general and robust insights.
Thank you for the fun 5 years at the 5th floor (and beyond)!
References

[COCK18] Hayet Chentli, Rachid Ouafi, and Wahiba Ramdane Cherif-Khettaf. A selective adaptive large neighborhood search heuristic for the profitable tour problem with simultaneous pickup and delivery services. RAIRO-Operations Research 52 (2018), no. 4, 1295–1328.

[DCGR16] Iman Dayarian, Teodor Gabriel Crainic, Michel Gendreau, and Walter Rei. An adaptive large-neighborhood search heuristic for a multi-period vehicle routing problem. Transportation Research Part E: Logistics and Transportation Review 95 (2016), 95–123.

[GBS+07] Val Gebski, Bryan Burmeister, B Mark Smithers, Kerwyn Foo, John Zalcberg, John Simes, Australasian Gastro-Intestinal Trials Group, et al. Survival benefits from neoadjuvant chemoradiotherapy or chemotherapy in oesophageal carcinoma: a meta-analysis. The Lancet Oncology 8 (2007), no. 3, 226–234.

[KC16] Merve Keskin and Bulent Catay. Partial recharge strategies for the electric vehicle routing problem with time windows. Transportation Research Part C: Emerging Technologies 65 (2016), 111–127.

[KHS17] Alexander Kiefer, Richard F Hartl, and Alexander Schnell. Adaptive large neighborhood search for the curriculum-based course timetabling problem. Annals of Operations Research 252 (2017), no. 2, 255–282.

[Man16] Simona Mancini. A real-life multi depot multi period vehicle routing problem with a heterogeneous fleet: Formulation and adaptive large neighborhood search based matheuristic. Transportation Research Part C: Emerging Technologies 70 (2016), 100–112.

[MLAL17] Marcela Monroy-Licht, Ciro Alberto Amaya, and Andre Langevin. Adaptive large neighborhood search algorithm for the rural postman problem with time windows. Networks 70 (2017), no. 1, 44–59.

[MLTA09] David Moher, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G Altman. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Annals of Internal Medicine 151 (2009), no. 4, 264–269.

[RP06] Stefan Ropke and David Pisinger. An adaptive large neighborhood search heuristic for the pickup and delivery problem with time windows. Transportation Science 40 (2006), no. 4, 455–472.

[San19] Vinicius Gandra Martins Santos. Tailored heuristics in adaptive large neighborhood search applied to the cutwidth minimization problem. 2019.

[SdC18] Vinicius Gandra Martins Santos and Marco Antonio Moreira de Carvalho. Adaptive large neighborhood search applied to the design of electronic circuits. Applied Soft Computing 73 (2018), 14–23.

[SRH18] Alberto Santini, Stefan Ropke, and Lars Magnus Hvattum. A comparison of acceptance criteria for the adaptive large neighbourhood search metaheuristic. Journal of Heuristics 24 (2018), no. 5, 783–815.

[TS18] Charles Thomas and Pierre Schaus. Revisiting the self-adaptive large neighborhood search. International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, Springer, 2018, pp. 557–566.

[TVL+18] Oscar Tellez, Samuel Vercraene, Fabien Lehuede, Olivier Peton, and Thibaud Monteiro. The fleet size and mix dial-a-ride problem with reconfigurable vehicle capacity. Transportation Research Part C: Emerging Technologies 91 (2018), 99–123.