TRANSCRIPT
1
Motivation, Basic Concepts, Basic Methods,
Travelling Salesman Problem, Algorithms
2
What is Combinatorial Optimization?
• Combinatorial Optimization deals mainly with problems where we have to choose an optimal solution from a finite (or sometimes countable) number of possibilities.
• Definition:
– A finite set E = {e_1, e_2, …, e_n}.
– A collection F of feasible subsets of E.
– A cost function c: the cost function assigns a cost c(e) to each
element of the set.
– Find a subset S in F such that the total cost, the sum of c(e) over all elements e in S, is minimized (maximized).
3
What is Combinatorial Optimization?
• You have a collection of 10,000 objects.
• Each object has a different value: Object 1: $10, Object 2: $5, …, Object n: $7.
• Can you find a subset of objects whose total value is $1,000,000?
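Checking this by brute force means examining subsets one at a time. A minimal sketch, with hypothetical object values, is shown below; it is feasible only for a handful of objects, since n objects have 2^n subsets:

```matlab
% Brute-force subset search (illustration only; hypothetical values).
% With n objects there are 2^n subsets, so this is hopeless for n = 10,000.
values = [10 5 7 22 13];   % example object values in dollars
target = 35;               % example target sum
n = numel(values);
for mask = 0:2^n-1
    subset = values(bitget(mask, 1:n) == 1);   % pick objects by bit mask
    if sum(subset) == target
        fprintf('Found subset: %s\n', sprintf('$%d ', subset));
        break;
    end
end
```

For the 10,000-object problem, the 2^10000 subsets make exhaustive search impossible, which motivates the methods that follow.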
4
• Exhaustive Search suffers from a serious problem—as the number of variables increases, the number of combinations to be examined explodes.
• For example, consider a problem with 20 discrete variables which can each take on 25 different values.
• The number of combinations equals 25^20 ≈ 9.1 × 10^27.
• Even if we could reduce the search space by 99.99%, we would still have to search about 9.1 × 10^23 combinations!
5
• In contrast, evolutionary algorithms do not suffer from this problem.
• These algorithms are so-named because they mimic natural processes that govern how nature evolves.
• These algorithms do not attempt to examine the entire space.
• Even so, they have been shown to provide good solutions.
6
• The annealing of solids is a phenomenon in nature.
• The term “annealing” refers to the process in which a solid, that has been brought into liquid phase by increasing its temperature, is brought back to a solid phase by slowly reducing the temperature in such a way that all the particles are allowed to arrange themselves in the strongest crystallized state. Such a crystallized state represents the global minimum of the solid’s energy function.
• The cooling process has to be slow enough in order to guarantee that the particles will have time to rearrange themselves and find “local-optimum” position for the given temperature.
• Once the particles have reached this local-optimum position, the substance has reached thermal equilibrium. Once thermal equilibrium is reached, the temperature is lowered once again, and the process continues.
• At the completion, atoms assume a nearly globally minimum energy state.
7
• The particles’ range of motion is proportional to the temperature, according to the Boltzmann distribution:
P(E) ∝ exp(−E / (k_B · T))
where E is the energy, k_B is Boltzmann’s constant, and T is the temperature.
• In other words, at higher temperatures, the particles have a greater range of motion than at lower temperatures.
• The higher temperature at the beginning of the annealing process allows the particles to move a greater range than at lower temperatures.
• As the temperature decreases, the particles’ range of motion decreases as well.
• The particles’ probability of movement is also inversely related to the change of energy from one temperature state to another: as the change in energy becomes larger, the probability of the movement becomes smaller.
8
Dependence on Temperature and Cooling Schedule
• The annealing process consists of first raising the temperature of a solid to a point where its atoms can freely (i.e., randomly) move and then to lower the temperature, forcing the atoms to rearrange themselves into a lower energy state (i.e., a crystallization process).
• The cooling schedule is vital in this process. If the solid is cooled too quickly, or if the initial temperature of the system is too low, it is not able to become a crystal and instead the solid arrives at an amorphous state with higher energy.
• In this case, the system reaches a local minimum (a higher energy state) instead of the global minimum (i.e., the minimal energy state).
• For example, if you let metal cool rapidly, its atoms aren’t given a chance to settle into a tight lattice and are frozen in a random configuration, resulting in brittle metal.
• If we decrease the temperature very slowly, the atoms are given enough time to settle into a strong crystal.
9
Application to Combinatorial Optimization Problems
• The idea of the annealing process may be applied to combinatorial optimization problems.
[1]Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward (1953). "Equation of State Calculations by Fast Computing Machines". The Journal of Chemical Physics 21 (6): 1087. Bibcode:1953JChPh..21.1087M. doi:10.1063/1.1699114.
[2]Kirkpatrick, S.; Gelatt Jr, C. D.; Vecchi, M. P. (1983). "Optimization by Simulated Annealing". Science 220 (4598): 671–680. Bibcode:1983Sci...220..671K. doi:10.1126/science.220.4598.671. JSTOR 1690046. PMID 17813860.
10
Simulated Annealing Algorithm: Basic Structure
• The basic simulated annealing algorithm can be described as an iterative procedure composed of two loops: an outer loop and a nested inner loop.
• The inner loop simulates the goal of attaining thermal equilibrium at a given temperature. This is called the “Thermal Equilibrium Loop”.
• The outer loop performs the cooling process, in which the temperature is decreased from its initial value towards zero until a certain termination criterion is achieved and the search is stopped. This is called the “Cooling Loop”.
• The algorithm starts by initializing several parameters:
– The initial temperature is set to a very high value (to mimic the temperature setting of the initial natural annealing process). This is needed to allow the algorithm to search a wide breadth of solutions initially.
– The initial solution is created. Usually, the initial solution is chosen randomly.
– The number of times to go through the inner loop and the outer loop is set.
11
• Each time the Thermal Equilibrium Loop (i.e., the inner loop) is called, it is run with a constant temperature, and the goal of the inner loop is to find “a best” solution for the given temperature to attain thermal equilibrium.
• Each iteration through the Thermal Equilibrium Loop, the algorithm performs the following:
– A small random perturbation of the currently held, best-so-far solution is made to create a new candidate solution. The objective is to search the solution space for the “best” candidate solution for the given temperature. Since the algorithm does not know which direction to search, it picks a random direction by randomly perturbing the current solution.
– The “goodness” of a solution is quantified by a cost function. For example, the cost function for the TSP is the sum of the distances between each successive city in the solution list of cities to visit.
– We can think of the solution space by imagining a cost surface in hyperspace. Each point on the surface represents a cost of a candidate solution, and the algorithm wants to go to the minimum point of that cost surface.
– A small random perturbation is made to the current solution because it is believed that good solutions are generally close to each other. But, this is not guaranteed to be true all of the time, because it depends on the problem and the distribution of solutions in the solution space.
12
• Each iteration through the Thermal Equilibrium Loop, continued:
– Sometimes, the random perturbation results in a better solution:
• If the cost of the new candidate solution is lower than the cost of the previous solution, i.e., the random perturbation results in a better solution, then the new solution is kept and replaces the previous solution.
– Sometimes, the random perturbation results in a worse solution, in which case the algorithm makes a decision to keep or discard this worse solution:
• The decision outcome depends on an evaluation of a probability function, which depends on the temperature of the current loop.
• The higher the temperature, the more likely the algorithm is to keep a worse solution.
• Keeping a worse solution allows the algorithm to explore the solution space and keeps it from being trapped in a local minimum.
• The lower the temperature, the less likely the algorithm is to keep a worse solution.
• Discarding a worse solution allows the algorithm to exploit a local optimum, which might be the global optimum.
13
• Each iteration through the Thermal Equilibrium Loop, continued:
– The decision outcome depends on an evaluation of an estimation [1] of the Boltzmann probability density function:
P = exp(−ΔE / (k_B · T))
– For a real annealing process, ΔE is the change in energy of the atoms between the previous temperature state and the current temperature state, where the energy is given by the potential and kinetic energy of the atoms in the substance at the given temperature of the state.
– For simulated annealing, ΔE may be estimated by the change in the cost function: the difference between the cost of the previously found best solution at its temperature state and the cost of the new candidate solution at the current temperature state.
– Boltzmann's constant k_B may be estimated by the average change in cost, ΔE_avg, taken over the iterations of the inner loop. Therefore:
P = exp(−ΔE / (ΔE_avg · T))
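As a quick numerical illustration of the acceptance rule above (the cost values are arbitrary, chosen only for illustration), the same cost increase is accepted far more readily at a high temperature than at a low one:

```matlab
% Acceptance probability P = exp(-DeltaE/(DeltaE_avg*T)) for a worse move.
% DeltaE and DeltaE_avg are arbitrary illustrative values.
DeltaE = 1.5; DeltaE_avg = 1.0;
for T = [2.0 0.2]
    P = exp(-DeltaE/(DeltaE_avg * T));
    fprintf('T = %.1f -> P = %.4f\n', T, P);
end
```

Here P ≈ 0.47 at T = 2.0, but only P ≈ 0.0006 at T = 0.2.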
14
• Each iteration through the Thermal Equilibrium Loop, continued:
– The decision outcome depends on an evaluation of an estimation [1] of the Boltzmann probability density function:
P = exp(−ΔE / (ΔE_avg · T))
– As can be seen, this probability increases with the temperature and decreases with the normalized change in cost ΔE/ΔE_avg of the current solution.
15
• Each iteration through the Thermal Equilibrium Loop, continued:
– Thus, in one iteration of the inner loop, the algorithm will either find and keep a better solution, keep a worse solution with probability P, or make no change and keep the previously found best solution.
• The algorithm continues running the inner loop and the above procedure for a number of times.
• After running the inner loop many times, where in each loop it takes on a new better solution, or takes on a worse solution, or keeps the previously found best solution, the algorithm may be viewed as taking a random walk in the solution space, looking for a stable sub-optimal solution for the given temperature of the inner loop.
• After having found a stable sub-optimal solution for the given temperature of the inner loop, the process is said to have reached thermal equilibrium.
• At this point the inner loop completes, and the algorithm goes to the outer loop.
16
• The outer loop (i.e., the Cooling Loop) performs the following:
– The current best solution is recorded as the optimal solution.
– The temperature is decreased, according to some schedule.
• The initial temperature is set to a very high value (to mimic the initial temperature of the natural annealing process). This is needed to allow the algorithm to search a wide breadth of solutions initially.
• The final temperature should be set to some low value to prevent the algorithm from accepting worse solutions at the late stages of the process.
– The number of outer loops is decremented.
– If the number of outer loops hasn’t reached zero, then the inner loop is called once again; otherwise the algorithm terminates.
17
[Flowchart: basic simulated annealing (nCL = numCoolingLoops, nEL = numEquilibriumLoops). Set nCL and the initial temperature iTemp; get an initial route and compute its distance. Equilibrium Loop (the temperature is held constant while the system reaches equilibrium, i.e., until the “best” route is found for the given temperature): set nEL; perturb the route; compute the distance of the new route; if the route is worse, find the probability of acceptance P and accept only if P exceeds a generated random number; decrement nEL and repeat until nEL == 0. Then reduce the temperature, decrement nCL, and repeat the Equilibrium Loop until nCL == 0; then done.]
18
Next Step
• Application of simulated annealing to the Traveling Salesman Problem.
19
• Choosing a temperature schedule.
• Initially, we need to obtain a random list of cities (a route).
– This can be done by permuting a given list.
• Perturbing a city list.
– There are many ways to do this, but we will show a simple way.
• Computing the cost function, i.e., computing the distance of a route.
– We present two different methods.
• Basic simulated annealing flow chart.
• MATLAB code for the flow chart.
• Results for the 29-city data set.
20
Choosing a Temperature Schedule
• There are many different ways in which to choose a temperature schedule.
21
• Set the probability level, P_s, that you would like to have at the beginning of the optimization that a worse design could be accepted.
• Do the same for the probability level at the end of the optimization, P_e.
• Then, if we assume ΔE/ΔE_avg ≈ 1 (which is clearly true at the start of the optimization), then from P = exp(−ΔE/(ΔE_avg · T)):
T_s = −1 / ln(P_s)
T_e = −1 / ln(P_e)
22
Choosing How to Decrease the Temperature
• Select the total number of cycles N.
• For each Cooling Loop (outer loop) the temperature can be decreased as follows:
T_{i+1} = f · T_i, where f = (T_e / T_s)^(1/(N−1))
• T_{i+1} is the temperature for the next cycle and T_i is the current temperature.
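In MATLAB, the schedule above can be computed directly. A minimal sketch, using the probability levels that the full program later in these slides uses:

```matlab
pStart = 0.6;    % probability of accepting a worse design at the start
pEnd = 0.001;    % probability of accepting a worse design at the end
N = 100;         % total number of cooling cycles
tStart = -1.0/log(pStart);          % T_s, about 1.958
tEnd   = -1.0/log(pEnd);            % T_e, about 0.145
f = (tEnd/tStart)^(1.0/(N-1.0));    % per-cycle reduction factor, about 0.974
T = tStart;
for i = 1:N-1
    T = f * T;   % after N-1 reductions, T equals tEnd
end
```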
23
• Example:
• Let P_s = 0.6, P_e = 0.001, and N = 100.
• Then T_s = −1/ln(0.6) ≈ 1.958 and T_e = −1/ln(0.001) ≈ 0.145.
• Therefore, f = (0.145/1.958)^(1/99) ≈ 0.974.
24
Generate Initial City Route List
• Requirements
– Write MATLAB code to generate a list of cities, where
• Cities are chosen according to a uniform random distribution
• No city appears more than once
• All cities are included in the list.
• Options
– Write a custom routine to generate a list of cities
– Use MATLAB’s library routine randperm.
25
Generate Initial City Route (Using MATLAB Library randperm)
• P = randperm(N) returns a vector containing a random permutation of the integers 1:N.
• For example, randperm(6) might be [2 4 5 6 1 3].
• Therefore, to get an initial city route:
cityRoute = randperm(10);
g = sprintf('%d ', cityRoute);
fprintf('City route before = %s\n', g);
cityRoute = randperm(10);
g = sprintf('%d ', cityRoute);
fprintf('City route after = %s\n', g);
Example Output:
City route before = 1 9 7 5 4 10 8 6 3 2 City route after = 1 6 2 3 5 9 4 8 10 7
26
Generate Initial City Route (Using Custom Function)
• The city list is stored in a vector of N components, where N is the number of cities.
• One potential way to permute the list:
– Starting from the first component in the list, for every component of the list, choose a component randomly.
– Swap the positions of these two components in the list.

for i = 1:numCities
    randIndex = randi(numCities);
    temp = cities(i);
    cities(i) = cities(randIndex);
    cities(randIndex) = temp;
end
• Without changing the random seed, this will create the same random permutation each time it is run. A different seed is required to create a different permutation.
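The seed can be controlled with MATLAB's rng function; a short sketch:

```matlab
rng(42);                 % fix the seed: the permutation is reproducible
a = randperm(10);
rng(42);
b = randperm(10);        % identical to a, since the seed was reset
rng('shuffle');          % seed from the clock for a new permutation each run
c = randperm(10);
```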
27
Generate Initial City Route (Using Custom Function)

cityRoute = randperm(10);
g = sprintf('%d ', cityRoute);
fprintf('City route before = %s\n', g);
numCities = size(cityRoute', 1);
for i = 1:numCities
    randIndex = randi(numCities);
    temp = cityRoute(i);
    cityRoute(i) = cityRoute(randIndex);
    cityRoute(randIndex) = temp;
end
g = sprintf('%d ', cityRoute);
fprintf('City route after = %s\n', g);
Example Output:
City route before = 4 2 6 7 10 3 8 1 5 9 City route after = 7 3 4 1 10 2 6 9 5 8
28
Perturbing the City Route List
• For each iteration of the Equilibrium Loop, the SA algorithm needs to perturb the city route list (cityRoute).
• There are many ways in which the cityRoute may be perturbed.
• One potential way:
– Choose two indices of the list at random, using a uniform distribution.
– Swap the positions of these two cities in the list.
– For example, if indices 1 and 3 were chosen at random, then city 3 would swap position with city 4.

Before:
Index: 1 2 3 4 5
City:  3 2 4 1 5

After:
Index: 1 2 3 4 5
City:  4 2 3 1 5
29
Perturbing the City Route List

cityRoute = randperm(5);
randIndex1 = randi(5);
alreadyChosen = true;
while alreadyChosen == true
    randIndex2 = randi(5);
    if randIndex2 ~= randIndex1
        alreadyChosen = false;
    end
end
fprintf('Random index 1 = %d\n', randIndex1);
fprintf('Random index 2 = %d\n', randIndex2);
g = sprintf('%d ', cityRoute);
fprintf('City route before = %s\n', g);
dummy = cityRoute(randIndex1);
cityRoute(randIndex1) = cityRoute(randIndex2);
cityRoute(randIndex2) = dummy;
g = sprintf('%d ', cityRoute);
fprintf('City route after = %s\n', g);

Example Output:
Random index 1 = 4
Random index 2 = 5
City route before = 2 3 5 1 4
City route after = 2 3 5 4 1
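The full program on a later slide calls a helper function perturbRoute, which is not listed in this transcript. A minimal implementation consistent with the swap method shown above might be (the function and argument names are taken from that later call):

```matlab
function newRoute = perturbRoute(numCities, route)
% Return a copy of route with two distinct, randomly chosen cities swapped.
    newRoute = route;
    randIndex1 = randi(numCities);
    randIndex2 = randi(numCities);
    while randIndex2 == randIndex1
        randIndex2 = randi(numCities);   % re-draw until the indices differ
    end
    newRoute(randIndex1) = route(randIndex2);
    newRoute(randIndex2) = route(randIndex1);
end
```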
30
• A city-distance data set may be represented in many ways.
• Examples:
1. A two-dimensional array, where each element d(i,j) denotes the distance between city i and city j:

d(1,1) d(1,2) d(1,3) d(1,4) d(1,5)
d(2,1) d(2,2) d(2,3) d(2,4) d(2,5)
d(3,1) d(3,2) d(3,3) d(3,4) d(3,5)
d(4,1) d(4,2) d(4,3) d(4,4) d(4,5)
d(5,1) d(5,2) d(5,3) d(5,4) d(5,5)

2. Each entry in a list represents the Euclidean coordinates of the city, so the distances must be computed.
31
• Assume the data set is a two-dimensional array, where each element d(i,j) denotes the distance between city i and city j.
• Let cR represent the city route, where cR is a list of city indices.
• The cost function is the round-trip distance D:

D = d(cR(1),cR(2)) + d(cR(2),cR(3)) + … + d(cR(n−1),cR(n)) + d(cR(n),cR(1))
32
Computing Route Cost: Example
• The cost function is the round-trip distance D. For the route

Index: 1 2 3 4 5
City:  3 2 4 1 5

the round-trip distance is D = d(3,2) + d(2,4) + d(4,1) + d(1,5) + d(5,3), with the distances taken from the array:

d(1,1) d(1,2) d(1,3) d(1,4) d(1,5)
d(2,1) d(2,2) d(2,3) d(2,4) d(2,5)
d(3,1) d(3,2) d(3,3) d(3,4) d(3,5)
d(4,1) d(4,2) d(4,3) d(4,4) d(4,5)
d(5,1) d(5,2) d(5,3) d(5,4) d(5,5)
33
Computing Route Cost: Example
• Now assume the data is stored in a one-dimensional vector (row by row), and the distances are symmetric. That is, d(i,i) = 0.00 for every i, and d(i,j) = d(j,i) for every i and j; for example, d(1,2) = d(2,1) and d(3,5) = d(5,3). The vector then stores the two-dimensional array

d(1,1) d(1,2) d(1,3) d(1,4) d(1,5)
d(2,1) d(2,2) d(2,3) d(2,4) d(2,5)
d(3,1) d(3,2) d(3,3) d(3,4) d(3,5)
d(4,1) d(4,2) d(4,3) d(4,4) d(4,5)
d(5,1) d(5,2) d(5,3) d(5,4) d(5,5)
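The mapping between the two layouts is simple: if the n-by-n matrix is stored row by row in a one-dimensional vector, element d(i,j) sits at linear index (i-1)*n + j. A small sketch checking this, using an arbitrary 5x5 matrix as a stand-in:

```matlab
n = 5;
Dmat = magic(n);                    % any 5x5 matrix, as a stand-in
Distances = reshape(Dmat', 1, []);  % flatten row by row into a vector
i = 3; j = 2;
isequal(Distances((i-1)*n + j), Dmat(i,j))   % true
```

This (i-1)*n + j indexing is exactly what the cost-function code on a later slide uses to look up distances in the one-dimensional file.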
34
Computing Route Cost: Example

Index: 1 2 3 4 5
City:  3 2 4 1 5

d(1,1) = 0.00    d(1,2) = d(2,1)  d(1,3) = d(3,1)  d(1,4) = d(4,1)  d(1,5) = d(5,1)
d(2,1) = d(1,2)  d(2,2) = 0.00    d(2,3) = d(3,2)  d(2,4) = d(4,2)  d(2,5) = d(5,2)
d(3,1) = d(1,3)  d(3,2) = d(2,3)  d(3,3) = 0.00    d(3,4) = d(4,3)  d(3,5) = d(5,3)
d(4,1) = d(1,4)  d(4,2) = d(2,4)  d(4,3) = d(3,4)  d(4,4) = 0.00    d(4,5) = d(5,4)
d(5,1) = d(1,5)  d(5,2) = d(2,5)  d(5,3) = d(3,5)  d(5,4) = d(4,5)  d(5,5) = 0.00
35
Computing Route Cost: Example

Index: 1 2 3 4 5
City:  3 2 4 1 5

d(1,1) = 0.00   d(1,2) = 12.00  d(1,3) = 13.00  d(1,4) = 14.00  d(1,5) = 15.00
d(2,1) = 12.00  d(2,2) = 0.00   d(2,3) = 23.00  d(2,4) = 24.00  d(2,5) = 25.00
d(3,1) = 13.00  d(3,2) = 23.00  d(3,3) = 0.00   d(3,4) = 34.00  d(3,5) = 35.00
d(4,1) = 14.00  d(4,2) = 24.00  d(4,3) = 34.00  d(4,4) = 0.00   d(4,5) = 45.00
d(5,1) = 15.00  d(5,2) = 25.00  d(5,3) = 35.00  d(5,4) = 45.00  d(5,5) = 0.00

Stored as a one-dimensional vector (row by row):
00.00 12.00 13.00 14.00 15.00 12.00 00.00 23.00 24.00 25.00 13.00 23.00 00.00 34.00 35.00 14.00 24.00 34.00 00.00 45.00 15.00 25.00 35.00 45.00 00.00
36
• The cost function extracts the city distances from the one-dimensional file:

Index: 1 2 3 4 5
City:  3 2 4 1 5

00.00 12.00 13.00 14.00 15.00 12.00 00.00 23.00 24.00 25.00 13.00 23.00 00.00 34.00 35.00 14.00 24.00 34.00 00.00 45.00 15.00 25.00 35.00 45.00 00.00
37
Computing Route Cost: MATLAB Code with Example

cityRoute = [3 2 4 1 5];
Distances = load('5x5Symmetric.txt');
D = 0; n = 5;
for i = 1:n-1
    D = D + Distances((cityRoute(i)-1)*n + cityRoute(i+1));
end
D = D + Distances((cityRoute(n)-1)*n + cityRoute(1));

cityRoute:
Index: 1 2 3 4 5
City:  3 2 4 1 5

Distances:
00.00 12.00 13.00 14.00 15.00 12.00 00.00 23.00 24.00 25.00 13.00 23.00 00.00 34.00 35.00 14.00 24.00 34.00 00.00 45.00 15.00 25.00 35.00 45.00 00.00

Output:
D = 111
38
Computing Route Cost: Euclidean Distance Format (TSPLIB)
• TSPLIB is a standard format for representing cities for the travelling salesman problem (TSP).
• Each entry in the file denotes the Euclidean coordinates of the city.
• To determine the distance of a TSP route, the Euclidean metric is used:

d(i,j) = sqrt((x_i − x_j)^2 + (y_i − y_j)^2)

Example: a 10x2 cityCoords (cC) array that holds the coordinates for each city in a route:

City  x-coordinate  y-coordinate
 1    20900.0000    17066.6667
 2    21300.0000    13016.6667
 3    21600.0000    14150.0000
 4    21600.0000    14966.6667
 5    21600.0000    16500.0000
 6    22183.3333    13133.3333
 7    22583.3333    14300.0000
 8    22683.3333    12716.6667
 9    23616.6667    15866.6667
10    23700.0000    15933.3333
39
Computing Route Cost: MATLAB Code with Example

cC = load('EUC_2D_29.txt');
D = 0; n = 29;
for i = 1:n-1
    D = D + sqrt((cC(i,1) - cC(i+1,1))^2 + (cC(i,2) - cC(i+1,2))^2);
end
D = D + sqrt((cC(n,1) - cC(1,1))^2 + (cC(n,2) - cC(1,2))^2);
D

EUC_2D_29.txt:
20833.3333 17100.0000
20900.0000 17066.6667
21300.0000 13016.6667
21600.0000 14150.0000
21600.0000 14966.6667
21600.0000 16500.0000
22183.3333 13133.3333
22583.3333 14300.0000
22683.3333 12716.6667
23616.6667 15866.6667
23700.0000 15933.3333
23883.3333 14533.3333
24166.6667 13250.0000
25149.1667 12365.8333
26133.3333 14500.0000
26150.0000 10550.0000
26283.3333 12766.6667
26433.3333 13433.3333
26550.0000 13850.0000
26733.3333 11683.3333
27026.1111 13051.9444
27096.1111 13415.8333
27153.6111 13203.3333
27166.6667 9833.3333
27233.3333 10450.0000
27233.3333 11783.3333
27266.6667 10383.3333
27433.3333 12400.0000
27462.5000 12992.2222

MATLAB output:
D = 5.2284e+04
40
• Usually in TSP problems, the city route is stored in city index format, rather than coordinate format.
• For example, cityRoute (cR): 9 18 16 7 12 4 10 15 24 27 17 11 25 6 20 22 14 21 28 29 13 5 1 23 19 3 2 26 8
• An entry in the route denotes the index of the city.
• The index references the distance file.
• To determine the distance of the route stored in the index format, we must use the index in the route array to extract the coordinates from the input file.
• For example, let the file array be called cC and the route array be called cR. Then the round-trip distance is:

D = sqrt((cC(cR(1),1) − cC(cR(2),1))^2 + (cC(cR(1),2) − cC(cR(2),2))^2) + …
    + sqrt((cC(cR(n),1) − cC(cR(1),1))^2 + (cC(cR(n),2) − cC(cR(1),2))^2)

(The input file is EUC_2D_29.txt, as listed on slide 39.)
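The full program on a later slide calls computeEUCDistance, which is not listed in this transcript. An implementation consistent with the index-format lookup described above might be (the function and argument names are taken from that later call):

```matlab
function D = computeEUCDistance(numCities, cC, cR)
% Round-trip Euclidean distance of route cR over coordinate array cC.
    D = 0;
    for i = 1:numCities-1
        D = D + sqrt((cC(cR(i),1) - cC(cR(i+1),1))^2 + ...
                     (cC(cR(i),2) - cC(cR(i+1),2))^2);
    end
    % close the tour: last city back to the first
    D = D + sqrt((cC(cR(numCities),1) - cC(cR(1),1))^2 + ...
                 (cC(cR(numCities),2) - cC(cR(1),2))^2);
end
```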
41
[Simulated annealing flowchart, repeated from slide 17.]
42
clc; clear; close all;
cC = load('EUC_2D_29.txt');
numCities = size(cC,1);
x = cC(1:numCities, 1); x(numCities+1) = cC(1,1);
y = cC(1:numCities, 2); y(numCities+1) = cC(1,2);

figure
hold on
plot(x', y', '.k', 'MarkerSize', 14)
labels = cellstr(num2str([1:numCities]'));
text(x(1:numCities)', y(1:numCities)', labels, ...
    'VerticalAlignment', 'bottom', ...
    'HorizontalAlignment', 'center');
ylabel('Y Coordinate', 'fontsize', 18, 'fontname', 'Arial');
xlabel('X Coordinate', 'fontsize', 18, 'fontname', 'Arial');
title('City Coordinates', 'fontsize', 20, 'fontname', 'Arial');
43
numCoolingLoops = 1100;
numEquilbriumLoops = 100;
pStart = 0.6;    % Probability of accepting worse solution at the start
pEnd = 0.001;    % Probability of accepting worse solution at the end
tStart = -1.0/log(pStart);   % Initial temperature
tEnd = -1.0/log(pEnd);       % Final temperature
frac = (tEnd/tStart)^(1.0/(numCoolingLoops-1.0));  % Fractional temperature reduction
cityRoute_i = randperm(numCities);   % Get initial route
cityRoute_b = cityRoute_i;           % Best route
cityRoute_j = cityRoute_i;           % Current route
cityRoute_o = cityRoute_i;           % Optimal route
% Initial distances
D_j = computeEUCDistance(numCities, cC, cityRoute_i);
D_o = D_j; D_b = D_j; D(1) = D_o;
numAcceptedSolutions = 1.0;
tCurrent = tStart;   % Current temperature = initial temperature
DeltaE_avg = 0.0;    % DeltaE average
44
for i = 1:numCoolingLoops
    disp(['Cycle: ', num2str(i), ' starting temp: ', num2str(tCurrent)])
    for j = 1:numEquilbriumLoops
        cityRoute_j = perturbRoute(numCities, cityRoute_b);
        D_j = computeEUCDistance(numCities, cC, cityRoute_j);
        DeltaE = abs(D_j - D_b);
        if (D_j > D_b)   % objective function is worse
            if (i==1 && j==1), DeltaE_avg = DeltaE; end
            p = exp(-DeltaE/(DeltaE_avg * tCurrent));
            if (p > rand()), accept = true; else accept = false; end
        else
            accept = true;   % objective function is better
        end
45
        if (accept == true)
            cityRoute_b = cityRoute_j;
            D_b = D_j;
            numAcceptedSolutions = numAcceptedSolutions + 1.0;
            DeltaE_avg = (DeltaE_avg * (numAcceptedSolutions-1.0) + ...
                DeltaE) / numAcceptedSolutions;
        end
    end
    tCurrent = frac * tCurrent;   % Lower temp for next cooling cycle
    cityRoute_o = cityRoute_b;    % Record the best route
    D(i+1) = D_b;                 % Record each route distance
    D_o = D_b;
end
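After the loops finish, the best route and its distance can be reported and the convergence history plotted. A minimal sketch, using the variable names from the program above:

```matlab
fprintf('Best route distance found: %.1f\n', D_o);
g = sprintf('%d ', cityRoute_o);
fprintf('Best route: %s\n', g);
figure
plot(0:numCoolingLoops, D, '-b')   % D(1) is the initial route distance
xlabel('Cooling loop');
ylabel('Route distance');
```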
46
47
Best Route Distance Found: 29,702.3 m
48
49
[1] J. D. Hedengren, "Optimization Techniques in Engineering," 5 April 2015. [Online]. Available: http://apmonitor.com/me575/index.php/Main/HomePage. [Accessed 27 April 2015].
[2] A. R. Parkinson, R. J. Balling and J. D. Hedengren, "Optimization Methods for Engineering Design: Applications and Theory," Brigham Young University, 2013.