NOVEL NATURE INSPIRED POPULATION-BASED METAHEURISTIC
ALGORITHMS FOR OPTIMISATION
Rethishkumar S 1, Dr. R. Vijayakumar 2
1 Research Scholar, School of Computer Sciences, Mahatma Gandhi University, Kottayam, Kerala
2 Professor, School of Computer Sciences, Mahatma Gandhi University, Kottayam, Kerala
Abstract- Nature-inspired optimization algorithms provide a systematic approach to all major nature-inspired methods for optimization. The unified treatment here, balancing algorithm introduction, theoretical background and practical implementation, complements the extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, the firefly algorithm, the bat algorithm, the flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning and control, as well as multi-objective optimization. This work can serve as an introduction for graduates, doctoral students and lecturers in computer science, engineering and the natural sciences. It can also serve as a source of inspiration for new applications. Researchers and engineers as well as experienced experts will also find it a handy reference.
Keywords- Ant colony, honey bee, mosquito flying behaviour, moth flying, bat, firefly, frog leaping, cuckoo search, fish flow, bacterial foraging, flower pollination, genetic algorithms and other nature-inspired algorithms, and particle swarm optimization algorithms.
I. INTRODUCTION
Nature is a great source of inspiration for solving complex problems in networks and helps in finding optimal solutions. Metaheuristic algorithms are nature-inspired methods that help solve the routing problem in networks. The dynamic features, frequently changing topology and limited bandwidth make routing challenging in MANETs. Implementing appropriate routing algorithms leads to efficient data transmission in mobile ad hoc networks. Algorithms inspired by the naturally distributed, collective behaviour of social colonies have shown excellence in dealing with complex optimization problems. Thus, several bio-inspired metaheuristic algorithms help to increase the efficiency of routing in ad hoc networks. This survey presents an overview of bio-inspired metaheuristics that support efficient routing in mobile ad hoc networks.
II. TYPES OF NATURE-INSPIRED ALGORITHMS
The latest developed and widely used bio-inspired metaheuristic algorithms for optimisation problems are described below. They mark a new era in computer science for finding optimal solutions to classification and clustering problems. Depending on the context, the best-suited algorithm can be used for a given optimisation problem.
(i). Ant Colony Optimization
ACO is a population-based metaheuristic that can be used to find approximate solutions to difficult
optimization problems. In ACO, a set of software agents called artificial ants search for good solutions to a
given optimization problem [1]. To apply ACO, the optimization problem is transformed into the problem of
finding the best path on a weighted graph. The artificial ants (hereafter ants) incrementally build solutions by
moving on the graph. The solution construction process is stochastic and is biased by a pheromone model, that
is, a set of parameters associated with graph components (either nodes or edges) whose values are modified at
runtime by the ants.
The first step in applying ACO to a combinatorial optimization problem (COP) consists in defining a model of the COP as a triplet (S, Ω, f), where:
S is a search space defined over a finite set of discrete decision variables;
Ω is a set of constraints among the variables; and
f : S → R_0^+ is an objective function to be minimized (as maximizing over f is the same as minimizing over −f, every COP can be described as a minimization problem).
Compliance Engineering Journal
Volume 10, Issue 8, 2019
ISSN NO: 0898-3577
Page No: 352
The search space S is defined as follows. A set of discrete variables X_i, i = 1, …, n, with values v_i^j ∈ D_i = {v_i^1, …, v_i^{|D_i|}}, is given. Elements of S are full assignments, that is, assignments in which each variable X_i has a value v_i^j assigned from its domain D_i. The set of feasible solutions S_Ω is given by the elements of S that satisfy all the constraints in the set Ω.
A solution s* ∈ S_Ω is called a global optimum if and only if
f(s*) ≤ f(s) for all s ∈ S_Ω.
The set of all globally optimal solutions is denoted by S*_Ω ⊆ S_Ω. Solving a COP requires finding at least one s* ∈ S*_Ω.
In the ant colony optimization metaheuristic, artificial ants build a solution to a combinatorial optimization problem by traversing a fully connected construction graph, defined as follows. First, each instantiated decision variable X_i = v_i^j is called a solution component and denoted by c_ij. The set of all possible solution components is denoted by C. Then the construction graph G_C(V, E) is defined by associating the components C either with the set of vertices V or with the set of edges E. A pheromone trail value τ_ij is associated with each component c_ij. (Note that pheromone values are in general a function of the algorithm's iteration t: τ_ij = τ_ij(t).) Pheromone values allow the probability distribution of different components of the solution to be modelled. Pheromone values are used and updated by the ACO algorithm during the search [2].
The ants move from vertex to vertex along the edges of the construction graph exploiting information provided
by the pheromone values and in this way incrementally building a solution. Additionally, the ants deposit a
certain amount of pheromone on the components, that is, either on the vertices or on the edges that they traverse.
The amount Δτ of pheromone deposited may depend on the quality of the solution found. Subsequent ants utilize
the pheromone information as a guide towards more promising regions of the search space.
The ACO metaheuristic is:
Set parameters, initialize pheromone trails
SCHEDULE_ACTIVITIES
ConstructAntSolutions
DaemonActions {optional}
UpdatePheromones
END_SCHEDULE_ACTIVITIES
The metaheuristic consists of an initialization step and of three algorithmic components whose activation is
regulated by the SCHEDULE_ACTIVITIES construct. This construct is repeated until a termination criterion is
met. Typical criteria are a maximum number of iterations or a maximum CPU time.
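As an illustration, the scheme above can be sketched for a small travelling salesman instance. The distance matrix, ant count, and the parameters ALPHA (pheromone weight), BETA (heuristic weight), RHO (evaporation) and Q (deposit) below are illustrative choices, not values prescribed by this survey:

```python
import random

# Minimal ACO sketch on a small symmetric travelling salesman instance.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def construct_tour(tau):
    # ConstructAntSolutions: build a tour component by component,
    # biased by pheromone tau and the heuristic 1/distance
    tour = [random.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [(tau[i][j] ** ALPHA) * ((1.0 / DIST[i][j]) ** BETA)
                   for j in choices]
        tour.append(random.choices(choices, weights)[0])
    return tour

def aco(n_ants=10, n_iter=50):
    tau = [[1.0] * N for _ in range(N)]
    best = None
    for _ in range(n_iter):  # the SCHEDULE_ACTIVITIES loop
        tours = [construct_tour(tau) for _ in range(n_ants)]
        for i in range(N):  # UpdatePheromones: evaporation
            for j in range(N):
                tau[i][j] *= (1 - RHO)
        for t in tours:  # UpdatePheromones: deposit, weighted by quality
            d = Q / tour_length(t)
            for k in range(N):
                a, b = t[k], t[(k + 1) % N]
                tau[a][b] += d
                tau[b][a] += d
        cand = min(tours, key=tour_length)
        if best is None or tour_length(cand) < tour_length(best):
            best = cand
    return best, tour_length(best)
```

On this instance the optimal tour (for example 0–1–3–2) has length 18, which the sketch typically finds within a few iterations.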
(ii). Honey Bee Optimisation Algorithm
In nature, honey bees exhibit several complicated behaviors such as mating, breeding and foraging. These behaviors have been mimicked in several honey-bee-based optimization algorithms. One famous algorithm inspired by the mating and breeding behavior of honey bees is Marriage in Honey Bees Optimization (MBO). The algorithm starts from a single queen without a family and proceeds to the development of a colony with one or more queens. In the literature, several versions of MBO have been proposed, such as Honey-Bees Mating Optimization (HBMO), Fast Marriage in Honey Bees Optimization (FMHBO) and the Honey-Bees Optimization (HBO).
The other type of bee-inspired algorithms mimics the foraging behavior of the honey bees [3]. These algorithms
use standard evolutionary or random explorative search to locate promising locations. Then the algorithms utilize
the exploitative search on the most promising locations to find the global optimum. The following algorithms
were inspired from foraging behavior of honey bees; Bee System (BS), Bee Colony Optimization (BCO),
Artificial Bee Colony (ABC) and The Bees Algorithm (BA). Bee System is an improved version of the Genetic
Algorithm (GA). The main purpose of the algorithm is to improve local search while keeping the global search
ability of GA.
Bee Colony Optimization (BCO) was proposed to solve combinatorial optimization problems. BCO has two phases, called the forward pass and the backward pass. A partial solution is generated in the forward pass stage using individual exploration and collective experience, and is then employed in the backward pass stage. In the backward pass stage, probability information is utilized to decide whether to continue exploring the current solution in the next forward pass or to start from the neighborhood of newly selected ones. The new one is determined using probabilistic techniques such as roulette wheel selection.
The ABC algorithm consists of the following bee groups, as in nature: employed bees, onlooker bees and scout bees. Employed bees randomly explore and return to the hive with information about the landscape. This explorative search information is shared with the onlooker bees. The onlooker bees evaluate this information with a probabilistic approach such as the roulette wheel method to start a neighborhood search. Meanwhile, the scout bees perform a random search to carry out the exploration.
The Bees Algorithm is very similar to ABC in the sense of having local and global search processes [4]. However, there is a difference between the two algorithms in the neighborhood search process. As mentioned above, ABC takes a probabilistic approach during the neighborhood stage, whereas the Bees Algorithm does not use any probabilistic approach, instead using fitness evaluation to drive the search. In the following section the Bees Algorithm is explained in detail.
The bees algorithm consists of an initialisation procedure and a main search cycle which is iterated for a given
number T of times, or until a solution of acceptable fitness is found [5]. Each search cycle is composed of five
procedures: recruitment, local search, neighbourhood shrinking, site abandonment, and global search.
Pseudocode for the standard bees algorithm
1 for i = 1, …, ns
   i  scout[i] = Initialise_scout()
   ii flower_patch[i] = Initialise_flower_patch(scout[i])
2 do until stopping_condition = TRUE
   i  Recruitment()
   ii for i = 1, …, nb
      1 flower_patch[i] = Local_search(flower_patch[i])
      2 flower_patch[i] = Site_abandonment(flower_patch[i])
      3 flower_patch[i] = Neighbourhood_shrinking(flower_patch[i])
   iii for i = nb, …, ns
      1 flower_patch[i] = Global_search(flower_patch[i])
In the initialisation routine, ns scout bees are randomly placed in the search space, and the fitness of the solutions where they land is evaluated. For each solution, a neighbourhood (called a flower patch) is delimited.
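The pseudocode above can be sketched as follows, minimising a simple two-dimensional function. The values of ns, nb, the recruit count, the patch size and the shrink rate are illustrative assumptions, and the site-abandonment step is omitted for brevity:

```python
import random

# A sketch of the bees algorithm on a 2-D sphere function.
def fitness(x):
    return sum(v * v for v in x)  # lower is better

def random_point(lo=-5.0, hi=5.0, dim=2):
    return [random.uniform(lo, hi) for _ in range(dim)]

def bees_algorithm(ns=10, nb=3, recruits=5, patch=1.0, shrink=0.8, T=100):
    # each site pairs a solution with its current flower-patch size
    sites = [(random_point(), patch) for _ in range(ns)]
    for _ in range(T):
        sites.sort(key=lambda s: fitness(s[0]))
        for i in range(nb):  # recruitment + local search at the best sites
            best, size = sites[i]
            improved = False
            for _ in range(recruits):
                cand = [v + random.uniform(-size, size) for v in best]
                if fitness(cand) < fitness(best):
                    best, improved = cand, True
            if not improved:
                size *= shrink  # neighbourhood shrinking on failure
            sites[i] = (best, size)
        for i in range(nb, ns):  # global search: fresh random scouts
            sites[i] = (random_point(), patch)
    return min(sites, key=lambda s: fitness(s[0]))[0]
```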
(iii). Mosquito flying optimisation
Mosquito Swarm Algorithm (MSA) is defined as a meta-heuristics algorithm based on the social
behavior of mosquito swarm. Mosquitoes (gnats) have sensors designed to track their prey [6].
A) By utilizing the Chemical sensing capacity, the mosquitoes can sense carbon dioxide and lactic acid up
to 36 meters away. Mammals and birds give off these gases as part of their normal breathing. Certain
chemicals in sweat also seem to attract mosquitoes.
B) By utilizing the Heat sensing capacity, the Mosquitoes can detect heat, so they can find warm-blooded
mammals and birds very easily once they get close enough. A mosquito swarm exists close to areas
with standing water.
C) They approach their prey through flying motion and slide around it to find the most favourable point. This combined effort of flying and sliding motion of the mosquitoes has been imitated in the present algorithm.
This algorithm has been used to find cluster centres among the high- and low-accuracy prediction values. The main aim of this algorithm is to cluster the high and low accuracy values and choose the high accuracy value for the kernel function [7]. The step-by-step process of this algorithm is given below.
Input: n – number of mosquitoes (i.e., kernel parameters)
1. A mosquito population is initialized with Chemical Sensors (CS) and Heat Sensors (HS). // here CS defines the fuzzification parameter and HS defines the number of iterations
2. The initial locations (x) of the n mosquitoes are generated.
3. The temperature (t) and maximum temperature (T) are initialized. // the temperature is treated as the maximum accuracy
4. Find the fitness function based on the maximum temperature.
5. Apply the sliding motion to find the fitness value.
6. Repeat over the mosquitoes by parallel or distributed processing.
7. Repeat until the maximum temperature is reached.
8. New solutions are generated by adjusting the HS and updating the locations (x).
9. The feasibility of the solution is verified and assigned by the CS.
10. Apply the flying motion.
11. Evaluate the new particles.
12. Modify the fitness function if the new fitness is better.
13. The best solution (S) is selected.
14. While t < T // maximum temperature
15. While n < the total number of mosquitoes
The best solutions are displayed. Based on the above description of the mosquito swarm process, the proposed kernel parameters are optimized, which improves the classification accuracy [8].
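The steps above are only loosely specified, so the following sketch makes explicit assumptions: "temperature" is read as fitness, the sliding motion as a small local perturbation, and the flying motion as an occasional larger move toward the best mosquito. All names and step sizes are illustrative, not taken from the paper:

```python
import random

# A loose, hedged sketch of the mosquito swarm process.
def mosquito_swarm(fitness, dim=2, n=15, T=100, slide=0.1, fly_prob=0.2):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=fitness)
    for _ in range(T):  # while t < T (maximum "temperature")
        for i, x in enumerate(pop):
            # sliding motion: small move around the current point
            cand = [v + random.uniform(-slide, slide) for v in x]
            # flying motion: occasional jump toward the best mosquito
            if random.random() < fly_prob:
                cand = [v + random.uniform(-1, 1) * (b - v)
                        for v, b in zip(cand, best)]
            if fitness(cand) < fitness(x):  # keep only if fitness improves
                pop[i] = cand
        best = min(pop + [best], key=fitness)
    return best
```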
(iv). Moth flame optimization
The Moth-Flame Optimization (MFO) algorithm was proposed in 2015 as one of the seminal attempts to simulate the navigation of moths computationally and turn it into an optimization algorithm [9]. This algorithm has been widely used in science and industry. In the MFO algorithm, it is assumed that the candidate solutions are moths and the problem's variables are the positions of the moths in the space. Therefore, the moths can fly in 1-D, 2-D, 3-D or hyper-dimensional space by changing their position vectors, since the MFO algorithm is a population-based algorithm.
It should be noted here that moths and flames are both solutions. The difference between them is the way we treat and update them in each iteration. The moths are actual search agents that move around the search space, whereas flames are the best positions of moths obtained so far. In other words, flames can be considered as flags or pins that are dropped by moths when searching the search space. Therefore, each moth searches around a flag (flame) and updates it in case of finding a better solution. With this mechanism, a moth never loses its best solution. A logarithmic spiral has been chosen as the main update mechanism of moths. However, any type of spiral can be utilized here subject to the following conditions: the spiral's initial point should start from the moth; the spiral's final point should be the position of the flame; and the fluctuation range of the spiral should not exceed the search space. Considering these points, a logarithmic spiral is defined for the MFO algorithm [10].
The spiral is defined as S(M_i, F_j) = D_i · e^{bt} · cos(2πt) + F_j, where D_i = |F_j − M_i| indicates the distance of the i-th moth from the j-th flame, b is a constant for defining the shape of the logarithmic spiral, and t is a random number in [−1, 1].
The spiral flying path of moths is simulated. As may be seen in this equation, the next position of a moth is
defined with respect to a flame. The t parameter in the spiral equation defines how much the next position of the
moth should be close to the flame (t = -1 is the closest position to the flame, while t = 1 shows the farthest).
Therefore, a hyper ellipse can be assumed around the flame in all directions and the next position of the moth
would be within this space. Spiral movement is the main component of the proposed method because it dictates
how the moths update their positions around flames. The spiral equation allows a moth to fly “around” a flame
and not necessarily in the space between them. Therefore, the exploration and exploitation of the search space
can be guaranteed.
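The spiral update described above can be sketched directly; M is a moth position, F its assigned flame, b shapes the spiral, and t is drawn from [−1, 1] (t = −1 lands closest to the flame). The value of b is an illustrative choice:

```python
import math
import random

# One logarithmic-spiral position update of a moth around its flame.
def spiral_update(M, F, b=1.0):
    t = random.uniform(-1, 1)
    return [abs(f - m) * math.exp(b * t) * math.cos(2 * math.pi * t) + f
            for m, f in zip(M, F)]
```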
The pseudo-code of the MFO algorithm is presented below:
Initialise the population of moths
Calculate the objective values
while not stop criterion
   Update the number of flames (FlameNumber)
   for all moths
      for all parameters
         Update r and t
         Calculate D with respect to the corresponding moth
         Update the matrix M with respect to the corresponding moth
      end
   end
   Calculate the objective values
   Update flames
end
(v). Bat algorithm
BA is a recent optimization algorithm based on swarm intelligence, inspired by the echolocation behavior of bats. One of the issues in the standard bat algorithm is the premature convergence that can occur due to the low exploration ability of the algorithm under some conditions [11]. To overcome this deficiency, directional echolocation is introduced into the standard bat algorithm to enhance its exploration and exploitation capabilities. In addition to directional echolocation, three other improvements have been embedded into the standard bat algorithm to enhance its performance. The new approach, namely the directional Bat Algorithm (dBA), has then been tested using several standard and non-standard benchmarks from the CEC2005 benchmark suite. The performance of dBA has been compared with ten other algorithms and BA variants using non-parametric statistical tests. The statistical test results show the superiority of the directional bat algorithm.
The Bat algorithm is a metaheuristic algorithm for global optimization. It was inspired by the echolocation
behaviour of microbats, with varying pulse rates of emission and loudness. Search is intensified by a
local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a
frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between
exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm.
Bat algorithm is a swarm-intelligence-based algorithm, inspired by the echolocation behavior of microbats. BA
automatically balances exploration (long-range jumps around the global search space to avoid getting stuck
around one local maximum) with exploitation (searching in more detail around known good solutions to find
local maxima) by controlling loudness and pulse emission rates of simulated bats in the multi-dimensional
search space.
A detailed introduction of metaheuristic algorithms including the bat algorithm is given by Yang where a demo
program in MATLAB/GNU Octave is available, while a comprehensive review is carried out by Parpinelli and
Lopes. A further improvement is the development of an evolving bat algorithm (EBA) with better efficiency.
Recently, the bat-inspired algorithm has been proposed as a population-based algorithm that mimics the echolocation system of microbats. The main drawback of the bat-inspired algorithm is its inability to preserve diversity during the search, and thus premature convergence can take place. The sensitivity of the main parameters of the island bat-inspired algorithm has been well studied to show their effect on the convergence properties.
For comparative evaluation, island bat-inspired algorithm is compared with 17 competitive methods and shows
very successful outcomes. Furthermore, the proposed algorithm is applied for three real-world cases of economic
load dispatch problem where the results obtained prove considerable efficiency in comparison with other state-
of-the-art methods.
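The frequency-tuning, loudness and pulse-rate mechanics described above can be sketched as follows. The parameter defaults are common illustrative choices rather than values fixed by the cited studies, and the velocity term is written in the attractive form (pulling each bat toward the current best):

```python
import math
import random

# A sketch of the standard bat algorithm on a minimisation problem.
def bat_algorithm(fitness, dim=2, n=20, T=200,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9):
    x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    A = [1.0] * n   # loudness
    r = [0.5] * n   # pulse emission rate
    best = min(x, key=fitness)[:]
    for t in range(1, T + 1):
        for i in range(n):
            f = fmin + (fmax - fmin) * random.random()  # frequency tuning
            v[i] = [vi + (bi - xi) * f
                    for vi, xi, bi in zip(v[i], x[i], best)]
            cand = [xi + vi for xi, vi in zip(x[i], v[i])]
            if random.random() > r[i]:  # local random walk around the best
                cand = [bi + 0.01 * random.gauss(0, 1) for bi in best]
            # accept improving moves with probability tied to loudness,
            # then reduce loudness and raise the pulse rate
            if fitness(cand) < fitness(x[i]) and random.random() < A[i]:
                x[i] = cand
                A[i] *= alpha
                r[i] = 0.5 * (1 - math.exp(-gamma * t))
            if fitness(x[i]) < fitness(best):
                best = x[i][:]
    return best
```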
(vi). Firefly algorithm
The firefly algorithm (FA) is one of the latest swarm intelligence algorithms created by Yang. In the
work presented by Gao et al. (2015), an improved particle filter based on FA is proposed to solve a main
handicap of the particle filter. The particles in the particle filter are optimized using FA before resampling [12].
As is known, particle filter algorithm has been proven to be a powerful tool in solving visual tracking problems.
However, the problem of sample impoverishment that is brought by the procedure of resampling is a main
obstacle of the particle filter. Thus, the proposed method in this work is to increase the number of meaningful
particles, and the particles can approximate the true state of the target more accurately.
FA is classified as a swarm-intelligence, metaheuristic, nature-inspired method, developed by Yang in 2008 by imitating the characteristic behaviors of fireflies. The population of fireflies shows characteristic luminary flashing to attract partners, to communicate, and to warn predators of risk. Inspired by those activities, Yang formulated this method under the assumptions that all fireflies are unisexual, so that every firefly can attract every other, and that attractiveness is directly proportional to the brightness level of the individual [13]. Hence, brighter fireflies attract the less bright ones, which move toward them; if no firefly is brighter than a given firefly, it moves randomly.
In the formulation of the firefly algorithm, the objective function is associated with the flashing-light characteristics of the firefly population. Considering the physical principle that light intensity decreases with the square of the distance, this principle makes it possible to define a fitting function of the distance between any two fireflies. To optimize the fitting function, the individuals are forced into systematic or random moves within the population. In this way, it is ensured that all the fireflies move toward more attractive ones with brighter flashing, until the population converges to the brightest one. Within this procedure, the firefly algorithm is driven by three parameters: attractiveness, randomization, and absorption. The attractiveness parameter is based on the light intensity between two fireflies and is defined with an exponential function. When this parameter is set to zero, the movement reduces to a random walk governed by the randomization parameter, which is determined by drawing numbers from a Gaussian distribution over the interval. On the other hand, the absorption parameter affects the value of the attractiveness parameter as it varies from zero to infinity; in the limit of infinity, the movement of the fireflies again becomes a random walk.
In mathematical optimization, the firefly algorithm is a metaheuristic proposed by Xin-She Yang and inspired by
the flashing behavior of fireflies.
In pseudocode the algorithm can be stated as:
While (t < MaxGeneration)
   for i = 1 : n (all n fireflies)
      for j = 1 : i (n fireflies)
         if (I_j > I_i),
            Vary attractiveness with distance r via exp(−γ r);
            Move firefly i towards j;
            Evaluate new solutions and update light intensity;
         end if
      end for j
   end for i
   Rank fireflies and find the current best;
end while
Post-processing the results and visualization;
End
In essence, FA uses the following three idealized rules:
1. Fireflies are unisexual so that one firefly will be attracted to other fireflies regardless of their sex.
2. The attractiveness is proportional to the brightness, and both decrease as the distance increases. Thus, for any two flashing fireflies, the less bright one will move toward the brighter one. If there is no firefly brighter than a particular firefly, it will move randomly.
3. The brightness of a firefly is determined by the landscape of the objective function.
The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by
x_i^{t+1} = x_i^t + β_0 e^{−γ r_ij²} (x_j^t − x_i^t) + α ε_i^t
where β_0 is the attractiveness at the distance r = 0, and the second term is due to the attraction. The third term is randomization, with α being the randomization parameter and ε_i^t a vector of random numbers drawn from a Gaussian or uniform distribution at time t [14]. If β_0 = 0, it becomes a simple random walk. Furthermore, the randomization ε_i^t can easily be extended to other distributions such as Lévy flights.
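The movement rule above translates directly into code. This is a minimal sketch on a two-dimensional minimisation problem; the values of β_0, γ, α and the annealing factor are illustrative assumptions, with γ kept small so that attraction is felt across the whole domain:

```python
import math
import random

# A sketch of the firefly algorithm on a 2-D sphere function.
def firefly(fitness, dim=2, n=15, T=100, beta0=1.0, gamma=0.01, alpha=0.2):
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(T):
        for i in range(n):
            for j in range(n):
                if fitness(xs[j]) < fitness(xs[i]):  # j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    xs[i] = [a + beta * (b - a) + alpha * random.gauss(0, 1)
                             for a, b in zip(xs[i], xs[j])]
        alpha *= 0.97  # gradually reduce the randomization term
    return min(xs, key=fitness)
```

Note that the brightest firefly never moves, so the best solution found is never lost.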
(vii). Frog leaping algorithm
The shuffled frog-leaping algorithm (SFLA) has been developed for solving combinatorial optimization
problems. The SFLA is a population-based cooperative search metaphor inspired by natural memetics. The
algorithm contains elements of local search and global information exchange. The SFLA consists of a set of
interacting virtual population of frogs partitioned into different memeplexes. The virtual frogs act as hosts or
carriers of memes where a meme is a unit of cultural evolution. The algorithm performs simultaneously an
independent local search in each memeplex [15]. The local search is completed using a particle swarm
optimization-like method adapted for discrete problems but emphasizing a local search. To ensure global
exploration, the virtual frogs are periodically shuffled and reorganized into new memeplexes in a technique similar to that used in the shuffled complex evolution algorithm. In addition, to provide the opportunity for random generation of improved information, random virtual frogs are generated and substituted into the population. The algorithm has been tested on several test functions that present difficulties common to many
global optimization problems. The effectiveness and suitability of this algorithm have also been demonstrated by
applying it to a groundwater model calibration problem and a water distribution system design problem.
Compared with a genetic algorithm, the experimental results in terms of the likelihood of convergence to a
global optimal solution and the solution speed suggest that the SFLA can be an effective tool for solving
combinatorial optimization problems.
SFLA is a recently popular meta-heuristic based on the memetic evolution of a group of frogs seeking the location that has the maximum amount of available food. Proposed in 2003, SFLA combines the merits of the memetic algorithm (MA) and PSO. In SFLA, the population consists of a group of frogs (solutions) that is partitioned into subsets (memeplexes). Different memeplexes, each performing a local search, are considered different cultures (memes) of frogs. The SFLA is described as follows: assume that the initial population is formed by P randomly generated frogs. For L-dimensional problems (L variables), a frog i is represented as X_i = (x_i1, x_i2, …, x_iL). Afterwards, the frogs are sorted in descending order according to their fitness. Then, the entire population is divided into m memeplexes, each containing n frogs (i.e., P = m × n). In this process, the first frog goes to the first memeplex, the second frog goes to the second memeplex, frog m goes to the m-th memeplex, frog m+1 goes back to the first memeplex, and so on. Within each memeplex, the frogs with the best and the worst fitness are denoted by X_b and X_w. The frog with the globally best fitness is identified as X_g. Here, a process similar to PSO is applied to improve X_w (not all frogs) in each small group.
The position of the worst frog is updated as
D_i = rand · (X_b − X_w),  X_w(new) = X_w + D_i,  with −D_max ≤ D_i ≤ D_max,
where i = 1, 2, …, N_gen; D is the movement of a frog, whereas D_max represents the maximum permissible movement of a frog in the feasible domain; N_gen is the maximum number of generations of evolution in each subset. The old frog is replaced if the evolution produces a better solution; otherwise the update is repeated with X_b replaced by X_g (the global optimum). If still no improvement is observed, then a random frog is generated and replaces the old frog. This process of evolution continues until the termination criterion is met. Shuffling process: the frogs are again shuffled and sorted to complete the round of evolution, and the same steps are followed until the termination condition is met.
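The memeplex update just described can be sketched as follows; the population sizes, D_max and the domain bounds are illustrative assumptions:

```python
import random

# A sketch of SFLA: the worst frog Xw in each memeplex moves toward the
# best frog Xb; if that fails, toward the global best Xg; failing both,
# it is replaced by a random frog.
def sfla(fitness, dim=2, m=5, n=6, T=50, n_gen=5, dmax=2.0):
    P = m * n
    frogs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(P)]
    for _ in range(T):
        frogs.sort(key=fitness)  # best first (minimisation)
        xg = frogs[0]
        # deal frogs into m memeplexes: 1st to 1st, 2nd to 2nd, ...
        memeplexes = [[frogs[i] for i in range(k, P, m)] for k in range(m)]
        for mem in memeplexes:
            for _ in range(n_gen):
                mem.sort(key=fitness)
                xb, xw = mem[0], mem[-1]
                for leader in (xb, xg):
                    # D = rand * (Xb - Xw), clamped to [-Dmax, Dmax]
                    d = [max(-dmax, min(dmax, random.random() * (lb - w)))
                         for lb, w in zip(leader, xw)]
                    cand = [w + di for w, di in zip(xw, d)]
                    if fitness(cand) < fitness(xw):
                        mem[-1] = cand
                        break
                else:  # no improvement: replace with a random frog
                    mem[-1] = [random.uniform(-5, 5) for _ in range(dim)]
        frogs = [f for mem in memeplexes for f in mem]  # shuffle back
    return min(frogs, key=fitness)
```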
Like all heuristics, parameter selection is critical to SFLA performance. SFLA has five parameters: the number
m of memeplexes, the number n of frogs in a memeplex, the number q of frogs in a submemeplex, the number N
of evolution or infection steps in a memeplex between two successive shufflings and the maximum step size
Smax allowed during an evolutionary step. At this point in the development of this meta-heuristic, no clear
theoretical basis is available to dictate parameter value selection. Based on experience, the sample size F (the
number m of memeplexes multiplied by the number n of frogs in each memeplex), in general, is the most
important parameter. An appropriate value for F is related to the complexity of the problem. The probability of
locating the global (or near-global) optima increases with increasing sample size. However, as the sample size
increases, the number of function evaluations to reach the goal increases, hence making it more computationally
burdensome. In addition, when selecting m, it is important to make sure that n is not too small. If there are too
few frogs in each memeplex, the advantage of the local memetic evolution strategy is lost [16]. The response of
the algorithm performance to q is that, when too few frogs are selected in a submemeplex, the information
exchange is slow, resulting in longer solution times. On the other hand, when too many frogs are selected, the
frogs are infected by unwanted ideas that trigger the censorship phenomenon that tends to lengthen the search
period. The fourth parameter, N, can take any value greater than 1. If N is small, the memeplexes will be
shuffled frequently, reducing idea exchange on the local scale. On the other hand, if N is large, each memeplex
will be shrunk into a local optimum. The fifth parameter, Smax, is the maximum step size allowed to be adopted
by a frog after being infected. Smax actually serves as a constraint to control the SFLA’s global exploration
ability. Setting Smax to a small value reduces the global exploration ability making the algorithm tend to be a
local search. On the other hand, a large Smax may result in missing the actual optima, as it is not fine-tuned.
Although, at present, there is no guidance to select proper values for the parameters, experimental results
presented in the next section provide a better understanding of the parameters’ impacts.
(viii). Cuckoo search algorithm
Cuckoo search is an optimization algorithm developed by Xin-She Yang and Suash Deb in 2009. It was inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of host birds of other species. Some host birds can engage in direct conflict with the intruding cuckoos [17]. For example, if a host bird discovers the eggs are not its own, it will either throw these alien eggs away or simply abandon its nest and build a new nest elsewhere. Some cuckoo species, such as the New World brood-parasitic Tapera, have evolved in such a way that female parasitic cuckoos are often very specialized in mimicking the colors and patterns of the eggs of a few chosen host species. Cuckoo search idealizes such breeding behavior, and thus can be applied to various optimization problems.
Each egg in a nest represents a solution, and a cuckoo egg represents a new solution. The aim is to use the new
and potentially better solutions (cuckoos) to replace a not-so-good solution in the nests. In the simplest form,
each nest has one egg. The algorithm can be extended to more complicated cases in which each nest has multiple
eggs representing a set of solutions.
CS is based on three idealized rules:
1. Each cuckoo lays one egg at a time, and dumps its egg in a randomly chosen nest.
2. The best nests with high-quality eggs will carry over to the next generation.
3. The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability p_a ∈ (0, 1). Discovery operates on a set of the worst nests, and discovered solutions are dropped from further calculations [18].
In addition, Yang and Deb discovered that the random-walk style search is performed better by Lévy flights than by a simple random walk.
The pseudo-code can be summarized as:
Objective function: f(x), x = (x_1, x_2, …, x_d);
Generate an initial population of n host nests;
While (t < MaxGeneration) or (stop criterion)
   Get a cuckoo randomly (say, i) and replace its solution by performing Lévy flights;
   Evaluate its quality/fitness F_i [for maximization, F_i ∝ f(x_i)];
   Choose a nest among n (say, j) randomly;
   if (F_i > F_j),
      Replace j by the new solution;
   end if
   A fraction (p_a) of the worse nests are abandoned and new ones are built;
   Keep the best solutions/nests;
   Rank the solutions/nests and find the current best;
   Pass the current best solutions to the next generation;
end while
An important advantage of this algorithm is its simplicity. In fact, compared with other population- or agent-
based metaheuristic algorithms such as particle swarm optimization and harmony search, there is essentially only
a single parameter pa in CS (apart from the population size n). Therefore, it is very easy to implement.
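As a rough illustration of the pseudo-code above, the following minimal Python sketch minimises a test function with Lévy-flight moves and abandonment of a fraction pa of the worst nests. The step scaling of 0.01 and the bound handling are common implementation choices, not prescribed by the algorithm description:

```python
import numpy as np
from math import gamma, sin, pi

def cuckoo_search(f, dim, n=15, pa=0.25, max_gen=200, lb=-5.0, ub=5.0, beta=1.5):
    """Minimise f with a basic cuckoo search (Levy flights + nest abandonment)."""
    rng = np.random.default_rng(0)
    nests = rng.uniform(lb, ub, (n, dim))
    fitness = np.array([f(x) for x in nests])
    # Mantegna scale factor for Levy-distributed steps
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(max_gen):
        best = nests[fitness.argmin()]
        # Get a cuckoo randomly and generate a new solution by a Levy flight
        i = rng.integers(n)
        step = rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)
        new = np.clip(nests[i] + 0.01 * step * (nests[i] - best), lb, ub)
        # Replace a randomly chosen nest j if the new egg is better
        j = rng.integers(n)
        if f(new) < fitness[j]:
            nests[j], fitness[j] = new, f(new)
        # A fraction pa of the worse nests are abandoned and rebuilt
        worst = fitness.argsort()[-int(pa * n):]
        nests[worst] = rng.uniform(lb, ub, (len(worst), dim))
        fitness[worst] = [f(x) for x in nests[worst]]
    return nests[fitness.argmin()], fitness.min()

# Example: minimise the sphere function in 3 dimensions
best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=3)
```

Because the best nest is never abandoned, the best fitness is monotonically non-increasing over generations.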
Compliance Engineering Journal
Volume 10, Issue 8, 2019
ISSN NO: 0898-3577
Page No: 359
(ix). Fish flow optimisation algorithm (FSOA)
In the development of the FSOA, the following characteristics are considered (Li et al. 2002; Madeiro,
2010): (i) each fish represents a candidate solution of the optimization problem; (ii) food density is related to the
objective function to be optimized (in an optimization problem, the amount of food in a region is inversely
proportional to the value of the objective function); and (iii) the aquarium is the design space where the fish can
be found [19].
As noted earlier, the weight of a fish in the swarm represents the accumulation of food (i.e., objective-function
improvement) received during the evolutionary process. In this case, the weight is an indicator of success (Li et al. 2002;
Madeiro, 2010). Basically, the FSOA presents four operators that can be classified as "search" and "movement"
operators. Details on each of these operators are given next.
Individual Movement Operator
This operator contributes to the individual and collective movements of the fishes in the swarm. Each fish
updates its position using Equation (1):

x_i(t+1) = x_i(t) + rand(-1, 1) · s_ind    (1)

where x_i is the position of fish i at the current generation, rand(-1, 1) is a random number generator and s_ind
is a weighting parameter (the individual step).
Food Operator
The weight of each fish is a metaphor used to measure the success of its food search. The higher the weight of a
fish, the more likely it is to be in a potentially interesting region of the design space. According to Madeiro (2010),
the amount of food a fish eats depends on the improvement in its objective function in the current
generation. The weight is updated according to Equation (2):

W_i(t+1) = W_i(t) + Δf_i / max(|Δf_i|)    (2)

where W_i(t) is the weight of fish i at generation t and Δf_i is the difference in the objective function between the
current position and the new position of fish i. It is important to emphasize that Δf_i = 0 for fishes that stay in the
same position.
Instinctive collective movement operator
This operator is important for the individual movement of the fishes when Δf_i ≠ 0. Thus, only the fishes whose
individual movement resulted in an improvement of their fitness will influence the direction of
motion of the school, resulting in the instinctive collective movement. In this case, the resulting direction m(t),
calculated from the contribution of the directions taken by the fish, and the new position of the i-th fish are given
by:

m(t) = Σ_i Δx_i Δf_i / Σ_i Δf_i,    x_i(t+1) = x_i(t) + m(t)
It is important to emphasize that, in the application of this operator, the direction chosen by the fish that located
the largest portion of food exerts the greatest influence on the swarm [20]. Therefore, the instinctive collective
movement operator tends to guide the swarm in the direction of motion chosen by the fish that found the largest
portion of food in its individual movement.
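A minimal sketch of one FSOA iteration, assuming the standard Fish School Search forms of the individual movement and feeding operators (the function name fsoa_step and the step parameter s_ind are illustrative):

```python
import numpy as np

def fsoa_step(positions, weights, f, s_ind=0.1, rng=None):
    """One sketch iteration: individual movement then feeding (weight update).

    Assumes the standard Fish School Search forms:
      x_i(t+1) = x_i(t) + rand(-1, 1) * s_ind          (individual movement)
      W_i(t+1) = W_i(t) + delta_f_i / max|delta_f_i|   (feeding)
    Moves are only kept when they improve the objective (minimisation).
    """
    rng = rng or np.random.default_rng()
    old_f = np.array([f(x) for x in positions])
    trial = positions + rng.uniform(-1, 1, positions.shape) * s_ind
    new_f = np.array([f(x) for x in trial])
    improved = new_f < old_f                             # only successful moves kept
    positions = np.where(improved[:, None], trial, positions)
    delta = np.where(improved, old_f - new_f, 0.0)       # food gained; 0 if unmoved
    if delta.max() > 0:
        weights = weights + delta / delta.max()          # feeding operator
    return positions, weights
```

Since only improving moves are kept, weights never decrease in this sketch; the full FSOA adds the collective-instinctive and volitive operators on top of this step.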
(x). Bacterial Foraging Optimization Algorithm
The Bacterial Foraging Optimization Algorithm (BFOA), proposed by Kevin Passino (2002), is a
newcomer to the family of nature-inspired optimization algorithms. Its key idea is the application of the group
foraging strategy of a swarm of E. coli bacteria to multi-optimal function optimization. Bacteria search for
nutrients in a manner that maximizes the energy obtained per unit time [21]. An individual bacterium also
communicates with others by sending signals, and a bacterium takes foraging decisions after considering these
two factors. The process in which a bacterium moves by taking small steps while searching for nutrients is
called chemotaxis, and the key idea of BFOA is mimicking the chemotactic movement of virtual bacteria in the
problem search space.
p : Dimension of the search space,
S : Total number of bacteria in the population,
Nc : The number of chemotactic steps,
Ns : The swimming length.
Nre : The number of reproduction steps,
Ned : The number of elimination-dispersal events,
Ped : Elimination-dispersal probability,
C(i): The size of the step taken in the random direction specified by the tumble.
Foraging theory is based on the assumption that animals search for and obtain nutrients in a way that
maximizes their energy intake E per unit time T spent foraging. Hence, they try to maximize a function like
E/T (or they maximize their long-term average rate of energy intake). Maximization of such a function provides
nutrient sources to survive and additional time for other important activities (e.g., fighting, fleeing, mating,
reproducing, sleeping, or shelter building). Shelter building and mate finding activities sometimes bear
similarities to foraging. Clearly, foraging is very different for different species. Herbivores generally find food
easily but must eat a lot of it. Carnivores generally find it difficult to locate food but do not have to eat as much
since their food is of high energy value. The “environment” establishes the pattern of nutrients that are available
(e.g., via which other organisms nutrients are available, geological constraints such as rivers and mountains,
and weather patterns), and it places constraints on obtaining that food (e.g., small portions of food may be
separated by large distances). During foraging there can be risks due to predators, the prey may be mobile so it
must be chased, and the physiological characteristics of the forager constrain its capabilities and ultimate success.
Bacterial foraging optimization theory is explained by the following steps:
- Chemotaxis
- Swarming
- Reproduction
- Elimination-Dispersal

Chemotaxis
This process simulates the movement of an E.coli cell through swimming and tumbling via flagella.
Biologically, an E. coli bacterium can move in two different ways: it can swim for a period of time in the same
direction, or it may tumble, and it alternates between these two modes of operation for its entire lifetime [22].
Suppose θ_i(j, k, l) represents the i-th bacterium at the j-th chemotactic, k-th reproductive and l-th elimination-
dispersal step. C(i) is the size of the step taken in the random direction specified by the tumble (the run-length unit).
Taxonomy: The Bacterial Foraging Optimization Algorithm belongs to the field of Bacteria Optimization
Algorithms and Swarm Optimization, and more broadly to the fields of Computational Intelligence and
Metaheuristics. It is related to other Bacteria Optimization Algorithms such as the Bacteria Chemotaxis
Algorithm [Muller2002], and other Swarm Intelligence algorithms such as Ant Colony Optimization and Particle
Swarm Optimization.
Inspiration: The Bacterial Foraging Optimization Algorithm is inspired by the group foraging behavior of
bacteria such as E.coli and M.xanthus. Specifically, the BFOA is inspired by the chemotaxis behavior of bacteria
that will perceive chemical gradients in the environment (such as nutrients) and move toward or away from
specific signals.
Metaphor: Bacteria perceive the direction to food based on the gradients of chemicals in their environment.
Similarly, bacteria secrete attracting and repelling chemicals into the environment and can perceive each other in
a similar way. Bacterial cells are treated like agents in an environment, using their perception of food and other
cells as motivation to move, and stochastic tumbling and swimming like movement to re-locate. Depending on
the cell-cell interactions, cells may swarm a food source, and/or may aggressively repel or ignore each other.
Strategy: The information processing strategy of the algorithm is to allow cells to stochastically and collectively
swarm toward optima. This is achieved through a series of three processes on a population of simulated cells: 1)
'Chemotaxis' where the cost of cells is derated by the proximity to other cells and cells move along the
manipulated cost surface one at a time (the majority of the work of the algorithm), 2) 'Reproduction' where only
those cells that performed well over their lifetime may contribute to the next generation, and 3) 'Elimination-
dispersal' where cells are discarded and new random samples are inserted with a low probability. The algorithm
was designed for application to continuous function optimization problem domains. Given the loops in the
algorithm, it can be configured numerous ways to elicit different search behavior. It is common to have a large
number of chemotaxis iterations, and small numbers of the other iterations.
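The chemotaxis loop described above (tumble to a random direction, then swim while the cost keeps improving, up to Ns steps) can be sketched for a single bacterium as follows; the helper name and default values are illustrative:

```python
import numpy as np

def chemotaxis_step(theta, f, C=0.1, Ns=4, rng=None):
    """One tumble-and-swim chemotactic move for a single bacterium.

    theta : current position of the bacterium
    C     : step size C(i) taken in the random direction given by the tumble
    Ns    : maximum swim length while the cost keeps improving
    """
    rng = rng or np.random.default_rng()
    delta = rng.uniform(-1, 1, theta.shape)
    direction = delta / np.linalg.norm(delta)   # unit vector from the tumble
    cost = f(theta)
    for _ in range(Ns):                         # swim while the cost improves
        trial = theta + C * direction
        if f(trial) < cost:
            theta, cost = trial, f(trial)
        else:
            break                               # stop swimming, next tumble
    return theta, cost
```

The full BFOA wraps this step in the reproduction and elimination-dispersal loops (Nre and Ned iterations), and may add the cell-to-cell swarming term to the cost.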
(xi). Flower Pollination Algorithm
The flower pollination algorithm (FPA) is a novel optimization technique derived from the pollination
behavior of flowers. However, the shortcomings of the FPA, such as a tendency towards premature convergence
and poor exploitation ability, confine its application in engineering problems. To further strengthen FPA
optimization performance, an orthogonal learning (OL) strategy based on orthogonal experiment design (OED)
is embedded into the local pollination operator [23]. OED can predict the optimal factor level combination by
constructing a smaller but representative test set based on an orthogonal array. Using this characteristic of OED,
the OL strategy can extract a promising solution from various sources of experience information, which leads the
population to a potentially reasonable search direction. Moreover, the catfish effect mechanism is introduced to
focus on the worst individuals during the iteration process. This mechanism explores new valuable information
and maintains superior population diversity [24]. Experimental results on benchmark functions show that the
proposed algorithm significantly enhances the performance of the basic FPA and offers stronger competitiveness
than several state-of-the-art algorithms.
The flower pollination algorithm (FPA) is a swarm intelligence optimization algorithm proposed by Yang to
simulate flower pollination. Dynamic control of the balance between global search and local search is realized
by adjusting the parameter P. In nature, flower pollination is achieved through cross-pollination or self-
pollination, and the position of the pollinator is random or nearly random during pollination. In order to simulate
flower pollination, the following four rules are set.
Rule 1: Biotic cross-pollination can be recognized as global pollination, where the pollinators follow a Lévy
distribution.
Rule 2: Abiotic self-pollination can be interpreted as local pollination.
Rule 3: The flower constancy property can be considered as a reproduction ratio that is proportional to the
degree of similarity between two flowers.
Rule 4: Due to physical proximity and wind, local pollination has a slight advantage over global pollination;
both are controlled by a switch probability P ∈ [0, 1].
In global pollination, the fittest reproduction is ensured through insects that can travel long distances [25]. If the
fittest solution is represented as g*, flower constancy and the first rule can be mathematically formulated as

x_i(t+1) = x_i(t) + γ L(λ) (g* − x_i(t))    (1)

where x_i(t) is the solution vector at iteration t, g* is the best solution found so far, γ represents the step-size
scaling factor, and L is the pollination strength (the step size). The insect's long moves can be mimicked using
Lévy flight; for this reason, the step size L is drawn from the Lévy distribution

L ~ (λ Γ(λ) sin(πλ/2) / π) · 1/s^(1+λ)    (2)

where λ = 1.5 and Γ represents the standard Gamma function. The local pollination based on Rule 2 can be
formulated as

x_i(t+1) = x_i(t) + ε (x_j(t) − x_k(t))    (3)

where x_j(t) and x_k(t) are pollens (solution vectors) transferred from different flowers of the same plant
species; this simulates flower constancy in a small neighborhood. The variable ε is drawn from a uniform
distribution on [0, 1]. The pollination process can be either local or global, so a switch probability P is
introduced to switch between the two types of pollination (Rule 4).
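Putting the four rules together, a minimal FPA sketch for minimisation might look as follows; the Lévy-step scaling of 0.1 and the greedy replacement are common implementation choices rather than part of the formal description:

```python
import numpy as np
from math import gamma, sin, pi

def flower_pollination(f, dim, n=20, p=0.8, max_iter=300, lb=-5.0, ub=5.0, lam=1.5):
    """Minimise f with basic FPA: switch between global (Levy) and local pollination."""
    rng = np.random.default_rng(0)
    pop = rng.uniform(lb, ub, (n, dim))
    fit = np.array([f(x) for x in pop])
    g = pop[fit.argmin()].copy()                        # best flower found so far
    # Mantegna scale factor for Levy-distributed steps with exponent lam
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    for _ in range(max_iter):
        for i in range(n):
            if rng.random() < p:    # global pollination via a Levy flight (Rule 1)
                L = rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / lam)
                trial = pop[i] + 0.1 * L * (g - pop[i])
            else:                   # local pollination between two flowers (Rule 2)
                j, k = rng.choice(n, 2, replace=False)
                trial = pop[i] + rng.random() * (pop[j] - pop[k])
            trial = np.clip(trial, lb, ub)
            ft = f(trial)
            if ft < fit[i]:         # greedy replacement (flower constancy, Rule 3)
                pop[i], fit[i] = trial, ft
                if ft < f(g):
                    g = trial.copy()
    return g, f(g)

# Example: minimise the sphere function in 2 dimensions
g, fg = flower_pollination(lambda x: float(np.sum(x ** 2)), dim=2)
```

The switch probability p = 0.8 biases the search toward global Lévy moves, matching Yang's observation that a high global-pollination rate works well on most test functions.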
(xii). Genetic Algorithm
Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger class of
evolutionary algorithms. Genetic algorithms are based on the ideas of natural selection and genetics [26]. They
are an intelligent exploitation of random search, provided with historical data to direct the search into the region
of better performance in the solution space. They are commonly used to generate high-quality solutions for
optimization and search problems.
Genetic algorithms simulate the process of natural selection, in which the species that can adapt to changes in
their environment are able to survive, reproduce and pass to the next generation. In simple words, they simulate
“survival of the fittest” among individuals of consecutive generations to solve a problem. Each generation
consists of a population of individuals, and each individual represents a point in the search space and a possible
solution. Each individual is represented as a string of characters/integers/floats/bits; this string is analogous to
the chromosome.
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process
of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are
commonly used to generate high-quality solutions to optimization and search problems by relying on bio-
inspired operators such as mutation, crossover and selection [27]. Problems which appear to be particularly
appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many
scheduling software packages are based on GAs. It has also been applied to engineering. Genetic algorithms are
often applied as an approach to solve global optimization problems.
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness
landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away
from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used
crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall
genetic algorithm process.
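A minimal GA sketch showing the selection, crossover and mutation operators on bitstrings, applied here to the OneMax problem (maximise the number of 1-bits); the tournament size, rates and population size are illustrative defaults:

```python
import random

def genetic_algorithm(fitness, n_bits=20, n_pop=50, n_gen=100, cx=0.9, mut=0.02):
    """Maximise fitness(bitstring) with tournament selection,
    one-point crossover and bit-flip mutation."""
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_pop)]
    for _ in range(n_gen):
        scores = [fitness(ind) for ind in pop]
        def tournament():
            # Binary tournament: the fitter of two random individuals wins
            i, j = random.randrange(n_pop), random.randrange(n_pop)
            return pop[i] if scores[i] >= scores[j] else pop[j]
        children = []
        while len(children) < n_pop:
            p1, p2 = tournament(), tournament()
            if random.random() < cx:                 # one-point crossover
                pt = random.randrange(1, n_bits)
                c1, c2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                       # bit-flip mutation
                children.append([b ^ 1 if random.random() < mut else b for b in c])
        pop = children[:n_pop]
    return max(pop, key=fitness)

# OneMax: fitness is simply the count of 1-bits
best = genetic_algorithm(sum)
```

Mutation alone supplies the ergodicity noted above, while crossover recombines good building blocks from two parents.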
(xiii). Particle swarm optimization
Particle swarm optimization (PSO) is a computational method that optimizes a problem
by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a
problem by having a population of candidate solutions, here dubbed particles, and moving these particles around
in the search-space according to simple mathematical formulae over the particle's position and velocity [28].
Each particle's movement is influenced by its local best known position, but is also guided toward the best
known positions in the search-space, which are updated as better positions are found by other particles. This is
expected to move the swarm toward the best solutions.
PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search
very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee an optimal
solution is ever found. Also, PSO does not use the gradient of the problem being optimized, which means PSO
does not require that the optimization problem be differentiable as is required by classic optimization methods
such as gradient descent and quasi-Newton methods.
Formally, let f : ℝ^n → ℝ be the cost function which must be minimized. The function takes a candidate solution
as an argument in the form of a vector of real numbers and produces a real number as output which indicates the
objective function value of the given candidate solution [29]. The gradient of f is not known. The goal is to find a
solution a for which f(a) ≤ f(b) for all b in the search-space, which would mean a is the global minimum.
Let S be the number of particles in the swarm, each having a position xi ∈ ℝ^n in the search-space and a
velocity vi ∈ ℝ^n. Let pi be the best known position of particle i and let g be the best known position of the entire
swarm. A basic PSO algorithm is then:
for each particle i = 1, ..., S do
    Initialize the particle's position with a uniformly distributed random vector: xi ~ U(blo, bup)
    Initialize the particle's best known position to its initial position: pi ← xi
    if f(pi) < f(g) then
        update the swarm's best known position: g ← pi
    Initialize the particle's velocity: vi ~ U(-|bup - blo|, |bup - blo|)
while a termination criterion is not met do:
    for each particle i = 1, ..., S do
        for each dimension d = 1, ..., n do
            Pick random numbers: rp, rg ~ U(0, 1)
            Update the particle's velocity: vi,d ← ω vi,d + φp rp (pi,d - xi,d) + φg rg (gd - xi,d)
        Update the particle's position: xi ← xi + vi
        if f(xi) < f(pi) then
            Update the particle's best known position: pi ← xi
            if f(pi) < f(g) then
                Update the swarm's best known position: g ← pi
The values blo and bup represent the lower and upper boundaries of the search-space. The termination criterion
can be the number of iterations performed, or the discovery of a solution with an adequate objective function
value. The parameters ω, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the
PSO method.
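The pseudo-code above translates almost line-for-line into Python. The inertia and acceleration values below (ω = 0.729, φp = φg ≈ 1.494) are the commonly used constriction-style defaults, not mandated by the basic algorithm:

```python
import numpy as np

def pso(f, dim, S=30, iters=200, w=0.729, phi_p=1.49445, phi_g=1.49445,
        blo=-5.0, bup=5.0):
    """Minimise f with the basic PSO loop given in the pseudo-code above."""
    rng = np.random.default_rng(0)
    x = rng.uniform(blo, bup, (S, dim))                          # positions
    v = rng.uniform(-abs(bup - blo), abs(bup - blo), (S, dim))   # velocities
    p = x.copy()                                                 # personal bests
    pf = np.array([f(xi) for xi in x])
    g = p[pf.argmin()].copy()                                    # swarm best
    gf = pf.min()
    for _ in range(iters):
        rp, rg = rng.random((S, dim)), rng.random((S, dim))
        v = w * v + phi_p * rp * (p - x) + phi_g * rg * (g - x)  # velocity update
        x = x + v                                                # position update
        fx = np.array([f(xi) for xi in x])
        better = fx < pf                                         # improved personal bests
        p[better], pf[better] = x[better], fx[better]
        if pf.min() < gf:
            g, gf = p[pf.argmin()].copy(), pf.min()
    return g, gf

# Example: minimise the sphere function in 3 dimensions
g, gf = pso(lambda xi: float(np.sum(xi ** 2)), dim=3)
```

The vectorised form updates all S particles at once instead of looping over dimensions, but is otherwise the same algorithm.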
Table 1. List of nature-inspired algorithms with input parameters, evolutionary strategies and applications [30].
For each algorithm, the entries below give the input parameters, the evolutionary mechanism and the applied
application areas.
1. Genetic Algorithm (GA)
   Input parameters: crossover rate and mutation rate.
   Evolutionary mechanism: selection, recombination and mutation.
   Applications: machine learning, code breaking, computer-automated design, computer architecture,
   Bayesian inference, forensic science, data center/server farm, file allocation for a distributed system,
   game theory, robot behavior, etc.
2. Ant Colony Optimization (ACO)
   Input parameters: τ(e), the pheromone level of a path e (edge); ρ, the evaporation rate.
   Evolutionary mechanism: pheromone update τ(e) ← (1 - ρ) τ(e) + Δτ(e).
   Applications: the generalized assignment problem (GAP), the set covering problem (SCP),
   classification, AntNet for network routing applications, the multiple knapsack problem.
3. Particle Swarm Optimization (PSO)
   Input parameters: learning factors C1 and C2; inertia weight w; maximum particle velocity Vmax;
   Pg, the position of the best particle (g); Xi, the current position of particle i; Pi, the best position of
   particle i in the previous cycle.
   Evolutionary mechanism: new velocity Vi = w * current Vi + C1 * rand() * (Pi - Xi) + C2 * Rand() * (Pg - Xi);
   new position Xi = current position Xi + new Vi, with -Vmax <= Vi <= Vmax.
   Applications: in conjunction with a back-propagation algorithm to train neural networks, system design,
   multi-objective optimization, classification, pattern recognition and image processing, image clustering,
   robotic applications, decision making, simulation and identification, time-frequency analysis,
   image segmentation, etc.
4. Bacterial Foraging Algorithm (BFOA)
   Input parameters: step size Z(i); position vector O(i) of the i-th bacterium at the j-th chemotactic step
   and k-th reproductive step; Δ, a vector in a random direction whose elements lie in [-1, 1].
   Evolutionary mechanism: O(i, j+1, k) = O(i, j, k) + Z(i) Δ.
   Applications: job scheduling, machine learning, training a wavelet-based neural network (WNN),
   face recognition, image edge detection, image segmentation, color image quantization.
5. Shuffled Frog Leaping Algorithm (SFLA)
   Input parameters: number of memeplexes, number of memeplex iterations, and maximum change in
   position Dmax.
   Evolutionary mechanism: change in frog position Di = rand() * (Xb - Xw); new position Xw = Xw + Di,
   with -Dmax <= Di <= Dmax.
   Applications: multi-user detection in DS-CDMA communication systems, multivariable PID controllers,
   web document classification, image watermarking, clustering.
6. Artificial Bee Colony Algorithm (ABC)
   Input parameters: Φij, a uniformly distributed real random number in the range [-1, 1]; i and k index food
   sources having different values, and j is the dimension.
   Evolutionary mechanism: new position POSij = Xij + Φij (Xij - Xkj).
   Applications: training neural networks, medical pattern classification and clustering problems, solving
   the TSP, the leaf-constrained minimum spanning tree, the network reconfiguration problem in a radial
   distribution system.
7. Firefly Algorithm (FFA)
   Input parameters: β, the variation of attractiveness; r, the distance between two fireflies xi and xj;
   αt, the step size; γ, the light absorption coefficient; ε, a random variable.
   Evolutionary mechanism: xi = xi + β exp(-γ r^2) (xj - xi) + αt ε.
   Applications: digital image compression and image processing, feature selection and fault detection,
   training neural networks, semantic web composition, classification and clustering, rigid image
   registration problems, parameter optimization of SVMs.
8. Cuckoo Search Algorithm (CSA)
   Input parameters: Xi(t+1), the new solution (nest) of the i-th cuckoo; α, the step size; ⊕, entry-wise
   multiplication; Lévy(λ), a Lévy flight.
   Evolutionary mechanism: Xi(t+1) = Xi(t) + α ⊕ Lévy(λ).
   Applications: spring design optimization, welded beam design, software testing and data generation,
   wireless sensor networks, knapsack problems, training neural networks.
9. Bat Algorithm (BA)
   Input parameters: Fi, the frequency of the i-th bat; Vi, its velocity; Xi, its position vector; β, a random
   vector in (0, 1).
   Evolutionary mechanism: Fi = Fmin + (Fmax - Fmin) β; Vi(t) = Vi(t-1) + (Xi - X*) Fi.
   Applications: continuous optimization, classification, clustering and data mining, inverse problems and
   parameter estimation, combinatorial optimization and scheduling, image processing, fuzzy logic and
   other applications.
10. Flower Pollination Algorithm (FPA)
    Input parameters: xi(t), the pollen i at the t-th iteration; xj and xk, pollens of different flowers;
    ε, a random number over [-1, 1].
    Evolutionary mechanism: xi(t+1) = xi(t) + ε (xj(t) - xk(t)).
    Applications: pattern recognition, design of a disc brake.
III. CONCLUSION
Nature-inspired metaheuristic algorithms have commenced a new era in computer science; they are motivated by
natural ecosystems and simulate the behaviour of insects and other creatures. The main aim of this paper was to
familiarize readers with these types of bio-inspired algorithms for the optimisation of objectives. This paper also
helps researchers and practitioners gain insight into the various toolboxes available for simulating nature-inspired
algorithms on benchmark problems. An overview of around twelve nature-inspired algorithms is given in this
paper for the research community.
IV. REFERENCES
1. A. Kazharov, V. Kureichik, 2010. "Ant colony optimization algorithms for solving transportation
problems", Journal of Computer and Systems Sciences International, Vol. 49. No. 1. pp. 30–43.
2. C. Blum, 2005 "Ant colony optimization: Introduction and recent trends". Physics of Life Reviews, 2:
353-373
3. Tsai P.-W., Khan M.K., Pan J.-S., Liao B.-Y. Interactive artificial bee colony supported passive
continuous authentication system. IEEE Syst. J. 2012 doi: 10.1109/JSYST.2012.2208153.
4. Curkovic P., Jerbic B. Honey-bees optimization algorithm applied to path planning problem. Int. J.
Simul. Model. 2007;6:154–164. doi: 10.2507/IJSIMM06(3)2.087
5. Mohammed Alauddin, “Mosquito flying optimization (MFO) in “International Conference on
Electrical, Electronics, and Optimization Techniques (ICEEOT)”, 2016,
DOI:10.1109/ICEEOT.2016.7754783.
6. Yan-Jang S Huang, Stephen Higgs, Dana L Vanlandingham, “Flavivirus-Mosquito Interactions”,
Published in Viruses 2014, DOI:10.3390/v6114703
7. Md. Alauddin, “Mosquito flying optimization (MFO) in 2016 International Conference on Electrical,
Electronics, and Optimization Techniques (ICEEOT), IEEE Xplore: 24 November 2016,
DOI: 10.1109/ICEEOT.2016.7754783.
8. Mirjalili, Seyedali. "Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm."
Knowledge-Based Systems 89 (2015): 228-249.
9. Gaston, Kevin J., et al. "The ecological impacts of nighttime light pollution: a mechanistic appraisal."
Biological reviews 88.4 (2013): 912-927.
10. Yang, X. S. (2010). "A New Metaheuristic Bat-Inspired Algorithm, in: Nature Inspired Cooperative
Strategies for Optimization (NISCO 2010)". Studies in Computational Intelligence. 284: 65–74.
11. Tsai, P. W.; Pan, J. S.; Liao, B. Y.; Tsai, M. J.; Istanda, V. (2012). "Bat algorithm inspired algorithm for
solving numerical optimization problems". Applied Mechanics and Materials. 148-149: 134–137.
12. Szymon Łukasik, Sławomir Żak, “Firefly Algorithm for Continuous Constrained Optimization Tasks”,
ICCCI 2009: Computational Collective Intelligence. Semantic Web, Social Networks and Multiagent
Systems, Springer, pp 97-106.
13. Nur Farahlina Johari, Azlan Mohd Zain, Noorfa Mustaffa, Amirmudin Udin, “Firefly Algorithm for
Optimization Problem”, April 2013, Applied Mechanics and Materials 421,
DOI: 10.4028/www.scientific.net/AMM.421.512.
14. Amarita Ritthipakdee, Arit Thammano, Nol Premasathian, Duangjai Jitkongchuen, “Firefly Mating
Algorithm for Continuous Optimization Problems”, Computational Intelligence and Neuroscience,
Vol 2017, doi.org/10.1155/2017/8034573.
15. Muzaffar Eusuff, Kevin Lansey, Fayzul Pasha, “Shuffled frog-leaping algorithm: a memetic meta-
heuristic for discrete optimization”, Engineering Optimization Journal, Volume 38, 2006 - Issue 2,
doi.org/10.1080/03052150500384759.
16. Zhao Liping, Wang Weiwei, Han Yi, Xu Yefeng, Chen Yixian, “Application of Shuffled Frog
Leaping Algorithm to an Uncapacitated SLLS Problem”, AASRI Procedia, Elsevier, Vol 1, 2012, Pages
226-231, doi.org/10.1016/j.aasri.2012.06.035.
17. A.S. Joshi, Omkar Kulkarni, Ganesh Kakandika, Vilas Madhaorao Nandedkar, “Cuckoo Search
Optimization – A Review”, Materials Today: Proceedings, 4(8):7262-7269, Jan 2017,
DOI: 10.1016/j.matpr.2017.07.055.
18. M.Mareli, B.Twala, “An adaptive Cuckoo search algorithm for optimisation”, Applied Computing and
Informatics, Vol 14, Issue 2, July 2018, Pages 107-115, doi.org/10.1016/j.aci.2017.09.001.
19. Mehdi Neshat, Ali Adeli, Ghodrat Sepidnam, Mahdi Sargolzaei, Adel Nadjaran Toosi, “A Review of
Artificial Fish Swarm Optimization Methods and Applications”, International Journal on Smart Sensing
and Intelligent Systems 5(1):107-148, March 2012, DOI:10.21307/ijssis-2017-474.
20. Fran Sérgio Lobato, Valder Steffen, Jr, “Fish Swarm Optimization Algorithm Applied to Engineering
System Design”, Latin American Journal of Solids and Structures 11(1):143-156, Jan 2014,
DOI: 10.1590/S1679-78252014000100009.
21. Swagatam Das, Arijit Biswas, Sambarta Dasgupta, Ajith Abraham, “Bacterial Foraging Optimization
Algorithm: Theoretical Foundations, Analysis, and Applications”, Foundations of Computational
Intelligence, Springer, Volume 3 pp 23-55.
22. Hanning Chen, Yunlong Zhu, and Kunyuan Hu, “Adaptive Bacterial Foraging Optimization”, Abstract
and Applied Analysis, Volume 2011, Article ID 108269, 27 pages
http://dx.doi.org/10.1155/2011/108269.
23. Xin-She Yang, “Flower Pollination Algorithm for Global Optimization”, Published in UCNC 2012,
DOI:10.1007/978-3-642-32894-7_27.
24. Xin-She Yang, “Flower Pollination Algorithm for Global Optimization”, UCNC 2012: Unconventional
Computation and Natural Computation pp 240-249
25. Xiao-Xu Ma, and Jie-Sheng Wang, “An Improved Flower Pollination Algorithm to Solve Function
Optimization Problem”, IAENG International Journal of Computer Science, 45:3, IJCS_45_3_01.
26. Mitchell, Melanie (1996). An Introduction to Genetic Algorithms. Cambridge, MA: MIT
Press. ISBN 9780585030944.
27. Whitley, Darrell (1994). "A genetic algorithm tutorial" (PDF). Statistics and Computing. 4 (2): 65–
85. CiteSeerX 10.1.1.184.3999. doi:10.1007/BF00175354.
28. Kennedy, J.; Eberhart, R. (1995). "Particle Swarm Optimization". Proceedings of IEEE International
Conference on Neural Networks. IV. pp. 1942–1948. doi:10.1109/ICNN.1995.488968.
29. Shi, Y.; Eberhart, R.C. (1998). "A modified particle swarm optimizer". Proceedings of IEEE
International Conference on Evolutionary Computation. pp. 69–73.
30. Parul Agarwal, Shikha Mehta, “Nature-Inspired Algorithms: State-of-Art, Problems and Prospects”,
International Journal of Computer Applications (0975 – 8887) Volume 100 – No.14, August 2014.