Nature-Inspired Optimization Algorithms: Genetic Algorithms


    5 Genetic Algorithms

    Genetic algorithms are among the most popular evolutionary algorithms in terms of the diversity of their applications. A vast majority of well-known optimization problems have been tried by genetic algorithms. In addition, genetic algorithms are population-based, and many modern evolutionary algorithms are directly based on genetic algorithms or have strong similarities with them.

    5.1 Introduction

    The genetic algorithm (GA), developed by John Holland and his collaborators in the 1960s and 1970s [11,4], is a model or abstraction of biological evolution based on Charles Darwin's theory of natural selection. Holland was probably the first to use crossover and recombination, mutation, and selection in the study of adaptive and artificial systems. These genetic operators form the essential part of the genetic algorithm as a problem-solving strategy. Since then, many variants of genetic algorithms have been developed and applied to a wide range of optimization problems, from graph coloring to pattern recognition, from discrete systems (such as the traveling salesman problem) to continuous systems (e.g., the efficient design of airfoils in aerospace engineering), and from financial markets to multiobjective engineering optimization.

    There are many advantages of genetic algorithms over traditional optimization algorithms. Two of the most notable are the ability to deal with complex problems and parallelism. Genetic algorithms can deal with various types of optimization, whether the objective (fitness) function is stationary or nonstationary (changes with time), linear or nonlinear, continuous or discontinuous, or subject to random noise. Because multiple offspring in a population act like independent agents, the population (or any subgroup) can explore the search space in many directions simultaneously. This feature makes it ideal to parallelize the algorithms for implementation. Different parameters and even different groups of encoded strings can be manipulated at the same time.

    However, genetic algorithms also have some disadvantages. The formulation of the fitness function, the choice of population size, the choice of important parameters such as the rates of mutation and crossover, and the selection criteria for the new population must be carried out carefully. Any inappropriate choice will make it difficult for the algorithm to converge, or it will simply produce meaningless results. Despite these drawbacks, genetic algorithms remain one of the most widely used optimization algorithms in modern nonlinear optimization.

    Nature-Inspired Optimization Algorithms. http://dx.doi.org/10.1016/B978-0-12-416743-8.00005-1 © 2014 Elsevier Inc. All rights reserved.



    5.2 Genetic Algorithms

    The essence of the GA involves the encoding of an optimization function as arrays of bits or character strings to represent chromosomes, the manipulation of these strings by genetic operators, and selection according to fitness, with the aim of finding a good (even optimal) solution to the problem concerned.

    This is often done by the following procedure: (1) encoding the objectives or cost functions; (2) defining a fitness function or selection criterion; (3) creating a population of individuals; (4) carrying out the evolution cycle or iterations by evaluating the fitness of all the individuals in the population, creating a new population by performing crossover, mutation, and fitness-proportionate reproduction, and then replacing the old population and iterating again with the new population; (5) decoding the results to obtain the solution to the problem. These steps can be represented schematically as the pseudo code of genetic algorithms shown in Figure 5.1.

    One iteration of creating a new population is called a generation. Fixed-length character strings are used in most genetic algorithms during each generation, although there is substantial research on variable-length strings and coding structures. The coding of the objective function is usually in the form of binary arrays, or real-valued arrays in adaptive genetic algorithms. For simplicity, we use binary strings for encoding and decoding in our discussion. The genetic operators include crossover, mutation, and selection from the population.
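
    To make the binary representation concrete, the following minimal Python sketch (not from the original text; the 16-bit resolution and the variable bounds are illustrative assumptions) maps a real variable onto a fixed-length bit string and back:

        def encode(x, lo, hi, n_bits):
            """Map a real value x in [lo, hi] to an n-bit binary string."""
            k = round((x - lo) / (hi - lo) * (2**n_bits - 1))
            return format(k, f'0{n_bits}b')

        def decode(bits, lo, hi):
            """Map an n-bit binary string back to a real value in [lo, hi]."""
            k = int(bits, 2)
            return lo + k / (2**len(bits) - 1) * (hi - lo)

        # Example: encode x = 3.14159 on [-10, 10] with 16 bits, then decode it
        s = encode(3.14159, -10.0, 10.0, 16)
        print(s, decode(s, -10.0, 10.0))   # decoded value is close to 3.14159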

    The crossover of two parent strings is the main operator with a higher probability pc and is carried out by swapping one segment of one chromosome with the corresponding segment of another chromosome at a random position (see Figure 5.2). Crossover carried out in this way is a single-point crossover. Crossover can also occur at multiple sites, which essentially swaps multiple segments with those on the corresponding chromosome. Crossover at multiple points is used more often in genetic algorithms to increase the evolutionary efficiency of the algorithms.
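
    The single-point case can be sketched in a few lines of Python (an illustrative helper, not the book's code; the example strings are taken from Figure 5.2):

        import random

        def single_point_crossover(parent1, parent2):
            """Swap the tails of two equal-length bit strings at a random cut point."""
            point = random.randint(1, len(parent1) - 1)   # cut strictly inside the string
            child1 = parent1[:point] + parent2[point:]
            child2 = parent2[:point] + parent1[point:]
            return child1, child2

        # The same kind of swap as shown in Figure 5.2
        c1, c2 = single_point_crossover("10110101", "01100011")
        print(c1, c2)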

    The mutation operation is achieved by flipping randomly selected bits (see Figure 5.3), and the mutation probability pm is usually small.

    Genetic Algorithm

    Objective function f(x), x = (x1, ..., xd)^T
    Encode the solutions into chromosomes (strings)
    Define fitness F (e.g., F ∝ f(x) for maximization)
    Generate the initial population
    Initialize the probabilities of crossover (pc) and mutation (pm)
    while (t < maximum number of generations)
        Evaluate the fitness of all individuals in the population
        Create a new population by crossover, mutation, and fitness-proportionate reproduction
        Replace the old population with the new one; update t = t + 1
    end while
    Decode the results to obtain the solution to the problem

    Figure 5.1 Pseudo code of genetic algorithms.
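
    A runnable translation of this pseudo code might look like the following minimal sketch. It is not the implementation used in the book (none is given there); the OneMax toy fitness, the parameter values, and the per-bit mutation scheme are illustrative assumptions.

        import random

        def run_ga(fitness, n_bits=16, pop_size=40, pc=0.8, pm=0.02, max_gen=100):
            """Minimal GA sketch following the structure of Figure 5.1 (maximization)."""
            # Generate the initial population of random bit strings
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            best = max(pop, key=fitness)
            for _ in range(max_gen):
                weights = [fitness(c) for c in pop]   # fitness of the current population
                new_pop = [best[:]]                   # basic elitism: keep the current best
                while len(new_pop) < pop_size:
                    # Fitness-proportionate selection of two parents (fitness must be >= 0)
                    p1, p2 = random.choices(pop, weights=weights, k=2)
                    c1, c2 = p1[:], p2[:]
                    # Single-point crossover with probability pc
                    if random.random() < pc:
                        point = random.randint(1, n_bits - 1)
                        c1 = p1[:point] + p2[point:]
                        c2 = p2[:point] + p1[point:]
                    # Mutation: flip each bit with probability pm
                    for child in (c1, c2):
                        for i in range(n_bits):
                            if random.random() < pm:
                                child[i] = 1 - child[i]
                    new_pop.extend([c1, c2])
                pop = new_pop[:pop_size]
                best = max(pop, key=fitness)
            return best

        # Illustrative use on the OneMax toy problem (maximize the number of 1 bits)
        best = run_ga(fitness=sum)
        print(best, sum(best))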


    Parent gene pair (before crossover):
        1 0 1 1 0 1 0 1
        0 1 1 0 0 0 1 1

    Child gene pair (after crossover):
        1 0 1 1 0 0 1 1
        0 1 1 0 0 1 0 1

    Figure 5.2 Diagram of crossover at a random crossover point (location) in genetic algorithms.

    Original gene (before mutation):
        1 1 0 1 1 0 0 1

    New gene (after mutation):
        1 1 0 0 1 0 0 1

    Figure 5.3 Schematic representation of mutation at a single site by flipping a randomly selected bit (1 ↔ 0).

    In addition, mutation can also occur at multiple sites simultaneously, which can be advantageous in practice and in implementations.
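
    A minimal per-bit mutation sketch follows (illustrative only; interpreting pm as a per-bit probability is an assumption):

        import random

        def mutate(bits, pm=0.01):
            """Flip each bit independently with a small mutation probability pm."""
            return [1 - b if random.random() < pm else b for b in bits]

        # With a per-bit probability, several sites can be flipped in one pass,
        # though with a small pm most strings are left unchanged.
        child = mutate([1, 1, 0, 1, 1, 0, 0, 1], pm=0.1)
        print(child)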

    The selection of an individual in a population is carried out by the evaluation of its fitness, and the individual can remain in the new generation if a certain fitness threshold is reached. In addition, selection can also be fitness-based so that the reproduction of a population is fitness-proportionate. That is to say, the individuals with higher fitness are more likely to reproduce.

    5.3 Role of Genetic Operators

    As introduced earlier, genetic algorithms have three main genetic operators: crossover, mutation, and selection. Their roles can be very different.

    Crossover. Swapping parts of one solution with those of another in the chromosomes or solution representations. The main role is to provide mixing of solutions and convergence in a subspace.

    Mutation. Randomly changing parts of one solution, which increases the diversity of the population and provides a mechanism for escaping from a local optimum.

    Selection of the fittest, or elitism. The use of solutions with high fitness to pass on to the next generations, which is often carried out via some form of selection of the best solutions.

    Obviously, in actual algorithms, the interactions between these genetic operators make the behavior very complex. However, the role of the individual components remains the same.


    Crossover is mainly an action within a subspace. This point becomes clear for a binary system where the strings consist of a and b. For example, for two strings S1 = [aabb] and S2 = [abaa], whatever the crossover actions are, their offspring will always be of the form [a...]. That is, crossover can only produce solutions in the subspace where the first component is always a. Furthermore, two identical solutions will result in two identical offspring, no matter how the crossover is applied. This means that crossover works in a subspace, and the converged solutions/states will remain converged.
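
    This subspace behavior is easy to demonstrate in a few lines (an illustrative sketch; the crossover helper is the single-point version used earlier):

        import random

        def crossover(s1, s2):
            """Single-point crossover on two equal-length character lists."""
            p = random.randint(1, len(s1) - 1)
            return s1[:p] + s2[p:], s2[:p] + s1[p:]

        S1, S2 = list("aabb"), list("abaa")
        offspring = [child for _ in range(10) for child in crossover(S1, S2)]
        # Every child starts with 'a': crossover alone cannot leave this subspace
        assert all(child[0] == "a" for child in offspring)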

    On the other hand, mutation usually leads to a solution outside the subspace. For the previous example, if mutation occurs on the first a and flips it to b, then the solution S3 = [babb] does not belong to the previous subspace. In fact, mutation typically generates solutions that may be further from the current solutions, thus increasing the diversity of the population. This enables the population to escape from a trapped local optimum.

    One important issue is random selection among the population. For example, crossover requires two parents from the population. Do we choose them randomly, or biased toward the solutions with better fitness? One common way is roulette-wheel selection, which chooses individuals with probability proportional to their fitness (fitness-proportionate selection). Obviously, there are other forms of selection in use, including linear ranking selection, tournament selection, and others.
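
    The two selection schemes just mentioned can be sketched as follows (illustrative helpers only, assuming nonnegative fitness values; the tournament size k = 3 is an arbitrary choice):

        import random

        def roulette_select(pop, fitnesses):
            """Roulette wheel: pick one individual with probability proportional to fitness."""
            total = sum(fitnesses)
            r = random.uniform(0, total)
            running = 0.0
            for individual, f in zip(pop, fitnesses):
                running += f
                if running >= r:
                    return individual
            return pop[-1]

        def tournament_select(pop, fitnesses, k=3):
            """Tournament: pick the fittest of k randomly chosen individuals."""
            contenders = random.sample(range(len(pop)), k)
            return pop[max(contenders, key=lambda i: fitnesses[i])]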

    Both crossover and mutation work without using knowledge of the objective or fitness landscape. Selection of the fittest, or elitism, on the other hand, does use the fitness landscape to guide what to choose and thus affects the search behavior of an algorithm. What is selected and how solutions are selected depend on the algorithm as well as the objective function values. This elitism ensures that the best solutions must survive in the population. However, very strong elitism may lead to premature convergence.

    It is worth pointing out that these genetic operators are fundamental. Other operators may take different forms, and hybrid operators can also work. However, to understand the basic behavior of genetic algorithms, we focus on these key operators.

    5.4 Choice of Parameters

    An important issue is the formulation or choice of an appropriate fitness function that determines the selection criterion in a particular problem. For the minimization of a function using genetic algorithms, one simple way of constructing a fitness function is to use the simplest form F = A - y, where A is a large constant (though A = 0 will do if the fitness is not required to be nonnegative) and y = f(x). The objective is then to maximize the fitness function and thereby minimize the objective function f(x). Alternatively, for a minimization problem, one can define a fitness function F = 1/f(x), but it may have a singularity when f → 0. However, there are many other ways of defining a fitness function. For example, we can use the individual fitness assignment relative to the whole population,

    F(x_i) = f(ξ_i) / Σ_{j=1}^{n} f(ξ_j),      (5.1)

    where ξ_i is the phenotypic value of individual i, and n is the population size.
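
    The three constructions just described can be written down directly (an illustrative sketch; the constant A, the small eps guard, and the example values are assumptions rather than prescriptions):

        def fitness_shifted(y, A=1000.0):
            """F = A - y: turns minimizing y into maximizing F (A keeps F positive)."""
            return A - y

        def fitness_reciprocal(y, eps=1e-12):
            """F = 1/f(x) for minimization; eps guards the singularity as f -> 0."""
            return 1.0 / (y + eps)

        def relative_fitness(values):
            """Eq. (5.1): each individual's fitness as a share of the population total."""
            total = sum(values)
            return [v / total for v in values]

        print(relative_fitness([1.0, 3.0, 6.0]))   # [0.1, 0.3, 0.6]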


    The appropriate form of the fitness function ensures that solutions with higher fitness are selected efficiently. A poor fitness function may result in incorrect or meaningless solutions.

    Another important issue is the choice of various parameters. The crossover probability pc is usually very high, typically in the range of 0.7 to 1.0. On the other hand, the mutation probability pm is usually small (typically 0.001 to 0.05). If pc is too small, crossover occurs sparsely, which is not efficient for evolution. If the mutation probability is too high, the solutions may keep jumping around even as they approach the optimal solution.

    A proper criterion for selecting the best solutions is also important. How to select from the current population so that the best individuals with higher fitness are preserved and passed on to the next generation is a question that is still not fully answered. Selection is often carried out in association with some form of elitism. The basic elitism is to select the fittest individual in each generation, whose traits are carried over to the new generation without being modified by the genetic operators. This ensures that a best solution is reached more quickly.

    Other issues include the use of multiple sites for mutation and the use of various population sizes. Mutation at a single site is not very efficient; mutation at multiple sites will increase the evolutionary efficiency. However, too many mutants will make it difficult for the system to converge, or they may even drive the system astray toward wrong solutions. In real ecological systems, if the mutation rate is too high under a high selection pressure, the whole population might go extinct.

    In addition, the choice of the right population size n is also very important. If the population size is too small, there is not enough evolution going on, and there is a risk that the whole population goes extinct. In the real world, ecological theory suggests that a species with a small population faces a real danger of extinction. Even if the system carries on, there is still a danger of premature convergence. In a small population, if a significantly fitter individual appears too early, it may reproduce enough offspring to overwhelm the whole (small) population. This will eventually drive the system to a local optimum (not the global optimum). On the other hand, if the population is too large, more evaluations of the objective function are needed, which requires extensive computing time. Studies and empirical observations suggest that a population size of n = 40 to 200 works for most problems.
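
    The ranges quoted above can be collected into a small set of starting values (the specific numbers are only reasonable defaults, not recommendations from the text; max_gen is an added assumption):

        ga_params = {
            "pc": 0.8,        # crossover probability, usually in 0.7 to 1.0
            "pm": 0.01,       # mutation probability, usually in 0.001 to 0.05
            "pop_size": 100,  # population size, roughly 40 to 200 for most problems
            "max_gen": 500,   # stopping criterion; not specified in the text
        }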

    Furthermore, for more complex problems, various GA variants can be used to suit specific tasks. Variants such as adaptive genetic algorithms and hybrid genetic algorithms can be found in the literature.

    Using the basic procedure described, we can implement genetic algorithms in any programming language. From the implementation point of view, there are two main ways to use the encoded strings. We can either use a string (or multiple strings) for each individual in a population or use a single long string for all the individuals or variables, as shown in Figure 5.4.

    x1 y1 ... ... xn yn ... ...

    Figure 5.4 Encode all design variables into a single long string.
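
    The two representations can be sketched as follows (illustrative only; the 8-bit encoding per variable is an assumption):

        # Per-variable strings versus one long concatenated string (cf. Figure 5.4)
        per_variable = ["10110101", "01100011"]    # one 8-bit string per design variable
        long_string = "".join(per_variable)        # all variables in a single long string

        # Slicing the long string recovers the individual variables
        n_bits = 8
        recovered = [long_string[i:i + n_bits] for i in range(0, len(long_string), n_bits)]
        assert recovered == per_variable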


    Figure 5.5 Typical outputs from a typical run. The best estimate will approach π; the fitness will approach fmax = 1. [Plots of the best estimate x and the fitness f(x) versus iteration are omitted here.]

    The well-known Easom function

    f(x) = -cos(x) exp[-(x - π)^2],  x ∈ [-10, 10],      (5.2)

    has the global maximum fmax = 1 at x* = π. The outputs from a typical run are shown in Figure 5.5, where the top plot shows the variations of the best estimates as they approach x* = π, and the lower plot shows the variations of the fitness function.
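
    For reference, the Easom function of Eq. (5.2) can be written and checked with a few lines (a hedged sketch; the coarse grid search is only a sanity check, not the GA run of Figure 5.5):

        import math

        def easom(x):
            """Easom function of Eq. (5.2); the global maximum is f(pi) = 1."""
            return -math.cos(x) * math.exp(-(x - math.pi) ** 2)

        # Quick check on a coarse grid over [-10, 10]: the best point is near pi
        xs = [i / 100.0 for i in range(-1000, 1001)]
        x_best = max(xs, key=easom)
        print(x_best, easom(x_best))   # approximately (3.14, 0.99999...)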

    Genetic algorithms are widely used, and there are many software packages in almost all programming languages. Efficient implementations can be found in both commercial and free codes. For this reason, we do not provide a full implementation here.

    5.5 GA Variants

    There are several dozen GA variants, which collectively can be called genetic algorithms. Numerous variants are based on various modifications of the basic genetic operators.

    For the crossover operator, in addition to multisite or N-site crossover and uniform crossover, shuffled crossover uses random permutations of the two parents at N points, and the shuffled offspring are then transformed back via an inverse permutation. Similarly, mutation can be achieved by flipping randomly chosen bits, by bitwise inver...
