

COMPARING FIREFLY ALGORITHM, IMPERIALIST COMPETITIVE ALGORITHM AND QUANTUM PARTICLE SWARM OPTIMIZATION ON ANALYTIC OBJECTIVE FUNCTIONS AND RESIDUAL STATICS CORRECTIONS

M. Aleardi, S. Pierini
Earth Sciences Department, University of Pisa, Italy

Introduction. The solution of non-linear geophysical inverse problems presents several challenges, mainly related to the risk of convergence toward a local minimum of the objective function. For this reason, global optimization methods are often preferred over linearized approaches, especially in the case of objective functions with complex topology (i.e. many local minima). In particular, over the last decade the continuously increasing power of modern parallel architectures has considerably encouraged the application of these global methods to many geophysical optimization problems (Sen and Stoffa, 2013). Since the 1970s, dozens of global algorithms have been implemented (for a synthetic list see Hosseini and Al Khaled, 2014). However, most of these methods have found application in engineering optimization (e.g. computer, industrial and mechanical engineering), and only a small subset has been employed to tackle geophysical exploration problems. In this context, genetic algorithms, simulated annealing and particle swarm optimization are undoubtedly the most popular (Sajeva et al., 2017). Notably, the performances of these global methods differ from one optimization problem to another and strongly depend on the shape of the objective function and on the model space dimension (i.e. the number of unknowns). For this reason, in this work we compare the performances of three global optimization algorithms: the firefly algorithm (FA), the imperialist competitive algorithm (ICA) and the quantum particle swarm optimization (QPSO). These methods have been introduced in the last few years and have so far found very limited popularity in the geophysical community. In particular, to the best of the authors' knowledge, there are no applications of ICA and FA in the context of seismic exploration problems.

The three methods are first tested on two multi-minima analytic objective functions often used to benchmark optimization algorithms. Then, the three algorithms are compared on residual statics corrections, a highly non-linear geophysical optimization problem characterized by an objective function with multiple minima. We recall that the performance of a stochastic method on a particular problem may critically depend on the choice of the control parameters. Generally, it is difficult to give hard-and-fast rules that work across a wide range of applications, although some guidelines and rules of thumb can be dictated by experience. The control parameters used in the following tests have been determined through a trial-and-error procedure aimed at balancing the rate of convergence with the accuracy of the final solution.

A brief overview of FA, ICA and QPSO. The FA is a relatively new global swarm intelligence method proposed by Yang (2008) that is inspired by the bioluminescence emitted by fireflies. In general, the intensity of the emitted light determines how fast other fireflies move toward a particular firefly. The FA uses a population of fireflies to search for the best solution in the model space. In each iteration, the brightness (i.e. the objective function value) of every firefly is compared with that of every other firefly, and each firefly attracts the others according to its brightness; in other terms, each firefly moves toward the fireflies with higher brightness (that is, a lower objective function value).
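As an illustration of this mechanism, the following Python sketch shows the core FA move under the standard formulation of the algorithm, in which the attractiveness of a brighter firefly decays exponentially with the squared distance and a small random term preserves exploration. Function names and parameter values are ours for illustration, not those used in the paper.

```python
import numpy as np

def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One FA iteration: every firefly moves toward every brighter one.

    X : (n_fireflies, n_dims) array of candidate solutions.
    f : objective function (lower value = brighter firefly).
    beta0, gamma, alpha : illustrative control parameters.
    """
    rng = rng or np.random.default_rng()
    brightness = np.array([f(x) for x in X])
    X_new = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if brightness[j] < brightness[i]:  # firefly j is brighter than i
                r2 = np.sum((X[i] - X[j]) ** 2)
                # Attractiveness decays exponentially with squared distance.
                beta = beta0 * np.exp(-gamma * r2)
                X_new[i] += (beta * (X[j] - X_new[i])
                             + alpha * (rng.random(X.shape[1]) - 0.5))
    return X_new
```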

The ICA was developed by Atashpaz-Gargari and Lucas (2007) and was inspired by the idea of imperialism, in which powerful countries influence other countries by expanding their power and political system. The purpose of imperialism was either to exploit the resources of the colonized country or to prevent other imperialists from taking control of the colonies. Similar to other evolutionary algorithms, ICA starts from a random initial ensemble of countries. The power of each country is inversely related to the associated objective function value. Based on their power, some of the best initial countries become imperialists and start taking control of other countries (called colonies). The main operators of this algorithm are assimilation and revolution. Assimilation moves the colonies of each empire closer to their imperialist. In addition, the position of some countries is randomly changed according to a parameter called the revolution rate; this revolution process preserves the randomness of the algorithm. During assimilation and revolution, a colony may reach a better position than its imperialist and then take control of the entire empire, replacing the current imperialist state. Imperialist competition is another part of this algorithm, in which all the empires try to take possession of the colonies of other empires. In each iteration, based on their power, all the empires have a chance to take control of one or more of the colonies of the weakest empire. These operators are applied in each iteration until a stop criterion is satisfied.
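A minimal sketch of the assimilation and revolution operators described above, under our own naming and with illustrative parameter values:

```python
import numpy as np

def assimilate_and_revolve(colonies, imperialist, beta=2.0,
                           revolution_rate=0.1, bounds=(-10.0, 10.0),
                           rng=None):
    """Assimilation: each colony takes a random step toward its imperialist.
    Revolution: a random fraction of colonies is repositioned uniformly.
    Parameter values are illustrative, not those used in the paper."""
    rng = rng or np.random.default_rng()
    n, d = colonies.shape
    # Assimilation: move colonies toward the imperialist.
    colonies = colonies + beta * rng.random((n, d)) * (imperialist - colonies)
    # Revolution: re-draw a random subset of colonies to preserve randomness.
    rebels = rng.random(n) < revolution_rate
    colonies[rebels] = rng.uniform(bounds[0], bounds[1], (rebels.sum(), d))
    return colonies
```

After these operators, the best colony of each empire is compared with its imperialist and swaps roles if it has a lower objective function value, and the weakest empire progressively loses colonies to the others.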

Particle swarm optimization (PSO) is an evolutionary computation technique developed by Eberhart and Kennedy (1995) that is inspired by the social behaviour of bird flocking and fish schooling. In the classical version of PSO, the set of candidate solutions is defined as a swarm of particles that flow through the parameter space along trajectories driven by their own and their neighbours' best performances. PSO has undergone a plethora of changes since its development. One of the more recent developments is the inclusion of the laws of quantum mechanics within the PSO framework, which leads to the QPSO (Sun et al., 2004). In PSO, a particle is described by its position and velocity vectors, which determine its trajectory. In quantum mechanics, however, the term trajectory is meaningless, because the position and velocity of a particle cannot be determined simultaneously according to the uncertainty principle. Each particle in QPSO is therefore treated as a spin-less particle moving in quantum space, and the probability of the particle appearing at position xi in the model space is determined by a probability density function. Simply speaking, the QPSO algorithm lets all particles move under quantum-mechanical rules rather than classical Newtonian motion. The QPSO algorithm has simpler evolution equations and fewer parameters than standard PSO, substantially facilitating the control of convergence in the search space.
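The QPSO position update of Sun et al. (2004) can be sketched as follows; note the absence of a velocity vector. The contraction-expansion coefficient value and the function name are ours and purely illustrative:

```python
import numpy as np

def qpso_step(X, pbest, gbest, beta=0.75, rng=None):
    """One QPSO position update: each particle is re-drawn from a
    distribution centred on a local attractor between its personal best
    and the global best, with a spread set by the mean best position.

    X, pbest : (n_particles, n_dims) arrays; gbest : (n_dims,) array.
    beta : contraction-expansion coefficient (illustrative value)."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    mbest = pbest.mean(axis=0)              # mean of all personal bests
    phi = rng.random((n, d))
    p = phi * pbest + (1.0 - phi) * gbest   # local attractor per particle
    u = rng.random((n, d))
    sign = np.where(rng.random((n, d)) < 0.5, 1.0, -1.0)
    # Sample the new positions; no velocity term is needed.
    return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
```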

Tests on analytic objective functions. We now compare the three algorithms on analytic test functions as the number of model parameters increases. We consider 2, 4, 6, 8 and 10 unknowns and, for each dimension and each method, we perform 20 different runs, counting the number of model evaluations required to find the global minimum within a maximum number of evaluated models (10^6) and within a previously selected accuracy (Sajeva et al., 2017). The first test function is the Rastrigin function:

f(\mathbf{x}) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) \right]     (1)

where n is the dimension of the model space. The global minimum is located at [0, …, 0]n and is surrounded by regularly distributed local minima. For this function the initial ensemble of candidate solutions in each method is 10 times the model space dimension, whereas the accuracy is set at 0.1. In Fig. 1a we observe that all the methods successfully converge for all the dimensions although, as expected, the number of model evaluations required increases with the dimension of the model space. FA and QPSO show very similar performances, whereas ICA requires a much higher number of model evaluations to attain convergence. In other words, ICA seems characterized by a slower convergence rate than the other two algorithms.
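For reference, Eq. (1) translates directly into code. The harness below also sketches the counting protocol described above (20 such runs per method and dimension); the generic `optimizer_step` placeholder stands for one FA, ICA or QPSO iteration and is our abstraction, not the authors' code:

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function of Eq. (1): global minimum f = 0 at the origin,
    surrounded by regularly spaced local minima."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def count_evaluations(optimizer_step, n_dims, pop_size, accuracy=0.1,
                      max_evals=10**6, rng=None):
    """Iterate a generic population-based step until the best objective
    value falls below `accuracy` or the 10^6 evaluation budget is spent.
    Returns the evaluation count on success, None on a failed run."""
    rng = rng or np.random.default_rng()
    X = rng.uniform(-5.12, 5.12, (pop_size, n_dims))  # standard Rastrigin domain
    n_evals = 0
    while n_evals < max_evals:
        n_evals += pop_size
        if min(rastrigin(x) for x in X) < accuracy:
            return n_evals          # converged within the requested accuracy
        X = optimizer_step(X, rastrigin)
    return None                     # budget exhausted: failed run
```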

    The second test function is the Schwefel function:

f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)     (2)

This is an extremely difficult function to optimize, because the local minima are irregularly distributed; some are distant from the non-centred global minimum (located at [420.9687, …, 420.9687]n), or are even located at the opposite edge of the model space. For this function the initial ensemble of candidate solutions in each method is 100 times the model space dimension, whereas the accuracy is set at 1. Fig. 1b shows that in this case all the algorithms have a 100% convergence probability in the 2D case, whereas this percentage progressively decreases as the number of dimensions increases. The ICA shows the most significant decrease as the number of unknowns increases, whereas the FA is still able to converge successfully in 15 out of 20 tests for a 10-D model space. Note that, as expected, for a given model space dimension the number of model evaluations required to converge on the Schwefel function is much higher than on the Rastrigin function. In this case, both ICA and QPSO require a number of model evaluations that is always one order of magnitude higher than that required by FA. In other words, in this test the FA clearly outperforms the other two methods because it exhibits a faster convergence rate and successfully identifies the global minimum even in high-dimensional model spaces.
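Eq. (2) is equally simple to implement; a minimal sketch (function name ours):

```python
import numpy as np

def schwefel(x):
    """Schwefel function of Eq. (2): global minimum f = 0 at
    [420.9687, ..., 420.9687] on the standard domain [-500, 500]^n."""
    x = np.asarray(x, dtype=float)
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))
```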

    Residual statics corrections. In the following test we compare FA, ICA and QPSO on CMP-