Research Article
Hydrological Cycle Algorithm for Continuous Optimization Problems
Ahmad Wedyan, Jacqueline Whalley, and Ajit Narayanan
School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
Correspondence should be addressed to Ahmad Wedyan; awedyan@aut.ac.nz
Received 2 August 2017; Revised 13 November 2017; Accepted 22 November 2017; Published 17 December 2017
Academic Editor: Efren Mezura-Montes
Copyright © 2017 Ahmad Wedyan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A new nature-inspired optimization algorithm called the Hydrological Cycle Algorithm (HCA) is proposed based on the continuous movement of water in nature. In the HCA, a collection of water drops passes through various hydrological water cycle stages, such as flow, evaporation, condensation, and precipitation. Each stage plays an important role in generating solutions and avoiding premature convergence. The HCA shares information by direct and indirect communication among the water drops, which improves solution quality. Similarities and differences between HCA and other water-based algorithms are identified, and the implications of these differences on overall performance are discussed. A new topological representation for problems with a continuous domain is proposed. In proof-of-concept experiments, the HCA is applied to a variety of benchmarked continuous numerical functions. The results were found to be competitive in comparison to a number of other algorithms and validate the effectiveness of HCA. Also demonstrated is the ability of HCA to escape from local optima solutions and converge to global solutions. Thus, HCA provides an alternative approach to tackling various types of multimodal continuous optimization problems, as well as an overall framework for water-based particle algorithms in general.
1 Introduction
Optimization is the process of making something as good as possible. In computer science and operations research, an optimization problem can be defined as the problem of finding the best solution, or a solution better than the previously known best, among a set of feasible solutions by trying different variations of the input [1]. In mathematics and engineering, researchers use optimization algorithms to develop and improve new ideas and models that can be expressed as a function of certain variables. Optimization problems can be categorized as either continuous or discrete, based on the nature of the variables in the optimization function. In continuous optimization, each variable may take any value within a defined and specific range, and the variables are commonly real numbers. In discrete optimization, each variable takes a value from a finite set of possible values or a permutation of a set of numbers, commonly integers [2].
A variety of continuous optimization problems (COPs) exist in the literature for benchmarking purposes [3]. In these functions, the values of various continuous variables need
to be obtained such that the function is either minimized or maximized. These functions have various structures and can be categorized as either unimodal or multimodal. A unimodal function has only one local optimum point, whereas a multimodal function has several local optimum points, with one or more points as a global optimum. Each function may consist of a single variable or may be multivariate. Typically, a unimodal function has one variable, while a multimodal function has multiple variables. Mathematically, a continuous objective function can be represented as follows:
minimize f(X), f: R^n → R, (1)
where X is a vector that consists of n decision variables: X = (x1, x2, ..., xn) ∈ R^n. (2)
A solution x* is called a global minimizer of a problem f(x) if f(x*) ≤ f(x) for all x ∈ X. Conversely, it is called a global maximizer if f(x*) ≥ f(x) for all x ∈ X.
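To make the minimizer condition concrete, the following sketch (illustrative only; the benchmark functions, bounds, and sampling scheme are our own choices, not taken from the paper) evaluates two standard test functions and checks f(x*) ≤ f(x) empirically over random samples of the domain:

```python
import math
import random

def sphere(x):
    # Unimodal benchmark: a single global minimum at the origin.
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    # Multimodal benchmark: many local minima, global minimum at the origin.
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def is_empirical_global_minimizer(f, x_star, bounds, samples=10000, seed=1):
    # Check f(x*) <= f(x) over a random sample of points in the domain.
    # This is only a sanity check, not a proof of global optimality.
    rng = random.Random(seed)
    lo, hi = bounds
    fx_star = f(x_star)
    return all(
        fx_star <= f([rng.uniform(lo, hi) for _ in range(len(x_star))])
        for _ in range(samples)
    )
```

For both the unimodal sphere function and the multimodal Rastrigin function, the origin satisfies the condition over the sampled domain [-5.12, 5.12]^2.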
Hindawi Journal of Optimization, Volume 2017, Article ID 3828420, 25 pages. https://doi.org/10.1155/2017/3828420

These functions are commonly used to evaluate the characteristics of new algorithms and to measure an algorithm's overall performance, convergence rate, robustness, precision, and ability to escape from local optima solutions [3]. These empirical results can be used to compare any new algorithm with existing optimization algorithms and to gain understanding of the algorithm's behavior on different kinds of problem. For instance, multimodal functions can be used to check the ability of an algorithm to escape local optima and explore new regions, especially when the global optimum is surrounded by many local optima. Similarly, flat surface functions add more difficulty to reaching global optima, as the flatness gives no indication of the direction towards the global optimum [3].
Practically, the number of variables and the domain of each variable (the function dimensionality) affect the complexity of the function. In addition, other factors, such as the number of local/global optima points and the distribution of local optima compared with global optima, are crucial in determining the complexity of a function, especially when a global minimum point is located very close to local minima points [3]. Some of these functions represent real-life optimization problems, while others are artificial problems. Furthermore, a function can be classified as unconstrained or constrained. An unconstrained function has no restrictions on the values of its variables.
Given the large variety of optimization problems, it is hard to find a single algorithm that can solve all types of optimization problems effectively. Moreover, the No-Free-Lunch (NFL) theorems posit that no particular algorithm performs better than all other algorithms for all problems, and that what an algorithm gains in performance on some problems can be lost on other problems [5]. Consequently, we can infer that some algorithms are more applicable and compatible with particular classes of optimization problems than others. Therefore, there is always a need to design and evaluate new algorithms for their potential to improve performance on certain types of optimization problem. For these reasons, the main objective of this paper is to develop a new optimization algorithm and determine its performance on various benchmarked problems, to gain understanding of the algorithm and evaluate its applicability to other unsolved problems.
Many established nature-inspired algorithms, especially water-based algorithms, have proven successful in solving COPs. Nevertheless, these water-based algorithms do not take into account the broader concepts of the hydrological water cycle: how water drops and paths are formed, or the fact that water is a renewable and recyclable resource. That is, previous water-based algorithms utilized only some stages of the natural water cycle and demonstrated only some of the important aspects of a full hydrological water cycle for solving COPs.
The main contribution of this paper is to delve deeper into hydrological theory and the fundamental concepts surrounding water droplets to inform the design of a new optimization algorithm. This algorithm provides a new hydrological-based conceptual framework within which existing and future work in water-based algorithms can be located. In this paper, we present a new nature-inspired optimization algorithm called the Hydrological Cycle Algorithm (HCA) for solving
optimization problems. HCA simulates nature's hydrological water cycle. More specifically, it involves a collection of water drops passing through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. It can be considered a swarm intelligence optimization algorithm for some parts of the cycle, when a collection of water drops moves through the search space, but it can also be considered an evolutionary algorithm for other parts of the cycle, when information is exchanged and shared. By using the full hydrological water cycle as a conceptual framework, we show that previous water-based algorithms have predominantly used only swarm-like aspects inspired by precipitation and flow. HCA, however, uses all four stages to form a complete water-based approach to solving optimization problems efficiently. We show that for certain problems HCA leads to improved performance and solution quality. In particular, we demonstrate how the condensation stage is utilized to allow information sharing among the particles. This information sharing helps to improve performance and to reduce the number of iterations needed to reach the optimal solution. Our claims for HCA are to be judged not on improved efficiency but on improved conceptual bioinspiration, as well as its contribution to the area of water-based algorithms. However, we also show that HCA is competitive in comparison to several other nature-inspired algorithms, both water-based and non-water-based.
In summary, the paper presents a new overall conceptual framework for water-based algorithms, which applies the hydrological cycle in its entirety to continuous optimization problems as a proof of concept (i.e., it validates the full hydrologically inspired framework). The benchmarked mathematical test functions chosen in this paper are useful for evaluating the performance characteristics of any new approach, especially the effectiveness of its exploration and exploitation processes. Solving these functions (i.e., finding the global-optimal solution) is challenging for most algorithms, even with a few variables, because their artificial landscapes may contain many local optimal solutions. Our minimal criterion for success is that the HCA algorithm's performance is comparable to that of other well-known algorithms, including those that only partially adopt the full HCA framework. We maintain that our results are competitive and that, therefore, water-inspired and non-water-inspired researchers lose nothing by adopting the richer bioinspiration that comes with HCA in its entirety, rather than adopting only parts of the full water cycle. The HCA's performance is evaluated in terms of finding optimal solutions and computational effort (number of iterations). The algorithm's behavior, its convergence rate, and its ability to deal with multimodal functions are also investigated and analyzed. The main aim of this paper is not to prove the superiority of HCA over other algorithms. Instead, the aim is to provide researchers in science, engineering, operations research, and machine learning with a new water-based tool for dealing with continuous variables in convex, unconstrained, and nonlinear problems. Further details and features of the HCA are described in Section 3.
The remainder of this paper is organized as follows. Section 2 discusses various related swarm intelligence techniques used to tackle COPs and distinguishes HCA from related algorithms. Section 3 describes the hydrological cycle in nature and explains the proposed HCA. Section 4 outlines the experimental setup, problem representation, results obtained, and the comparative evaluation conducted with other algorithms. Finally, Section 5 presents concluding remarks and outlines plans for further research.
2 Previous Swarm Intelligence Approaches for Solving Continuous Problems
Many metaheuristic optimization algorithms have been designed, and their performance has been evaluated on benchmark functions. Among these are nature-inspired algorithms, which have received considerable attention. Each algorithm uses a different approach and representation to solve the target problem. It is worth mentioning that our focus is on studies that involve solving unconstrained continuous functions with few variables. If a new algorithm cannot deal with these cases, there is little chance of it dealing with more complex functions.
Ant colony optimization (ACO) is one of the best-known techniques for solving discrete optimization problems and has been extended to deal with COPs in many studies. For example, in [6], the authors divided the function domain into a certain number of regions representing the trial solution space for the ants to explore. They then generated two types of ants (global and local ants) and distributed them to explore these regions. The global ants were responsible for updating the fitness of the regions globally, whereas the local ants did the same locally. The approach incorporated mutation and crossover operations to improve solution quality.
Socha and Dorigo [7] extended the ACO algorithm to tackle continuous-domain problems. The extension considered continuous rather than discrete probability distributions and incorporated the Gaussian probability density function to choose the next point. The pheromone evaporation procedure was modified so that old solutions were removed from the candidate solutions pool and replaced with new and better solutions. A weight associated with each ant solution represented the fitness of the solution. Other ACO-inspired approaches with various implementations that do not follow the original ACO algorithm structure have also been developed to solve COPs [8–11]. In further work, the performance of ACO was evaluated on continuous function optimization [12]. One of the problems with ACO is that reinforcement of paths is the only method for sharing information between the ants, in an indirect way. A heavily pheromone-reinforced path is naturally considered a good path for other ants to follow. No other mechanism exists to allow ants to share information for exploration purposes, except indirectly through the evaporation of pheromone over time. Exploration diminishes over time with ACO, which suits problems where there may be only one global solution. But if the aim is to find a number of alternative solutions, the rate of evaporation has to be strictly controlled to ensure that no single path dominates. Evaporation control can lead to other side effects in the classic ACO paradigm, such as longer convergence time, resulting in the need to add possibly nonintuitive information-sharing mechanisms, such as global updates, for exploration purposes.
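The evaporation-plus-reinforcement dynamic discussed above can be sketched as a simple update rule (a generic illustration of how the evaporation rate rho controls path dominance; actual ACO variants differ in their deposit and evaporation details):

```python
def update_pheromone(tau, chosen, rho, deposit):
    # Generic ACO-style update: every path's pheromone evaporates by a
    # factor (1 - rho), then the path chosen this iteration receives a
    # fresh deposit. With small rho, a repeatedly chosen path comes to
    # dominate; unreinforced paths decay towards zero.
    return [
        (1 - rho) * t + (deposit if i == chosen else 0.0)
        for i, t in enumerate(tau)
    ]
```

Repeatedly reinforcing one path while the others only evaporate quickly concentrates the swarm's exploitation on that path, which is the behavior the evaporation rate must be tuned to control.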
Although strictly speaking they are not a swarm intelligence approach, genetic algorithms (GAs) [13] have been applied to many COPs. A GA uses simple operations, such as crossover and mutation, to manipulate and direct the population and to help escape from local optima. The values of the variables are encoded using binary or real numbers [14]. However, as with other algorithms, GA performance can depend critically on the initial population's values, which are usually random. This was clarified by Maaranen et al. [15], who investigated the importance and effect of the initial population values on the quality of the final solution. In [16], the authors also focused on the choice of the initial population values and modified the algorithm to intensify the search in the most promising area of the solution space. The main problem with standard GAs for optimization continues to be the lack of "long-termism," where fitness for exploitation may need to be secondary to allow for adequate exploration to assess the problem space for alternative solutions. One possible way to deal with this problem is to hybridize the GA with other approaches, as in [17], where a hybrid GA and particle swarm optimization (PSO) is proposed for solving multimodal functions.
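Real-coded crossover and mutation of the kind mentioned above might look as follows (a hypothetical sketch; BLX-α-style blending and Gaussian mutation are common choices in the literature, not necessarily the operators used in the studies cited here):

```python
import random

def blend_crossover(p1, p2, alpha=0.5, rng=random):
    # BLX-alpha style crossover for real-valued chromosomes: each child
    # gene is drawn uniformly from an interval that extends alpha times
    # the parents' gene range beyond both parents.
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child

def gaussian_mutation(x, sigma=0.1, rate=0.2, rng=random):
    # Perturb each gene with probability `rate` by zero-mean Gaussian noise.
    return [xi + rng.gauss(0, sigma) if rng.random() < rate else xi for xi in x]
```

These operators act directly on real-valued vectors, avoiding the binary encode/decode step that binary-coded GAs require for continuous variables.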
James and Russell [18] examined the performance of the PSO algorithm on some continuous optimization functions. Later, researchers such as Chen et al. [19] modified the PSO algorithm to help overcome the problem of premature convergence. Additionally, PSO has been used for global optimization in [20]. PSO has also been hybridized with ACO to improve performance when dealing with high-dimension multimodal functions [21]. Further, a comparative study between GA and PSO on numerical benchmark problems can be found in [22]. Other versions of the PSO algorithm with crossover and mutation operators to improve performance have also been proposed, such as in [23]. Once such GA operators are introduced, the question arises as to what advantages a PSO approach has over a standard evolutionary approach to the same problem.
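The canonical PSO update that these variants build on can be sketched as follows (parameter values are illustrative defaults; the cited modifications alter or extend this basic rule):

```python
import random

def pso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    # Canonical PSO update: inertia (w) plus stochastic attraction towards
    # the particle's personal best (pbest) and the swarm's global best (gbest).
    new_v = [
        w * v + c1 * rng.random() * (pb - x) + c2 * rng.random() * (gb - x)
        for x, v, pb, gb in zip(position, velocity, pbest, gbest)
    ]
    new_x = [x + v for x, v in zip(position, new_v)]
    return new_x, new_v
```

Note that only gbest carries shared information; particles do not exchange information with each other directly, which is the limitation discussed later in this section.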
The bees algorithm (BA) has been used to solve a number of benchmark functions [24]. This algorithm simulates the behavior of swarms of honey bees during food foraging. In BA, scout bees are sent to search for food sources by moving randomly from one place to another. On returning to the hive, they share information with the other bees about the location, distance, and quality of the food source. The algorithm produces good results for several benchmark functions. However, the information sharing is direct and confined to subsets of bees, leading to the possibility of convergence to a local optimum or divergence away from the global optimum.
The grenade explosion method (GSM) is an evolutionary algorithm which has been used to optimize real-valued problems [25]. The GSM is based on the mechanism associated with a grenade explosion and is intended to overcome problems associated with "crowding," where candidate solutions have converged to a local minimum. When a grenade explodes, shrapnel pieces are thrown to destroy objects in the vicinity of the explosion. In the GSM, losses are calculated
Figure 1: Representation of a continuous problem in the IWD algorithm.
for each piece of shrapnel, and the effect of each piece of shrapnel is determined. The highest loss effect represents valuable objects existing in that area. Then another grenade is thrown into the area to cause greater loss, and so on. Overall, the GSM was able to find the global minimum faster than other algorithms in seven out of eight benchmarked tests. However, the algorithm requires the maintenance of a nonintuitive territory radius, preventing shrapnel pieces from coming too close to each other and forcing shrapnel to spread uniformly over the search space. Also required is an explosion range for the shrapnel. These control parameters work well with problem spaces containing many local optima, so that global optima can be found after suitable exploration. However, the relationship between these control parameters and problem spaces of unknown topology is not clear.
2.1 Water-Based Algorithms. Algorithms inspired by water flow have become increasingly common for tackling optimization problems. Two of the best-known examples are intelligent water drops (IWD) and the water cycle algorithm (WCA). These algorithms have been shown to be successful in many applications, including various COPs. The IWD algorithm is a swarm-based algorithm inspired by the ground flow of water in nature and the actions and reactions that occur during that process [26]. In IWD, a water drop has two properties: movement velocity and the amount of soil it carries. These properties are changed and updated for each water drop throughout its movement from one point to another. The velocity of a water drop is affected and determined only by the amount of soil between two points, while the amount of soil it carries is equal to the amount of soil removed from a path [26].
The IWD has also been used to solve COPs. Shah-Hosseini [27] utilized binary variable values in the IWD and created a directed graph of (M × P) nodes, in which M identifies the number of variables and P identifies the precision (the number of bits for each variable), which was assumed to be 32. The graph comprised 2 × (M × P) directed edges connecting the nodes, with each edge having a value of zero or one, as depicted in Figure 1. One of the advantages of this representation is that it is intuitively consistent with the natural idea of water flowing in a specific direction.
The IWD algorithm was also augmented with a mutation-based local search to enhance its solution quality. In this process, an edge is selected at random from a solution and replaced by another edge, which is kept if it improves the fitness value; the process was repeated 100 times for each solution. This mutation is applied to all the generated solutions, and then the soil is updated on the edges belonging to the best solution. However, the variables' values have to be converted from binary to real numbers to be evaluated.
Furthermore, once mutation (but not crossover) is introduced into IWD, the question arises as to what advantages IWD has over standard evolutionary algorithms that use simpler notations for representing continuous numbers. Detailed differences between IWD and our HCA are described in Section 3.9.
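The binary-to-real conversion described above can be sketched as follows (the variable bounds lo and hi are our own illustrative assumptions; Shah-Hosseini's representation fixes only the precision of P = 32 bits per variable):

```python
def decode_binary_solution(bits, num_vars, precision=32, lo=-5.0, hi=5.0):
    # Decode a flat list of M * P bits (one bit per edge chosen by a water
    # drop in the directed graph of Figure 1) into M real values, scaled
    # linearly into the assumed domain [lo, hi].
    assert len(bits) == num_vars * precision
    max_int = 2 ** precision - 1
    values = []
    for m in range(num_vars):
        chunk = bits[m * precision:(m + 1) * precision]
        as_int = int("".join(str(b) for b in chunk), 2)
        values.append(lo + (hi - lo) * as_int / max_int)
    return values
```

This decoding step is the overhead the paragraph above refers to: every fitness evaluation requires converting the M × P binary edge choices back into real numbers.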
Various researchers have applied the WCA to COPs [28, 29]. The WCA is also based on the flow of ground water through rivers and streams to the sea. The algorithm constructs a solution using a population of streams, where each stream is a candidate solution. It initially generates random solutions for the problem by distributing the streams into different positions according to the problem's dimensions. The best stream is considered a sea, and a specific number of streams are considered rivers. Then the streams are made to flow towards the positions of the rivers and sea. The position of the sea indicates the global-best solution, whereas those of the rivers indicate the local-best solution points. In each iteration, the position of each stream may be swapped with any of the river positions or (counterintuitively) the sea position, according to the fitness of that stream. Therefore, if a stream solution is better than that of a river, that stream becomes a river and the corresponding river becomes a stream. The same process occurs between rivers and the sea. Evaporation occurs in the rivers/streams when the distance between a river/stream and the sea is very small (close to zero). Following evaporation, new streams are generated with different positions, in a process that can be described as rain. These processes help the algorithm to escape from local optima.
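The fitness-based role assignment described above (best solution as sea, next best as rivers, the rest as streams) can be sketched as follows (a simplified illustration of the scheme, assuming a minimization problem):

```python
def assign_roles(population, fitness, num_rivers):
    # Rank candidate solutions by fitness (lower is better for
    # minimization): the best becomes the sea, the next `num_rivers`
    # become rivers, and the remainder are streams.
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    sea = population[order[0]]
    rivers = [population[i] for i in order[1:1 + num_rivers]]
    streams = [population[i] for i in order[1 + num_rivers:]]
    return sea, rivers, streams
```

Re-running this assignment after each iteration reproduces the swapping behavior: a stream that becomes fitter than a river takes the river's role, and the fittest solution overall always holds the sea position.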
WCA treats the streams, rivers, and sea as a set of moving points in a multidimensional space and does not consider stream/river formation (movements of water and soils). Therefore, the overall structure of the WCA is quite similar to that of the PSO algorithm. For instance, the PSO algorithm has Gbest and Pbest points that indicate the global and local solutions, with the Gbest point guiding the Pbest points to converge towards it. In a similar manner, the WCA has a number of rivers that represent the local optimal solutions, and a sea position that represents the global-optimal solution. The sea is used as a guide for the streams and rivers. Moreover, in PSO, a particle at the Pbest point updates its position towards the Gbest point. Analogously, the WCA exchanges the position of a river with that of the sea when the fitness of the river is better. The main difference is WCA's evaporation and raining stages, which help the algorithm to escape from local optima.
An improved version of WCA, called the dual-system WCA (DSWCA), has been presented for solving constrained engineering optimization problems [30]. The DSWCA improvements involved adding two cycles (outer and inner). The outer cycle is in accordance with the WCA and is designed to perform exploration. The inner cycle, which is based on the ocean cycle, was designed as an exploitation process. A confluence-between-rivers process is also included to improve convergence speed. These improvements aim to help the original WCA avoid falling into local optimal solutions, and the DSWCA was able to provide better results than WCA. In a further extension of WCA, Heidari et al. proposed a chaotic WCA (CWCA) that incorporates thirteen chaotic patterns into the WCA movement process to improve WCA's performance and deal with the premature convergence problem [31]. The experimental results demonstrated improvement in controlling the premature convergence problem. However, WCA and its extensions introduce many additional features, many of which are not compatible with nature-inspiration, to improve performance and make WCA competitive with non-water-based algorithms. Detailed differences between WCA and our HCA are described in Section 3.9.
The review of previous approaches shows that nature-inspired swarm approaches tend to introduce evolutionary concepts of mutation and crossover for sharing information, to adopt nonplausible global data structures to prevent convergence to local optima, or to add more centralized control parameters to preserve the balance between exploration and exploitation in increasingly difficult problem domains, thereby compromising self-organization. When such approaches are modified to deal with COPs, complex and unnatural representations of numbers may also be required, adding to the overheads of the algorithm and leading to what may appear to be "forced" and unnatural ways of dealing with the complexities of a continuous problem.
In swarm algorithms, a number of cooperative homogeneous entities explore and interact with the environment to achieve a goal, which is usually to find the global-optimal solution to a problem through exploration and exploitation. Cooperation can be accomplished by some form of communication that allows the swarm members to share exploration and exploitation information to produce high-quality solutions [32, 33]. Exploration aims to acquire as much information as possible about promising solutions, while exploitation aims to improve such promising solutions. Controlling the balance between exploration and exploitation is usually considered critical when searching for more refined as well as more diverse solutions. Also important is the amount of exploration and exploitation information to be shared, and when.
Information sharing in nature-inspired algorithms usually includes metrics for evaluating the usefulness of information and mechanisms for how it should be shared. In particular, these metrics and mechanisms should not contain more knowledge or require more intelligence than can be plausibly assigned to the swarm entities, where the emphasis is on the emergence of complex behavior from relatively simple entities interacting in nonintelligent ways with each other. Information sharing can be done either randomly or nonrandomly among entities. Different approaches are used to share information, such as direct or indirect interaction. Two sharing models are one-to-one, in which explicit interaction occurs between entities, and many-to-many (broadcasting), in which interaction occurs indirectly between the entities through the environment [34].
Usually, either direct or indirect interaction between entities is employed in an algorithm for information sharing, and the use of both types in the same algorithm is often neglected. For instance, the ACO algorithm uses indirect communication for information sharing, where ants lay amounts of pheromone on paths depending on the usefulness of what they have explored. The more promising the path, the more pheromone is added, leading other ants to exploit this path further. Pheromone is not directed at any other ant in particular. In the artificial bee colony (ABC) algorithm, bees share information directly (specifically, the direction and distance to flowers) in a kind of waggle dance [35]. The information received by other bees depends on which bee they observe, and not on the information gathered from all bees. In a GA, the information exchange occurs directly, in the form of crossover operations between selected members (chromosomes and genes) of the population. The quality of information shared depends on the members chosen, and there is no overall store of information available to all members. In PSO, on the other hand, the particles have their own information as well as access to global information to guide their exploitation. This can lead to competitive and cooperative optimization techniques [36]. However, no attempt is made in standard PSO to share information possessed by individual particles. This lack of individual-to-individual communication can lead to early convergence of the entire population of particles to a nonoptimal solution. Selective information sharing between individual particles will not always avoid this narrow exploitation problem but, if undertaken properly, could allow the particles to find alternative and more diverse solution spaces where the global optimum lies.
As can be seen from the above discussion, the distinctions between communication (direct and indirect) and information sharing (random or nonrandom) can become blurred. In HCA, we utilize both direct and indirect communication to share information among selected members of the whole population. Direct communication occurs in the condensation stage of the water cycle, where some water drops collide with each other when they evaporate into the cloud. On the other hand, indirect communication occurs between the water drops and the environment via the soil and path depth. Utilizing information sharing helps to diversify the search space and improve the overall performance through better exploitation. In particular, we show how HCA with direct and indirect communication suits continuous problems, where individual particles share useful information directly when searching a continuous space.
Furthermore, in some algorithms, indirect communication is used not just to find a possible solution but also to reinforce an already-found promising solution. Such reinforcement may cause the algorithm to get stuck in the same solution, which is known as stagnation behavior [37]. Such reinforcement can also cause premature convergence in some problems. An important feature used in HCA to avoid this problem is path depth. The depths of paths are constantly changing, and deep paths have less chance of being chosen than shallow paths. More details about this feature are provided in Section 3.3.
In summary, this paper aims to introduce a novel nature-inspired approach for dealing with COPs, one which also introduces a continuous number representation that is natural for the algorithm. The cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through direct and indirect communication between particles, feedback from the environment, and internal self-evaluation. Finally, our HCA addresses some of the weaknesses of other similar algorithms: HCA is inspired by the full water cycle, not parts of it, as exemplified by IWD and WCA.
3 The Hydrological Cycle Algorithm
The water cycle, also known as the hydrologic cycle, describes the continuous movement of water in nature [38]. It is renewable because water particles are constantly circulating from the land to the sky and vice versa. Therefore, the amount of water remains constant. Water drops move from one place to another by natural physical processes, such as evaporation, precipitation, condensation, and runoff. In these processes, the water passes through different states.
3.1 Inspiration. The water cycle starts when surface water gets heated by the sun, causing water from different sources to change into vapor and evaporate into the air. Then the water vapor cools and condenses as it rises, becoming drops. These drops combine and collect with each other and eventually become so saturated and dense that they fall back because of the force of gravity, in a process called precipitation. The resulting water flows on the surface of the Earth into ponds, lakes, or oceans, where it again evaporates back into the atmosphere. Then the entire cycle is repeated.
In the flow (runoff) stage, the water drops start moving from one location to another based on Earth's gravity and the ground topography. When the water is scattered, it chooses a path with less soil and fewer obstacles. Assuming no obstacles exist, water drops will take the shortest path towards the center of the Earth (i.e., the oceans), owing to the force of gravity. Water drops have force because of this gravity, and this force causes changes in motion depending on the topology. As the water flows in rivers and streams, it picks up sediments and transports them away from their original location. These processes are known as erosion and deposition. They help to create the topography of the ground by making new paths with more soil removal, or the fading of others as they become less likely to be chosen owing to too much soil deposition. These processes are highly dependent on the water velocity. For instance, a river is continually picking up and dropping off sediments from one point to another. When the river flow is fast, soil particles are picked up, and the opposite happens when the river flow is slow. Moreover, there is a strong relationship between the water velocity and the suspension load (the amount of dissolved soil in the water) that it currently carries. The water velocity is higher when only a small amount of soil is being carried and lower when larger amounts are carried.
In addition, the term "flow rate" or "discharge" can be defined as the volume of water in a river passing a defined point over a specific period. Mathematically, the flow rate is calculated based on the following equation [39]:

\[ Q = V \times A \Longrightarrow Q = V \times (W \times D) \quad (3) \]
where Q is the flow rate, A is the cross-sectional area, V is the velocity, W is the width, and D is the depth. The cross-sectional area is calculated by multiplying the depth by the width of the stream. The velocity of the flowing water is defined as the discharge divided by the cross-sectional area; therefore, there is an inverse relationship between the velocity and the cross-sectional area [40]. The velocity increases when the cross-sectional area is small, and vice versa. If we assume that the river has a constant flow rate (velocity × cross-sectional area = constant) and that the width of the river is fixed, then we can conclude that the velocity is inversely proportional to the depth of the river, in which case the velocity decreases in deep water and vice versa. Therefore, the amount of soil deposited increases as the velocity decreases. This explains why less soil is taken from deep water and more soil is taken from shallow water. A deep river will have water moving more slowly than a shallow river.
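The inverse relation between velocity and cross-sectional area in Eq. (3) can be illustrated with a short sketch (a hypothetical Python example; the function names, units, and values are ours, not the paper's):

```python
def flow_rate(velocity, width, depth):
    """Discharge Q = V * (W * D), as in Eq. (3)."""
    return velocity * (width * depth)

def velocity_from_rate(q, width, depth):
    """For a fixed discharge Q, velocity is Q / (W * D):
    it falls as the cross-sectional area grows."""
    return q / (width * depth)

q = flow_rate(2.0, 5.0, 1.0)              # 2 m/s through a 5 m x 1 m section
v_deep = velocity_from_rate(q, 5.0, 2.0)  # same Q, doubled depth -> velocity halves
```

With the width fixed and the discharge held constant, doubling the depth halves the velocity, matching the deep-river observation above.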
In addition to river depth and the force of gravity, there are other factors that affect the flow velocity of water drops and cause them to accelerate or decelerate. These factors include obstructions, the amount of soil existing in the path, the topography of the land, variations in the channel cross-sectional area, and the amount of dissolved soil being carried (the suspension load).
Water evaporation increases with temperature, with maximum evaporation at 100°C (boiling point). Conversely, condensation increases as the water vapor cools towards 0°C. The evaporation and condensation stages play important roles in the water cycle. Evaporation is necessary to ensure that the system remains in balance; in addition, the evaporation process helps to decrease the temperature and keeps the air moist. Condensation is a very important process as it completes the water cycle. When water vapor condenses, it releases into the environment the same amount of thermal energy that was needed to make it a vapor. However, in nature it is difficult to measure the precise evaporation rate owing to the existence of different sources of evaporation; hence, estimation techniques are required [38].
The condensation process starts when the water vapor rises into the sky; then, as the temperature decreases in the higher layers of the atmosphere, the cooler particles have a greater chance to condense [41]. Figure 2(a) illustrates the situation where there are no attractions (or connections) between the particles at high temperature. As the temperature decreases, the particles collide and agglomerate to form small clusters, leading to clouds, as shown in Figure 2(b).
An interesting phenomenon, in which some of the large water drops may eliminate other smaller water drops, occurs during the condensation process, as some of the water drops (known as collectors) fall from a higher layer to a lower layer in the atmosphere (in the cloud). Figure 3 depicts the action of a water drop falling and colliding with smaller
Journal of Optimization 7
Figure 2: Illustration of the effect of temperature on water drops. (a) Hot water; (b) cold water.
Figure 3: A collector water drop in action.
water drops. Consequently, the water drop grows as a result of coalescence with the other water drops [42].
Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground. Of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect water drops flowing through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of other key concepts taken from the full hydrological process.
Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: flow (runoff), evaporation, condensation, and precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

\[ \mathrm{HCA} = \{\mathrm{WD}_1, \mathrm{WD}_2, \mathrm{WD}_3, \ldots, \mathrm{WD}_n\}, \quad n \ge 1 \quad (4) \]
The characteristics associated with each water drop are as follows:

(i) V^WD: the velocity of the water drop.
(ii) Soil^WD: the amount of soil the water drop carries.
(iii) ψ^WD: the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph G = (N, E), where N is the set of nodes and E is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph (G) according to various rules via the edges (E), searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.
The initial amount of soil is the same for all the edges. The depth measures the distance from the water surface to the riverbed, and it can be calculated by dividing the length of the edge by the amount of soil. Therefore, the depth varies and depends on the amount of soil that exists on the path and on its length. The depth of a path increases when more soil is removed over the same length. A deep path can be interpreted as either the path being lengthy or there being less soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.
The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution for the problem by continuously moving between the nodes. Each iteration consists of specific steps (which will be explained below). A cycle, on the other hand, comprises a varying number of iterations. A cycle is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
3.3. Flow Stage (Runoff). This stage represents the construction stage, in which the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions, constructing various solutions. Each water drop constructs a
Figure 4: Depth increases as length increases for the same amount of soil (Soil = 500: Length = 10 gives Depth = 10/500 = 0.02, while Length = 20 gives Depth = 20/500 = 0.04).

Figure 5: Depth increases when the amount of soil is reduced over the same length (Length = 10: Soil = 1000 gives Depth = 10/1000 = 0.01, while Soil = 500 gives Depth = 10/500 = 0.02).
solution incrementally by moving from one point to another. Whenever a water drop wants to move towards a new node, it has to choose between a number of nodes (various branches). It calculates the probability of all the unvisited nodes and chooses the node with the highest probability, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit. In the HCA, the node with the highest probability is selected. The probability of a node is calculated using

\[ P_i^{\mathrm{WD}}(j) = \frac{f(\mathrm{Soil}(i,j))^2 \times g(\mathrm{Depth}(i,j))}{\sum_{k \notin V_c(\mathrm{WD})} \left( f(\mathrm{Soil}(i,k))^2 \times g(\mathrm{Depth}(i,k)) \right)} \quad (5) \]
where P_i^WD(j) is the probability of choosing node j from node i, and f(Soil(i, j)) is the inverse of the soil between i and j, calculated using

\[ f(\mathrm{Soil}(i,j)) = \frac{1}{\varepsilon + \mathrm{Soil}(i,j)} \quad (6) \]
Here ε = 0.01 is a small value used to prevent division by zero. The second factor of the transition rule is the inverse of the depth, which is calculated as

\[ g(\mathrm{Depth}(i,j)) = \frac{1}{\mathrm{Depth}(i,j)} \quad (7) \]
Depth(i, j) is the depth between nodes i and j, and it is calculated by dividing the length of the path by the amount of soil. This can be expressed as follows:

\[ \mathrm{Depth}(i,j) = \frac{\mathrm{Length}(i,j)}{\mathrm{Soil}(i,j)} \quad (8) \]
The depth value can be normalized to be in the range [1–100], as it might otherwise be very small. The depth of a path is constantly changing because it is influenced by the amount of existing
Figure 6: The relationship between soil and depth over the same length.
soil: the depth is inversely proportional to the amount of soil existing in a path of fixed length. Water drops will choose paths with less depth over other paths. This enables exploration of new paths and diversifies the generated solutions. Consider the following scenario (depicted in Figure 6), where water drops have to choose between node A and node B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents soil amount); consequently, the depth values of the two edges are the same. After some water drops choose node A, edge (S-A) has less soil and is deeper. According to the new state transition rule, the next water drops will be forced to choose node B, because edge (S-B) will be shallower than (S-A). This technique provides a way to avoid stagnation and to explore the search space efficiently.
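The state transition rule of Eqs. (5)-(8) can be sketched in Python. This is an illustration under our own assumptions (the `edges` data layout and names are ours, not the authors' implementation):

```python
EPSILON = 0.01  # the epsilon of Eq. (6), preventing division by zero

def edge_depth(length, soil):
    # Eq. (8): depth = path length / soil amount
    return length / soil

def next_node(edges, visited):
    """Score every unvisited candidate j by
    f(soil)^2 * g(depth) = (1 / (eps + soil))^2 * (1 / depth),
    normalise the scores into probabilities (Eq. (5)), and
    return the most probable node together with the probabilities.
    `edges` maps candidate node -> (soil, length)."""
    scores = {
        j: (1.0 / (EPSILON + soil)) ** 2 / edge_depth(length, soil)
        for j, (soil, length) in edges.items()
        if j not in visited
    }
    total = sum(scores.values())
    probs = {j: s / total for j, s in scores.items()}
    return max(probs, key=probs.get), probs
```

With equal soil (500) on both candidate edges but lengths 10 and 20, the shorter edge is shallower (Depth = 0.02 versus 0.04), so its probability is higher and it is selected.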
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity (V). The velocity of a water drop might increase or decrease while it is moving, based on the factors mentioned above (the amount of soil existing on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time (t + 1) can be calculated using

\[ V_{t+1}^{\mathrm{WD}} = \left[ K \times V_t^{\mathrm{WD}} \right] + \alpha \left( \frac{V^{\mathrm{WD}}}{\mathrm{Soil}(i,j)} \right) + \sqrt{\frac{V^{\mathrm{WD}}}{\mathrm{Soil}^{\mathrm{WD}}}} + \left( \frac{100}{\psi^{\mathrm{WD}}} \right) + \sqrt{\frac{V^{\mathrm{WD}}}{\mathrm{Depth}(i,j)}} \quad (9) \]
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where V_t^WD is the current water drop velocity and K is a uniformly distributed random number in [0, 1] that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop velocity to decrease. Owing to difficulties in measuring the exact effect of these factors, some randomness is incorporated into the velocity.
The second term of the expression in (9) reflects the relationship between the amount of soil existing on a path and the velocity of a water drop. The velocity of the water drops increases when they traverse paths with less soil. Alpha (α) is a relative influence coefficient that emphasizes this term in the velocity update equation. The third term of the expression represents the relationship between the amount of soil that a water drop is currently carrying and its velocity. Water drop velocity increases with a decreasing amount of carried soil. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term of the expression refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term of the expression indicates that a water drop will be slower in a deep path than in a shallow path.
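The five terms above can be sketched as a single update function. This is a hedged illustration of Eq. (9): the argument names and the default value of alpha are our assumptions:

```python
import random

def update_velocity(v_t, edge_soil, edge_depth, carrying_soil, psi, alpha=1.0):
    """Sketch of Eq. (9). K is the uniform random roughness
    coefficient in [0, 1]; `psi` is the drop's solution quality."""
    k = random.uniform(0.0, 1.0)
    return (k * v_t                          # roughness term
            + alpha * (v_t / edge_soil)      # less soil on the edge -> faster
            + (v_t / carrying_soil) ** 0.5   # drops carrying less soil speed up
            + 100.0 / psi                    # fitness term of Eq. (9)
            + (v_t / edge_depth) ** 0.5)     # deeper edges slow the drop
```

Seeding the generator identically and varying only the edge depth shows the last term at work: the shallower edge yields the higher new velocity.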
3.3.2. Soil Update. Next, the amount of soil existing on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by

\[ \mathrm{Soil}(i,j) = \begin{cases} \left[ PN \times \mathrm{Soil}(i,j) \right] - \Delta\mathrm{Soil}(i,j) - \sqrt{\dfrac{1}{\mathrm{Depth}(i,j)}}, & \text{if } V^{\mathrm{WD}} \ge \mathrm{Avg}(\text{all } V^{\mathrm{WD}}\text{s}) \text{ (erosion)} \\ \left[ PN \times \mathrm{Soil}(i,j) \right] + \Delta\mathrm{Soil}(i,j) + \sqrt{\dfrac{1}{\mathrm{Depth}(i,j)}}, & \text{otherwise (deposition)} \end{cases} \quad (10) \]
PN represents a coefficient (i.e., a sediment transport rate, or gradation coefficient) that may affect the reduction in the amount of soil. Soil can be removed only if the current water drop velocity is greater than the average velocity of all the other water drops; otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, usually from the fast to the slow parts of a path. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in locally optimal solutions. The amount of soil existing between node i and node j is changed using

\[ \Delta\mathrm{Soil}(i,j) = \frac{1}{\mathrm{time}_{i,j}^{\mathrm{WD}}} \quad (11) \]

such that

\[ \mathrm{time}_{i,j}^{\mathrm{WD}} = \frac{\mathrm{Distance}(i,j)}{V_{t+1}^{\mathrm{WD}}} \quad (12) \]
Equation (11) shows that the amount of soil being removed is inversely proportional to time. Since time is equal to distance divided by velocity, the amount of soil removed is proportional to velocity and inversely proportional to path length. Furthermore, the amount of soil being removed or added is inversely proportional to the depth of the path. This is because shallow soils are easier to remove than deep soils; therefore, the amount of soil removed decreases as the path depth increases. This factor facilitates the exploration of new paths, because less soil is removed from deep paths, which have already been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path needs to be updated, using (8), whenever the amount of soil existing on the path changes.
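Equations (10)-(12) can be combined into one sketch. The value of the sediment transport coefficient `pn` and the argument names are assumptions for illustration:

```python
def update_soil(edge_soil, edge_depth, edge_length, v_new, avg_velocity, pn=0.99):
    """Sketch of Eqs. (10)-(12): erosion when the drop is at least as
    fast as the average drop, deposition otherwise."""
    time_ij = edge_length / v_new         # Eq. (12): time = distance / velocity
    delta_soil = 1.0 / time_ij            # Eq. (11)
    depth_term = (1.0 / edge_depth) ** 0.5
    if v_new >= avg_velocity:             # erosion branch of Eq. (10)
        return pn * edge_soil - delta_soil - depth_term
    return pn * edge_soil + delta_soil + depth_term  # deposition branch
```

A fast drop (velocity above average) erodes the edge, ending with less than `pn * edge_soil`; a slow drop deposits soil onto it.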
3.3.3. Soil Transportation Update. In the HCA, we consider the fact that the amount of soil a water drop carries reflects its solution quality. This is done by associating the quality of the water drop's solution with the amount of soil it carries. Figure 7 illustrates that a water drop carrying more soil has a better solution.
To simulate this association, the amount of soil that the water drop is carrying is updated with a portion of the soil removed from the path, divided by the last solution quality found by that water drop. Therefore, a water drop with a better solution will carry more soil. This can be expressed as follows:

\[ \mathrm{Soil}^{\mathrm{WD}} = \mathrm{Soil}^{\mathrm{WD}} + \frac{\Delta\mathrm{Soil}(i,j)}{\psi^{\mathrm{WD}}} \quad (13) \]
The idea behind this step is to let a water drop preserve its previous fitness value. Later, the carried soil value is used in updating the velocity of the water drops.
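Equation (13) is a one-line update; a minimal sketch (variable names are ours):

```python
def update_carrying_soil(soil_wd, delta_soil, psi):
    # Eq. (13): the drop keeps a share of the soil removed from the path,
    # scaled by its last solution quality psi
    return soil_wd + delta_soil / psi
```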
3.3.4. Temperature Update. The final step of this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

\[ \mathrm{Temp}(t+1) = \mathrm{Temp}(t) + \Delta\mathrm{Temp} \quad (14) \]

where

\[ \Delta\mathrm{Temp} = \begin{cases} \beta \times \left( \dfrac{\mathrm{Temp}(t)}{\Delta D} \right), & \Delta D > 0 \\ \dfrac{\mathrm{Temp}(t)}{10}, & \text{otherwise} \end{cases} \quad (15) \]
Figure 7: Relationship between the amount of carried soil and solution quality.
and where the coefficient β is determined based on the problem. The difference ΔD is calculated based on

\[ \Delta D = \mathrm{MaxValue} - \mathrm{MinValue} \quad (16) \]

such that

\[ \begin{aligned} \mathrm{MaxValue} &= \max \left[ \text{Solution Quality (WDs)} \right] \\ \mathrm{MinValue} &= \min \left[ \text{Solution Quality (WDs)} \right] \end{aligned} \quad (17) \]
According to (16), the increase in temperature is affected by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and, as a result, the temperature will increase more. In other words, there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a number of times until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
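Equations (14)-(17) can be sketched together. `beta` is problem-dependent, so the default value here is an assumption:

```python
def update_temperature(temp, qualities, beta=0.5):
    """Sketch of Eqs. (14)-(17): `qualities` holds the solution quality
    of every water drop in the current iteration. A small spread between
    best and worst quality raises the temperature faster."""
    delta_d = max(qualities) - min(qualities)  # Eqs. (16)-(17)
    if delta_d > 0:
        delta_temp = beta * (temp / delta_d)   # Eq. (15), first case
    else:
        delta_temp = temp / 10.0               # Eq. (15), second case
    return temp + delta_temp                   # Eq. (14)
```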
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

\[ \mathrm{ER} = \mathrm{Random}(1, N) \quad (18) \]
where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
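A sketch of the evaporation step under our own assumptions (here higher quality values are treated as better; for minimisation problems the weights would be inverted):

```python
import random

def evaporate(qualities):
    """Eq. (18) picks how many drops evaporate; roulette-wheel selection
    (implemented via random.choices, which samples by weight) then picks
    which drops do, weighted by their solution quality."""
    n = len(qualities)
    er = random.randint(1, n)  # Eq. (18): evaporation rate
    return random.choices(range(n), weights=qualities, k=er)
```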
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and either combine or bounce off. These operations are useful as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior of all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In the HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been proven to improve solution quality significantly when applied in other algorithms [37].
The condensation stage is a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that it will reduce the number of iterations needed and help to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (the number of evaporated water drops):

\[ \mathrm{EW} = \{\mathrm{WD}_1, \mathrm{WD}_2, \ldots, \mathrm{WD}_n\} \quad (19) \]
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, which operation takes place depends on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process occurs: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

\[ \mathrm{OP}(\mathrm{WD}_1, \mathrm{WD}_2) = \begin{cases} \mathrm{Bounce}(\mathrm{WD}_1, \mathrm{WD}_2), & \text{Similarity} < 50\% \\ \mathrm{Merge}(\mathrm{WD}_1, \mathrm{WD}_2), & \text{Similarity} \ge 50\% \end{cases} \quad (20) \]
We use the Hamming distance [43], which counts the number of positions at which two equal-length sequences differ, to obtain a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

\[ \mathrm{WD}_1^{\mathrm{Vel}} = \mathrm{WD}_1^{\mathrm{Vel}} + \mathrm{WD}_2^{\mathrm{Vel}} \quad (21) \]
On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and, at the flow stage, helps them to choose the node with the best contribution among all the possible nodes. The information consists of the weight of each node. The weight measures a node's occurrence within a solution relative to the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point. With this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

\[ \mathrm{Weight}(\mathrm{node}) = \frac{\mathrm{Node}_{\mathrm{occurrence}}}{\mathrm{WD}_{\mathrm{sol}}} \quad (22) \]
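The condensation operations above can be sketched as follows. Representing a drop as a dict with `solution` and `velocity` keys is our assumption, not the authors' data structure:

```python
def similarity(sol_a, sol_b):
    """Fraction of matching positions between two equal-length
    solutions, i.e. 1 minus the normalised Hamming distance [43]."""
    matches = sum(a == b for a, b in zip(sol_a, sol_b))
    return matches / len(sol_a)

def collide(drop_a, drop_b):
    """Sketch of Eqs. (20)-(21): merge at >= 50% similarity (the
    collector absorbs the other drop's velocity), otherwise bounce."""
    if similarity(drop_a["solution"], drop_b["solution"]) >= 0.5:
        drop_a["velocity"] += drop_b["velocity"]  # Eq. (21)
        return "merge"
    return "bounce"

def node_weight(occurrence, solution_quality):
    # Eq. (22): a node's occurrence over the drop's solution quality
    return occurrence / solution_quality
```

For the paper's example pair, 12468 and 13458 differ in two of five positions, giving a similarity of 0.6 and therefore a merge.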
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. This lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid becoming trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

\[ \mathrm{Soil}(i,j) = 0.9 \times \mathrm{Soil}(i,j), \quad \forall (i,j) \in \mathrm{Best}_{\mathrm{WD}} \quad (23) \]

The idea behind this is to favor these edges over the other edges in the next cycle.
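The reinforcement step of Eq. (23) is a simple scaling of the best solution's edges. A minimal sketch, assuming soil is stored in a dict keyed by (i, j) edge pairs:

```python
def reinforce_best(soil, best_edges):
    """Eq. (23): cut the soil on every edge of the best (collector)
    solution by 10%, making those edges more attractive next cycle."""
    for edge in best_edges:
        soil[edge] *= 0.9
    return soil
```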
3.7. Summary of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:
(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, reducing the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solutions obtained in every iteration.
(2) The flow of the water drops is controlled by path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This supports competition between water drops and prevents one drop from sweeping aside the rest, because a high-velocity water
HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
        Cycle = Cycle + 1
        Flow stage:
        Repeat
            (a) Iteration = Iteration + 1
            (b) For each water drop
                (1) Choose next node (Eqs. (5)-(8))
                (2) Update velocity (Eq. (9))
                (3) Update soil and depth (Eqs. (10)-(11))
                (4) Update carrying soil (Eq. (13))
            (c) End for
            (d) Calculate the solution fitness
            (e) Update local optimal solution
            (f) Update temperature (Eqs. (14)-(17))
        Until (Temperature = Evaporation temperature)
        Evaporation (WDs)
        Condensation (WDs)
        Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.
(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).
(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique, which helps to perform exploitation of the good solutions. Further, this stage can be used to improve the quality of the obtained solutions and thereby reduce the number of iterations.
(6) The evaporation process is a selection technique that helps the algorithm escape from local optima. Evaporation happens when there is no improvement in the quality of the obtained solutions.
(7) The precipitation stage is used to reinitialize variables and redistribute the WDs in the search space to generate new solutions.
Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help to strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify the evaporation rate (Eq. (18))
    Select WDs to evaporate
End

Condensation procedure (WDs)
    Information exchange (WDs) (Eqs. (20)-(22))
    Identify the collector (WD)
    Update the global solution
    Update the temperature
End

Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (edges belonging to the best WD) (Eq. (23))
    Generate a new WD population
    Distribute the WDs randomly
End

Pseudocode 2: The evaporation, condensation, and precipitation pseudocode.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms, such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of the water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea, according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry/drop soil, changing the topography of the ground by making some paths deeper or fading others. This represents the erosion and deposition processes. In WCA, no consideration is made of soil removal from the paths, which is considered a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value. The temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio determined by solution quality. Another major difference is that no consideration is given to the condensation stage in WCA, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information-sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and solution construction differ in the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps to create a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, in HCA three critical stages (i.e., evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach to continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer, there are ten nodes, labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the number selected from the first layer being multiplied by 0.1, the number selected from the second layer being multiplied by 0.01, and so on. This can be done easily using

\[ X_{\mathrm{value}} = \sum_{L=1}^{m} n \times 10^{-L} \quad (24) \]
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvement in the last set of iterations
Evaporation rate | N/A | Occurs if the distance between a river and the sea is less than a threshold | A random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node chosen in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
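The decoding of Eq. (24) can be sketched directly (the function name is ours):

```python
def decode(digits):
    """Eq. (24): the digit n chosen at layer L contributes n * 10**(-L),
    so a path of node labels becomes a value in [0, 1)."""
    return sum(n * 10.0 ** -(layer + 1) for layer, n in enumerate(digits))
```

For the node path 2, 7, 5, 6, 2, 4 given above, the decoded value is 0.275624.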
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for a variable. In some functions all the variables may have the same domain range, whereas in others the variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

V_value = (UB − LB) × Value + LB   (25)
where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [−10, 10], then the real value will be equal to 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be identified. This precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than is possible by simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
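The scaling in equation (25) can be sketched as follows (the function name is ours, introduced for illustration only):

```python
# Sketch of equation (25): scale a decoded value in [0, 1) onto [LB, UB].
def scale_to_domain(value, lb, ub):
    return (ub - lb) * value + lb

# The example from the text: 0.658752 on the domain [-10, 10] maps to 3.17504.
real_value = scale_to_domain(0.658752, -10.0, 10.0)
```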
Figure 8: Graph representation of continuous variables.

4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the highest-probability nodes. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

| Parameter | Value |
|---|---|
| Number of water drops | 10 |
| Maximum number of iterations | 500/1000 |
| Initial soil on each edge | 10000 |
| Initial velocity | 100 |
| Initial depth | Edge length / soil on that edge |
| Initial carrying soil | 1 |
| Velocity updating | α = 2 |
| Soil updating | P_N = 0.99 |
| Initial temperature | 50; β = 10 |
| Maximum temperature | 100 |
water drop solution. Each water drop generates a solution for all the variables of the function. Therefore, the water drop solution is composed of a set of visited nodes, determined by the number of variables and the precision of each variable.
            | WD_1: s, 1, 2, ⋯, m |
            | WD_2: s, 1, 2, ⋯, m |
Solutions = | WD_i: s, 1, 2, ⋯, m |   (26)
            |          ⋯          |
            | WD_k: s, 1, 2, ⋯, m |

where each row lists the N × P nodes visited by one water drop (N variables, each with P precision digits).
An initial solution is generated randomly for each water drop based on the function dimensionality. Then these solutions are evaluated and the best combination is selected. The selected solution is favoured with less soil on the edges belonging to this solution. Subsequently, the water drops start flowing and generating solutions.
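A highly simplified sketch of this descent follows (our illustration, not the paper's implementation: the real HCA flow-stage probability also involves path depth and velocity, whereas here a plain inverse-soil roulette wheel stands in for it):

```python
import random

# Simplified sketch: a water drop picks one node (digit 0-9) per layer,
# favouring edges with less soil, i.e. edges on previously favoured paths.
def generate_solution(num_layers, soil, rng=None):
    """soil[layer][node] is the soil on the edge leading into `node`."""
    rng = rng or random.Random(0)
    path = []
    for layer in range(num_layers):
        weights = [1.0 / soil[layer][node] for node in range(10)]
        r = rng.random() * sum(weights)
        cumulative = 0.0
        for node, w in enumerate(weights):
            cumulative += w
            if r <= cumulative:
                path.append(node)
                break
    return path

uniform_soil = [[10000.0] * 10 for _ in range(6)]  # initial soil as in Table 2
path = generate_solution(6, uniform_soil)          # one node chosen per layer
```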
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collisions at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

V_new = 1 − (β × V_old)   (27)

where β is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
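Equation (27) can be sketched directly (function name ours, for illustration):

```python
import random

# Sketch of equation (27): V_new = 1 - (beta * V_old), with beta drawn
# uniformly from [0, 1]. Large values tend to flip to small ones.
def mutate(v_old, rng=None):
    beta = (rng or random).random()
    return 1.0 - beta * v_old
```

For V_old in [0, 1], the result stays in [1 − V_old, 1], so a value near 1 can be pushed towards 0 and a value near 0 is pushed towards 1.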
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation; hence, taking the depth to equal the distance over the soil (see (8)) would not affect the outcome as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and types; in this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum of 500 iterations for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, together with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested and was able to find the global minimum within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
Success rate = (Number of successful runs / Total number of runs) × 100   (28)
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The highlighted values in the "best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
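The success-rate computation of equation (28), with the 0.001 acceptance tolerance described above, can be sketched as:

```python
# Sketch of equation (28): a run is successful when the obtained value is
# within `tol` of the known optimum.
def success_rate(results, optimum, tol=1e-3):
    successes = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * successes / len(results)

# Hypothetical run values: 3 of 4 lie within 0.001 of the optimum 0 -> 75.0
rate = success_rate([0.0004, 0.02, 0.0001, 0.0009], 0.0)
```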
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst and average values, the best values, and the average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics.

| Number | Function name | Lower bound | Upper bound | D | f(x*) | Type |
|---|---|---|---|---|---|---|
| (1) | Ackley | −32.768 | 32.768 | 2 | 0 | Many local minima |
| (2) | Cross-In-Tray | −10 | 10 | 2 | −2.06261 | Many local minima |
| (3) | Drop-Wave | −5.12 | 5.12 | 2 | −1 | Many local minima |
| (4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | −0.86901 | Many local minima |
| (5) | Griewank | −600 | 600 | 2 | 0 | Many local minima |
| (6) | Holder Table | −10 | 10 | 2 | −19.2085 | Many local minima |
| (7) | Levy | −10 | 10 | 2 | 0 | Many local minima |
| (8) | Levy N.13 | −10 | 10 | 2 | 0 | Many local minima |
| (9) | Rastrigin | −5.12 | 5.12 | 2 | 0 | Many local minima |
| (10) | Schaffer N.2 | −100 | 100 | 2 | 0 | Many local minima |
| (11) | Schaffer N.4 | −100 | 100 | 2 | 0.292579 | Many local minima |
| (12) | Schwefel | −500 | 500 | 2 | 0 | Many local minima |
| (13) | Shubert | −5.12 | 5.12 | 2 | −186.7309 | Many local minima |
| (14) | Bohachevsky | −100 | 100 | 2 | 0 | Bowl-shaped |
| (15) | Rotated Hyper-Ellipsoid | −65.536 | 65.536 | 2 | 0 | Bowl-shaped |
| (16) | Sphere (Hyper) | −5.12 | 5.12 | 2 | 0 | Bowl-shaped |
| (17) | Sum Squares | −10 | 10 | 2 | 0 | Bowl-shaped |
| (18) | Booth | −10 | 10 | 2 | 0 | Plate-shaped |
| (19) | Matyas | −10 | 10 | 2 | 0 | Plate-shaped |
| (20) | McCormick | [−1.5, 4] | [−3, 4] | 2 | −1.9133 | Plate-shaped |
| (21) | Zakharov | −5 | 10 | 2 | 0 | Plate-shaped |
| (22) | Three-Hump Camel | −5 | 5 | 2 | 0 | Valley-shaped |
| (23) | Six-Hump Camel | [−3, 3] | [−2, 2] | 2 | −1.0316 | Valley-shaped |
| (24) | Dixon-Price | −10 | 10 | 2 | 0 | Valley-shaped |
| (25) | Rosenbrock | −5 | 10 | 2 | 0 | Valley-shaped |
| (26) | Shekel's Foxholes (De Jong N.5) | −65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops |
| (27) | Easom | −100 | 100 | 2 | −1 | Steep ridges/drops |
| (28) | Michalewicz | 0 | 3.14 | 2 | −1.8013 | Steep ridges/drops |
| (29) | Beale | −4.5 | 4.5 | 2 | 0 | Other |
| (30) | Branin | [−5, 10] | [0, 15] | 2 | 0.39788 | Other |
| (31) | Goldstein-Price I | −2 | 2 | 2 | 3 | Other |
| (32) | Goldstein-Price II | −5 | 5 | 2 | 1 | Other |
| (33) | Styblinski-Tang | −5 | 5 | 2 | −78.33198 | Other |
| (34) | Gramacy & Lee (2008) | −2 | 6 | 2 | −0.4288819 | Other |
| (35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other |
| (36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other |
| (37) | Rosenbrock | −1.2 | 1.2 | 2 | 0 | Other |
| (38) | Sphere | −5.12 | 5.12 | 3 | 0 | Other |
| (39) | Hartmann 3-D | 0 | 1 | 3 | −3.86278 | Other |
| (40) | Rosenbrock | −1.2 | 1.2 | 4 | 0 | Other |
| (41) | Powell | −4 | 5 | 4 | 0 | Other |
| (42) | Wood | −5 | 5 | 4 | 0 | Other |
| (43) | Sphere | −5.12 | 5.12 | 6 | 0 | Other |
| (44) | De Jong | −2.048 | 2.048 | 2 | 3905.93 | Maximize function |
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). "Iter." is the average number of iterations needed to reach the global solution; * marks the better "best" value where HCA and MHCA differ.

| Number | HCA Worst | HCA Average | HCA Best | HCA Iter. | MHCA Worst | MHCA Average | MHCA Best | MHCA Iter. |
|---|---|---|---|---|---|---|---|---|
| (1) | 8.88E−16 | 8.88E−16 | 8.88E−16 | 220.2 | 8.88E−16 | 8.88E−16 | 8.88E−16 | 279.35 |
| (2) | −2.06E+00 | −2.06E+00 | −2.06E+00 | 362 | −2.06E+00 | −2.06E+00 | −2.06E+00 | 196.1 |
| (3) | −1.00E+00 | −1.00E+00 | −1.00E+00 | 330.8 | −1.00E+00 | −1.00E+00 | −1.00E+00 | 220.4 |
| (4) | −8.69E−01 | −8.69E−01 | −8.69E−01 | 297.65 | −8.69E−01 | −8.69E−01 | −8.69E−01 | 345.95 |
| (5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8 |
| (6) | −1.92E+01 | −1.92E+01 | −1.92E+01 | 321.7 | −1.92E+01 | −1.92E+01 | −1.92E+01 | 302.75 |
| (7) | 4.23E−07 | 1.31E−07 | 1.50E−32 | 399.5 | 3.60E−07 | 8.65E−08 | 1.50E−32 | 394.8 |
| (8) | 1.63E−05 | 4.70E−06 | 1.35E−31* | 399.45 | 9.97E−06 | 3.06E−06 | 1.60E−09 | 366.3 |
| (9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9 |
| (10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85 |
| (11) | 2.93E−01 | 2.93E−01 | 2.93E−01 | 358.6 | 2.93E−01 | 2.93E−01 | 2.93E−01 | 336.25 |
| (12) | 6.82E−04 | 4.19E−04 | 2.08E−04 | 337.6 | 1.28E−03 | 6.42E−04 | 2.02E−04* | 323.75 |
| (13) | −1.87E+02 | −1.87E+02 | −1.87E+02 | 368 | −1.87E+02 | −1.87E+02 | −1.87E+02 | 391.45 |
| (14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25 |
| (15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291 |
| (16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6 |
| (17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6 |
| (18) | 2.03E−05 | 6.33E−06 | 0.00E+00* | 433.5 | 1.97E−05 | 5.78E−06 | 2.96E−08 | 363.5 |
| (19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55 |
| (20) | −1.91E+00 | −1.91E+00 | −1.91E+00 | 282.65 | −1.91E+00 | −1.91E+00 | −1.91E+00 | 225.8 |
| (21) | 1.12E−04 | 5.07E−05 | 4.73E−06 | 328.15 | 3.94E−04 | 1.30E−04 | 4.28E−07* | 242.05 |
| (22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9 |
| (23) | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.15 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 332.4 |
| (24) | 3.08E−04 | 9.69E−05 | 3.89E−06 | 394.25 | 5.46E−04 | 2.67E−04 | 4.10E−08* | 380.55 |
| (25) | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.55 | 2.25E−10 | 3.21E−11 | 0.00E+00 | 282.55 |
| (26) | 9.98E−01 | 9.98E−01 | 9.98E−01 | 354.6 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 370.35 |
| (27) | −9.97E−01 | −9.98E−01 | −1.00E+00 | 354.15 | −9.97E−01 | −9.99E−01 | −1.00E+00 | 344.6 |
| (28) | −1.80E+00 | −1.80E+00 | −1.80E+00 | 428 | −1.80E+00 | −1.80E+00 | −1.80E+00 | 398.1 |
| (29) | 9.25E−05 | 5.25E−05 | 3.41E−06* | 379.9 | 1.33E−04 | 7.37E−05 | 2.56E−05 | 393.75 |
| (30) | 3.98E−01 | 3.98E−01 | 3.98E−01 | 345.8 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 409.9 |
| (31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8 |
| (32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95 |
| (33) | −7.83E+01 | −7.83E+01 | −7.83E+01 | 351 | −7.83E+01 | −7.83E+01 | −7.83E+01 | 341 |
| (34) | −4.29E−01 | −4.29E−01 | −4.29E−01 | 262.05 | −4.29E−01 | −4.29E−01 | −4.29E−01 | 286.65 |
| (35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1 |
| (36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8 |
| (37) | 9.22E−06 | 3.67E−06 | 6.63E−08* | 382.2 | 4.66E−06 | 2.60E−06 | 2.79E−07 | 351.6 |
| (38) | 4.43E−07 | 8.27E−08 | 0.00E+00 | 371.3 | 2.13E−08 | 4.50E−09 | 0.00E+00 | 330.45 |
| (39) | −3.86E+00 | −3.86E+00 | −3.86E+00 | 392.05 | −3.86E+00 | −3.86E+00 | −3.86E+00 | 327.5 |
| (40) | 1.94E−01 | 7.02E−02 | 1.64E−03* | 816.5 | 8.75E−02 | 3.04E−02 | 2.79E−03 | 806 |
| (41) | 2.42E−01 | 9.13E−02 | 3.91E−03 | 863.95 | 1.12E−01 | 4.26E−02 | 1.96E−03* | 840.85 |
| (42) | 1.12E+00 | 2.91E−01 | 7.62E−08 | 753.95 | 2.67E−01 | 6.10E−02 | 1.00E−08* | 694.4 |
| (43) | 1.05E+00 | 3.88E−01 | 1.47E−05* | 879 | 8.96E−01 | 3.10E−01 | 6.51E−03 | 505.2 |
| (44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85 |
| Mean | | | | 378.45 | | | | 361.30 |
Table 5: Average execution time per number of variables.

| Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations) |
|---|---|---|---|---|
| Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962 |
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; a further aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to entrapment in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, together with the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of the differences between the results, and we also applied the T-test to obtain P values along with the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms, and the success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.

Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. "Iter." is the average number of iterations; * marks the better "best" value.

| Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA Iter. | HCA Worst | HCA Avg | HCA Best | HCA Iter. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| De Jong | 2 | ±2.048 | 3905.93 | 3906.12 | 3905.94 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8 |
| Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | 3.00E+00* | 345.8 |
| Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 379.9 |
| Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E−04 | 4.16E−04 | 5.01E−06 | 57 | 0.00E+00 | 0.00E+00 | 0.00E+00* | 262.1 |
| Rosenbrock | 2 | ±1.2 | 0 | 1.46E−02 | 1.34E−03 | 2.81E−05 | 174 | 9.22E−06 | 3.67E−06 | 6.63E−08* | 370.9 |
| Rosenbrock | 2 | ±10 | 0 | 9.86E−04 | 4.32E−04 | 1.12E−06 | 623 | 2.03E−09 | 2.14E−10 | 0.00E+00* | 350.6 |
| Rosenbrock | 4 | ±1.2 | 0 | 7.98E−04 | 2.12E−04 | 3.23E−07* | 266 | 1.94E−01 | 7.02E−02 | 1.64E−03 | 816.5 |
| Hypersphere | 6 | ±5.12 | 0 | 9.22E−03 | 6.01E−03 | 1.34E−07* | 101 | 1.05E+00 | 3.88E−01 | 1.47E−05 | 879 |
| Shaffer | 2 | ±100 | 0 | 9.71E−03 | 1.16E−03 | 2.61E−05 | 8942 | 0.00E+00 | 0.00E+00 | 0.00E+00* | 284.1 |
| Goldstein & Price I | 2 | ±5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8 |
| Goldstein & Price II | 2 | ±5 | 1 | 1.11291 | 1.01181 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5 |
| Six-Hump Camelback | 2 | ±10 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | 3105 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.2 |
| Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6 |
| Wood | 4 | ±5 | 0 | 3.81E−05 | 1.58E−06 | 1.30E−10* | 15650 | 1.12E+00 | 2.91E−01 | 7.62E−08 | 754 |
| Powell quartic | 4 | ±5 | 0 | 2.87E−09 | 6.09E−10 | 1.12E−11* | 23500 | 2.42E−01 | 9.13E−02 | 3.91E−03 | 864 |
| Mean | | | | | | | 7000.6 | | | | 463.8 |
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

| Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8 |
| (2) Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8 |
| (3) Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9 |
| (4) Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05 |
| (5a) Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9 |
| (5b) Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55 |
| (6) Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5 |
| (7) Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879 |
| (8) Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1 |
| Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8 |
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

| Algorithm versus HCA | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05? |
|---|---|---|---|---|
| CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No |
| ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes |
| GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes |
| BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes |
| GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No |
| WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No |
| ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No |
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

| Number | Function name | D | CGA | ECTS | HCA | MHCA |
|---|---|---|---|---|---|---|
| (1) | Shubert | 2 | 575 | - | 368 | 391.45 |
| (2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25 |
| (3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05 |
| (4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55 |
| (5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35 |
| (6) | Branin | 2 | 620 | 245 | 379.9 | 393.75 |
| (7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9 |
| (8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5 |
| (9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45 |
| | Mean | | 710.1 | 474.4 | 350.6 | 337.4 |
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

| Algorithm pair | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05? |
|---|---|---|---|---|
| CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes |
| CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes |
| ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No |
| ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No |
| HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No |
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA in solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA in solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5 Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function; converged at iteration 383).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (converged at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (converged at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
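As a sanity check, one of the Table 11 benchmarks can be written out directly (our code, using the parameter values a = 20, b = 0.2, c = 2π given in the table):

```python
import math

# The Ackley function from Table 11; its global minimum is f(0, ..., 0) = 0.
def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    d = len(x)
    mean_sq = sum(xi * xi for xi in x) / d
    mean_cos = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(mean_sq)) - math.exp(mean_cos) + a + math.e

minimum = ackley([0.0, 0.0])  # 0 up to floating-point error
```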
Table 11 The mathematical formulations
Function name Mathematical formulations
Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$
Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$
Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$
Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$
Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$
Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$
Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \ldots, d$
Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$
Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$
Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$
Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$
Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$
Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$
Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$
Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$
Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$
Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$
Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$
Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$
McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$
Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$
Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$
Table 11: Continued.

Function name: Mathematical formulations

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$
Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$
Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$
Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$
Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$
Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$
Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$
Branin: $f(x) = a\left(x_2 - b x_1^2 + c x_1 - r\right)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$
Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$
Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$
Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$
Gramacy & Lee (2008): $f(x) = x_1 \exp\left(-x_1^2 - x_2^2\right)$
Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$
Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$
Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}\left(x_j - P_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$
Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$
Wood: $f(x_1, x_2, x_3, x_4) = 100\left(x_1^2 - x_2\right)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$
De Jong max: $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1 - x_1)^2$
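For concreteness, formulations such as those in the table translate directly into code. The sketch below (Python, standard library only) implements two representatives, the Ackley and Rastrigin functions, exactly as defined above; both have their global minimum $f(0,\ldots,0) = 0$.

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Ackley function: a nearly flat outer region with a deep central hole."""
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d          # mean of squares
    s2 = sum(math.cos(c * xi) for xi in x) / d  # mean of cosines
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def rastrigin(x):
    """Rastrigin function: many regularly spaced local minima."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
```

Evaluating either function at the origin returns zero (up to floating-point rounding), which is a quick sanity check when wiring benchmarks into an optimizer.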
[Figure 13: Convergence of HCA on the Hypersphere function with six variables. The plot shows the global-best function value (y-axis, 0 to 35) against the iteration number (x-axis, 0 to 1000); HCA converges on the six-variable Sphere function at iteration 919.]
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snášel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrøm and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504–505, 2006.
overall performance, convergence rate, robustness, precision, and its ability to escape from local optima [3]. These empirical results can be used to compare any new algorithm with existing optimization algorithms and to gain an understanding of the algorithm's behavior on different kinds of problem. For instance, multimodal functions can be used to check the ability of an algorithm to escape local optima and explore new regions, especially when the global optimum is surrounded by many local optima. Similarly, flat-surface functions add difficulty in reaching the global optimum, because the flatness gives no indication of the direction towards it [3].
Practically, the number of variables and the domain of each variable (the function dimensionality) affect the complexity of the function. In addition, other factors, such as the number of local/global optima and the distribution of local optima relative to the global optima, are crucial in determining the complexity of a function, especially when a global minimum is located very close to local minima [3]. Some of these functions represent real-life optimization problems, while others are artificial problems. Furthermore, a function can be classified as unconstrained or constrained; an unconstrained function has no restrictions on the values of its variables.
Given the large variety of optimization problems, it is hard to find a single algorithm that can solve all types of optimization problems effectively. Moreover, the No-Free-Lunch (NFL) theorems posit that no particular algorithm performs better than all other algorithms on all problems, and that what an algorithm gains in performance on some problems can be lost on other problems [5]. Consequently, we can infer that some algorithms are more applicable and compatible with particular classes of optimization problems than others. Therefore, there is always a need to design and evaluate new algorithms for their potential to improve performance on certain types of optimization problem. For these reasons, the main objective of this paper is to develop a new optimization algorithm and determine its performance on various benchmarked problems, in order to gain an understanding of the algorithm and evaluate its applicability to other unsolved problems.
Many established nature-inspired algorithms, especially water-based algorithms, have proven successful in solving COPs. Nevertheless, these water-based algorithms do not take into account the broader concepts of the hydrological water cycle: how water drops and paths are formed, or the fact that water is a renewable and recyclable resource. That is, previous water-based algorithms partially utilized some stages of the natural water cycle and demonstrated only some of the important aspects of a full hydrological water cycle for solving COPs.
The main contribution of this paper is to delve deeper into hydrological theory and the fundamental concepts surrounding water droplets to inform the design of a new optimization algorithm. This algorithm provides a new hydrology-based conceptual framework within which existing and future work in water-based algorithms can be located. In this paper, we present a new nature-inspired optimization algorithm, called the Hydrological Cycle Algorithm (HCA), for solving
optimization problems. HCA simulates nature's hydrological water cycle. More specifically, it involves a collection of water drops passing through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. It can be considered a swarm intelligence optimization algorithm for some parts of the cycle, when a collection of water drops moves through the search space, but it can also be considered an evolutionary algorithm for other parts, when information is exchanged and shared. By using the full hydrological water cycle as a conceptual framework, we show that previous water-based algorithms have predominantly used only swarm-like aspects inspired by precipitation and flow. HCA, in contrast, uses all four stages, forming a complete water-based approach to solving optimization problems efficiently. In particular, we show that for certain problems HCA leads to improved performance and solution quality, and we demonstrate how the condensation stage is utilized to allow information sharing among the particles. This information sharing helps improve performance and reduce the number of iterations needed to reach the optimal solution. Our claims for HCA are to be judged not on improved efficiency but on improved conceptual bioinspiration, as well as on its contribution to the area of water-based algorithms. However, we also show that HCA is competitive in comparison to several other nature-inspired algorithms, both water-based and non-water-based.
In summary, the paper presents a new overall conceptual framework for water-based algorithms, which applies the hydrological cycle in its entirety to continuous optimization problems as a proof of concept (i.e., it validates the full hydrologically inspired framework). The benchmarked mathematical test functions chosen in this paper are useful for evaluating the performance characteristics of any new approach, especially the effectiveness of its exploration and exploitation processes. Solving these functions (i.e., finding the globally optimal solution) is challenging for most algorithms, even with a few variables, because their artificial landscapes may contain many locally optimal solutions. Our minimal criterion for success is that HCA's performance is comparable to that of other well-known algorithms, including those that only partially adopt the full HCA framework. We maintain that our results are competitive and that, therefore, water-inspired and non-water-inspired researchers alike lose nothing by adopting the richer bioinspiration that comes with HCA in its entirety, rather than adopting only parts of the full water cycle. HCA's performance is evaluated in terms of finding optimal solutions and computational effort (number of iterations). The algorithm's behavior, its convergence rate, and its ability to deal with multimodal functions are also investigated and analyzed. The main aim of this paper is not to prove the superiority of HCA over other algorithms. Instead, the aim is to provide researchers in science, engineering, operations research, and machine learning with a new water-based tool for dealing with continuous variables in convex, unconstrained, and nonlinear problems. Further details and features of the HCA are described in Section 3.
The remainder of this paper is organized as follows. Section 2 discusses various related swarm intelligence techniques used to tackle COPs and distinguishes HCA from related algorithms. Section 3 describes the hydrological cycle in nature and explains the proposed HCA. Section 4 outlines the experimental setup, problem representation, results obtained, and the comparative evaluation conducted with other algorithms. Finally, Section 5 presents concluding remarks and outlines plans for further research.
2. Previous Swarm Intelligence Approaches for Solving Continuous Problems
Many metaheuristic optimization algorithms have been designed, and their performance has been evaluated on benchmark functions. Among these are nature-inspired algorithms, which have received considerable attention. Each algorithm uses a different approach and representation to solve the target problem. It is worth mentioning that our focus is on studies that involve solving unconstrained continuous functions with few variables: if a new algorithm cannot deal with these cases, there is little chance of it dealing with more complex functions.
Ant colony optimization (ACO) is one of the best-known techniques for solving discrete optimization problems and has been extended to deal with COPs in many studies. For example, in [6] the authors divided the function domain into a number of regions representing the trial solution space for the ants to explore. They then generated two types of ants (global and local) and distributed them to explore these regions. The global ants were responsible for updating the fitness of the regions globally, whereas the local ants did the same locally. The approach incorporated mutation and crossover operations to improve solution quality.
Socha and Dorigo [7] extended the ACO algorithm to tackle continuous-domain problems. The extension replaced discrete probability distributions with continuous ones and incorporated the Gaussian probability density function to choose the next point. The pheromone evaporation procedure was modified so that old solutions were removed from the candidate solution pool and replaced with new and better solutions, with a weight associated with each ant's solution representing its fitness. Other ACO-inspired approaches with various implementations that do not follow the original ACO algorithm structure have also been developed to solve COPs [8–11]. In further work, the performance of ACO was evaluated on continuous function optimization [12]. One of the problems with ACO is that reinforcement of paths is the only method for sharing information between the ants, and it is indirect: a heavily pheromone-reinforced path is naturally considered a good path for other ants to follow. No other mechanism exists to allow ants to share information for exploration purposes, except indirectly through the evaporation of pheromone over time. Exploration diminishes over time with ACO, which suits problems where there may be only one global solution. But if the aim is to find a number of alternative solutions, the rate of evaporation has to be strictly controlled to ensure that no single path dominates. Evaporation control can lead to other side-effects in the classic ACO paradigm, such as longer convergence time, resulting in the need to add possibly nonintuitive information-sharing mechanisms, such as global updates, for exploration purposes.
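The Gaussian-sampling idea behind this continuous extension can be illustrated in a few lines. The sketch below is a simplified rendering of one ACO-for-continuous-domains step, not the exact procedure of [7]: the archive size, the rank-weighting parameter q, and the spread parameter xi are illustrative assumptions.

```python
import math
import random

def gaussian_aco_step(archive, fitness, q=0.5, xi=0.85, n_new=10):
    """One sketch of a Gaussian-kernel sampling step over a solution archive.

    archive: list of d-dimensional solutions (lists of floats), sorted best-first.
    New candidates are sampled around archive members; only the best k survive.
    """
    k = len(archive)
    d = len(archive[0])
    # Rank-based weights: better-ranked archive members guide more samples.
    weights = [math.exp(-(rank ** 2) / (2 * (q * k) ** 2)) for rank in range(k)]
    for _ in range(n_new):
        g = random.choices(range(k), weights=weights)[0]  # pick a guiding solution
        candidate = []
        for j in range(d):
            # Spread in dimension j: mean distance from the guide to the others.
            sigma = xi * sum(abs(archive[m][j] - archive[g][j])
                             for m in range(k)) / (k - 1)
            candidate.append(random.gauss(archive[g][j], sigma))
        archive.append(candidate)
    archive.sort(key=fitness)   # keep the k best of old + new ("evaporation")
    del archive[k:]
    return archive
```

Because old, weaker solutions are displaced by better newcomers, repeated steps shrink the sampling spread around promising regions, mirroring the pheromone-update-and-removal behavior described above.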
Although, strictly speaking, they are not a swarm intelligence approach, genetic algorithms (GAs) [13] have been applied to many COPs. A GA uses simple operations, such as crossover and mutation, to manipulate and direct the population and to help it escape from local optima. The values of the variables are encoded using binary or real numbers [14]. However, as with other algorithms, GA performance can depend critically on the initial population's values, which are usually random. This was clarified by Maaranen et al. [15], who investigated the importance and effect of the initial population values on the quality of the final solution. In [16], the authors also focused on the choice of the initial population values and modified the algorithm to intensify the search in the most promising area of the solution space. The main problem with standard GAs for optimization continues to be the lack of "long-termism", where fitness for exploitation may need to be secondary to allow adequate exploration of the problem space for alternative solutions. One possible way to deal with this problem is to hybridize the GA with other approaches, such as in [17], where a hybrid of a GA and particle swarm optimization (PSO) is proposed for solving multimodal functions.
Kennedy and Eberhart [18] examined the performance of the PSO algorithm on some continuous optimization functions. Later, researchers such as Chen et al. [19] modified the PSO algorithm to help overcome the problem of premature convergence. PSO has also been used for global optimization [20] and has been hybridized with ACO to improve performance on high-dimensional multimodal functions [21]. A comparative study between GA and PSO on numerical benchmark problems can be found in [22]. Other versions of the PSO algorithm with crossover and mutation operators to improve performance have also been proposed, such as in [23]. Once such GA operators are introduced, the question arises as to what advantages a PSO approach has over a standard evolutionary approach to the same problem.
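The canonical global-best PSO update that these variants build on combines an inertia term with pulls towards each particle's personal best and the swarm's global best. A minimal sketch follows; the parameter values (w, c1, c2, search bounds) are illustrative assumptions, not settings taken from the cited studies.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Textbook inertia-weight PSO: returns (best position, best value)."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity: inertia + cognitive (pbest) + social (gbest) pulls.
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The social term is what makes information sharing in PSO direct and immediate, in contrast to the indirect pheromone mechanism of ACO discussed above.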
The bees algorithm (BA) has been used to solve a number of benchmark functions [24]. This algorithm simulates the behavior of swarms of honey bees during food foraging. In the BA, scout bees are sent to search for food sources by moving randomly from one place to another. On returning to the hive, they share information with the other bees about the location, distance, and quality of the food source. The algorithm produces good results for several benchmark functions. However, the information sharing is direct and confined to subsets of bees, leading to the possibility of convergence to a local optimum or divergence away from the global optimum.
The grenade explosion method (GSM) is an evolutionary algorithm which has been used to optimize real-valued problems [25]. The GSM is based on the mechanism associated with a grenade explosion and is intended to overcome problems associated with "crowding", where candidate solutions have converged to a local minimum. When a grenade explodes, shrapnel pieces are thrown to destroy objects in the vicinity of the explosion. In the GSM, losses are calculated
[Figure 1: Representation of a continuous problem in the IWD algorithm: a directed graph of nodes 1, 2, ..., i, i + 1, ..., M x P, with pairs of edges between consecutive nodes labeled 0 and 1.]
for each piece of shrapnel, and the effect of each piece is calculated. The highest loss effect indicates that valuable objects exist in that area; another grenade is then thrown into that area to cause greater loss, and so on. Overall, the GSM was able to find the global minimum faster than other algorithms in seven out of eight benchmarked tests. However, the algorithm requires the maintenance of a nonintuitive territory radius, preventing shrapnel pieces from coming too close to each other and forcing the shrapnel to spread uniformly over the search space. Also required is an explosion range for the shrapnel. These control parameters work well with problem spaces containing many local optima, so that global optima can be found after suitable exploration. However, the relationship between these control parameters and problem spaces of unknown topology is not clear.
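A single GSM-style move can be sketched as sampling "shrapnel" points within an explosion radius and keeping the best hit. This is a heavily simplified illustration of the explosion idea only, not the full method of [25]; it omits the territory radius and the radius schedule discussed above, and the shrapnel count and radius are assumed values.

```python
import random

def grenade_step(point, f, radius, n_shrapnel=10):
    """Throw shrapnel uniformly within `radius` of `point` (minimization).

    The point where the greatest "loss" is inflicted, i.e. the lowest
    objective value found, becomes the next explosion site.
    """
    best, best_val = point, f(point)
    for _ in range(n_shrapnel):
        candidate = [x + random.uniform(-radius, radius) for x in point]
        val = f(candidate)
        if val < best_val:          # keep the most damaging shrapnel point
            best, best_val = candidate, val
    return best, best_val
```

Iterating this step performs a randomized local descent; the current point is retained whenever no shrapnel improves on it, so the best value never worsens.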
2.1. Water-Based Algorithms. Algorithms inspired by water flow have become increasingly common for tackling optimization problems. Two of the best-known examples are intelligent water drops (IWD) and the water cycle algorithm (WCA). These algorithms have been shown to be successful in many applications, including various COPs. The IWD algorithm is a swarm-based algorithm inspired by the ground flow of water in nature and the actions and reactions that occur during that process [26]. In IWD, a water drop has two properties: movement velocity and the amount of soil it carries. These properties are changed and updated for each water drop throughout its movement from one point to another. The velocity of a water drop is affected and determined only by the amount of soil between two points, while the amount of soil it carries is equal to the amount of soil removed from a path [26].
The IWD algorithm has also been used to solve COPs. Shah-Hosseini [27] utilized binary variable values in the IWD and created a directed graph of (M x P) nodes, in which M identifies the number of variables and P identifies the precision (the number of bits for each variable), which was assumed to be 32. The graph comprised 2 x (M x P) directed edges connecting the nodes, with each edge having a value of zero or one, as depicted in Figure 1. One of the advantages of this representation is that it is intuitively consistent with the natural idea of water flowing in a specific direction.
The IWD algorithm was also augmented with a mutation-based local search to enhance its solution quality. In this process, an edge is selected at random from a solution and replaced by another edge, which is kept if it improves the fitness value; the process is repeated 100 times for each solution. This mutation is applied to all the generated solutions, and the soil is then updated on the edges belonging to the best solution. However, the variables' values have to be converted from binary to real numbers to be evaluated.
Moreover, once mutation (but not crossover) is introduced into IWD, the question arises as to what advantages IWD has over standard evolutionary algorithms that use simpler notations for representing continuous numbers. Detailed differences between IWD and our HCA are described in Section 3.9.
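The binary graph representation implies a decoding step from each variable's P-bit path to a real value before the fitness can be evaluated. One plausible mapping is sketched below; the exact scheme used in [27] may differ, and the linear scaling shown here is an assumption.

```python
def decode(bits, lo, hi):
    """Map one variable's list of 0/1 edge choices onto the interval [lo, hi].

    An all-zeros path decodes to lo, an all-ones path to hi, with
    2**len(bits) evenly spaced values in between.
    """
    p = len(bits)
    as_int = int("".join(map(str, bits)), 2)  # treat the path as a binary number
    return lo + (hi - lo) * as_int / (2 ** p - 1)
```

With P = 32 bits per variable, as assumed in [27], the resolution within a domain such as [-5, 5] is about 10 / (2^32 - 1), i.e. finer than most benchmark tolerances, which is why such overheads are tolerated despite the conversion cost noted above.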
Various researchers have applied the WCA to COPs [28, 29]. The WCA is also based on the flow of ground water, through rivers and streams to the sea. The algorithm constructs a solution using a population of streams, where each stream is a candidate solution. It initially generates random solutions to the problem by distributing the streams into different positions according to the problem's dimensions. The best stream is considered a sea, and a specific number of streams are considered rivers. The streams are then made to flow towards the positions of the rivers and sea. The position of the sea indicates the global-best solution, whereas those of the rivers indicate the local-best solution points. In each iteration, the position of a stream may be swapped with any of the river positions or (counterintuitively) the sea position, according to the fitness of that stream. Therefore, if a stream's solution is better than that of a river, that stream becomes a river and the corresponding river becomes a stream; the same process occurs between rivers and the sea. Evaporation occurs in the rivers and streams when the distance between a river or stream and the sea is very small (close to zero). Following evaporation, new streams are generated with different positions, in a process that can be described as rain. These processes help the algorithm escape from local optima.
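The flow step just described is commonly written as moving a stream a random fraction of the way towards its guiding river (or a river towards the sea). The sketch below illustrates that update; the coefficient C = 2 and the use of a single random factor per move are simplifying assumptions rather than the complete WCA of [28].

```python
import random

def flow_towards(stream, river, c=2.0):
    """Move a stream towards its guiding river, dimension by dimension:

        x_new = x + rand * C * (x_river - x),  rand ~ U(0, 1)

    With C > 1 the stream can overshoot the river, which injects
    some exploration around the local-best position.
    """
    r = random.random()  # one random factor for the whole move (assumption)
    return [x + r * c * (rx - x) for x, rx in zip(stream, river)]
```

Note the structural similarity to the PSO social term: the river plays the role of a guiding best position, which is exactly the WCA/PSO parallel drawn in the next paragraph.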
The WCA treats streams, rivers, and the sea as a set of moving points in a multidimensional space and does not consider stream or river formation (the movements of water and soil). The overall structure of the WCA is therefore quite similar to that of the PSO algorithm. For instance, the PSO algorithm has Gbest and Pbest points that indicate the global and local solutions, with the Gbest point guiding the Pbest points to converge towards it. In a similar manner, the WCA has a number of rivers that represent the local optimal solutions and a sea position that represents the global optimal solution, with the sea used as a guide for the streams and rivers. Moreover, in PSO a particle at a Pbest point updates its position towards the Gbest point; analogously, the WCA exchanges the position of a river with that of the sea when the fitness of the river is better. The main difference is the WCA's inclusion of the evaporation and raining stages, which help the algorithm escape from local optima.
An improved version of WCA, called dual-system WCA (DSWCA), has been presented for solving constrained engineering optimization problems [30]. The DSWCA improvements involved adding two cycles (outer and inner). The outer cycle is in accordance with the WCA and is designed to perform exploration. The inner cycle, which is based on the ocean
Journal of Optimization 5
cycle, was designed as an exploitation process. A confluence-between-rivers process is also included to improve convergence speed. The improvements aim to help the original WCA avoid falling into local optimal solutions. The DSWCA was able to provide better results than WCA. In a further extension of WCA, Heidari et al. proposed a chaotic WCA (CWCA) that incorporates thirteen chaotic patterns into the WCA movement process to improve WCA's performance and deal with the premature convergence problem [31]. The experimental results demonstrated improvement in controlling the premature convergence problem. However, WCA and its extensions introduce many additional features, many of which are not compatible with nature-inspiration, to improve performance and make WCA competitive with non-water-based algorithms. Detailed differences between WCA and our HCA are described in Section 3.9.
The review of previous approaches shows that nature-inspired swarm approaches tend to introduce evolutionary concepts of mutation and crossover for sharing information, to adopt nonplausible global data structures to prevent convergence to local optima, or to add more centralized control parameters to preserve the balance between exploration and exploitation in increasingly difficult problem domains, thereby compromising self-organization. When such approaches are modified to deal with COPs, complex and unnatural representations of numbers may also be required, adding to the overheads of the algorithm as well as leading to what may appear to be "forced" and unnatural ways of dealing with the complexities of a continuous problem.
In swarm algorithms, a number of cooperative homogeneous entities explore and interact with the environment to achieve a goal, which is usually to find the global-optimal solution to a problem through exploration and exploitation. Cooperation can be accomplished by some form of communication that allows the swarm members to share exploration and exploitation information to produce high-quality solutions [32, 33]. Exploration aims to acquire as much information as possible about promising solutions, while exploitation aims to improve such promising solutions. Controlling the balance between exploration and exploitation is usually considered critical when searching for more refined as well as more diverse solutions. Also important is the amount of exploration and exploitation information to be shared, and when.
Information sharing in nature-inspired algorithms usually includes metrics for evaluating the usefulness of information and mechanisms for how it should be shared. In particular, these metrics and mechanisms should not contain more knowledge, or require more intelligence, than can plausibly be assigned to the swarm entities, where the emphasis is on the emergence of complex behavior from relatively simple entities interacting in nonintelligent ways with each other. Information sharing can be done either randomly or nonrandomly among entities. Different approaches are used to share information, such as direct or indirect interaction. Two sharing models are one-to-one, in which explicit interaction occurs between entities, and many-to-many (broadcasting), in which interaction occurs indirectly between the entities through the environment [34].
Usually, either direct or indirect interaction between entities is employed in an algorithm for information sharing, and the use of both types in the same algorithm is often neglected. For instance, the ACO algorithm uses indirect communication for information sharing, where ants lay amounts of pheromone on paths depending on the usefulness of what they have explored. The more promising the path, the more pheromone is added, leading other ants to exploit this path further. Pheromone is not directed at any other ant in particular. In the artificial bee colony (ABC) algorithm, bees share information directly (specifically, the direction and distance to flowers) in a kind of waggle dancing [35]. The information received by other bees depends on which bee they observe, not on the information gathered from all bees. In a GA, the information exchange occurs directly in the form of crossover operations between selected members (chromosomes and genes) of the population. The quality of information shared depends on the members chosen, and there is no overall store of information available to all members. In PSO, on the other hand, the particles have their own information as well as access to global information to guide their exploitation. This can lead to competitive and cooperative optimization techniques [36]. However, no attempt is made in standard PSO to share information possessed by individual particles. This lack of individual-to-individual communication can lead to early convergence of the entire population of particles to a nonoptimal solution. Selective information sharing between individual particles will not always avoid this narrow exploitation problem but, if undertaken properly, could allow the particles to find alternative and more diverse solution spaces where the global optimum lies.
As can be seen from the above discussion, the distinctions between communication (direct and indirect) and information sharing (random or nonrandom) can become blurred. In HCA, we utilize both direct and indirect communication to share information among selected members of the whole population. Direct communication occurs in the condensation stage of the water cycle, where some water drops collide with each other when they evaporate into the cloud. On the other hand, indirect communication occurs between the water drops and the environment via the soil and path depth. Utilizing information sharing helps to diversify the search space and improve the overall performance through better exploitation. In particular, we show how HCA with direct and indirect communication suits continuous problems, where individual particles share useful information directly when searching a continuous space.
Furthermore, in some algorithms, indirect communication is used not just to find a possible solution but also to reinforce an already-found promising solution. Such reinforcement may lead the algorithm to get stuck in the same solution, which is known as stagnation behavior [37]. Such reinforcement can also cause premature convergence in some problems. An important feature used in HCA to avoid this problem is the path depth. The depths of paths are constantly changing, and deep paths have less chance of being chosen than shallow paths. More details about this feature are provided in Section 3.3.
In summary, this paper aims to introduce a novel nature-inspired approach for dealing with COPs, which also introduces a continuous number representation that is natural for the algorithm. The cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through direct and indirect communication between particles, feedback from the environment, and internal self-evaluation. Finally, our HCA addresses some of the weaknesses of other similar algorithms: HCA is inspired by the full water cycle, not parts of it, as exemplified by IWD and WCA.
3. The Hydrological Cycle Algorithm
The water cycle, also known as the hydrologic cycle, describes the continuous movement of water in nature [38]. It is renewable because water particles are constantly circulating from the land to the sky and vice versa. Therefore, the amount of water remains constant. Water drops move from one place to another by natural physical processes, such as evaporation, precipitation, condensation, and runoff. In these processes, the water passes through different states.
3.1. Inspiration. The water cycle starts when surface water gets heated by the sun, causing water from different sources to change into vapor and evaporate into the air. Then the water vapor cools and condenses as it rises, becoming drops. These drops combine and collect with each other and eventually become so saturated and dense that they fall back because of the force of gravity, in a process called precipitation. The resulting water flows on the surface of the Earth into ponds, lakes, or oceans, where it again evaporates back into the atmosphere. Then the entire cycle is repeated.
In the flow (runoff) stage, the water drops start moving from one location to another based on Earth's gravity and the ground topography. When the water is scattered, it chooses a path with less soil and fewer obstacles. Assuming no obstacles exist, water drops will take the shortest path towards the center of the Earth (i.e., oceans), owing to the force of gravity. Water drops have force because of this gravity, and this force causes changes in motion depending on the topology. As the water flows in rivers and streams, it picks up sediments and transports them away from their original location. These processes are known as erosion and deposition. They help to create the topography of the ground by making new paths with more soil removal, or by the fading of other paths as they become less likely to be chosen owing to too much soil deposition. These processes are highly dependent on water velocity. For instance, a river is continually picking up and dropping off sediments from one point to another. When the river flow is fast, soil particles are picked up, and the opposite happens when the river flow is slow. Moreover, there is a strong relationship between the water velocity and the suspension load (the amount of dissolved soil in the water) that it currently carries. The water velocity is higher when only a small amount of soil is being carried, and lower when larger amounts are carried.
In addition, the term "flow rate" or "discharge" can be defined as the volume of water in a river passing a defined point over a specific period. Mathematically, flow rate is calculated based on the following equation [39]:

Q = V x A = V x (W x D), (3)

where Q is flow rate, A is cross-sectional area, V is velocity, W is width, and D is depth. The cross-sectional area is calculated by multiplying the depth by the width of the stream. The velocity of the flowing water is defined as the quantity of discharge divided by the cross-sectional area. Therefore, there is an inverse relationship between the velocity and the cross-sectional area [40]: the velocity increases when the cross-sectional area is small, and vice versa. If we assume that the river has a constant flow rate (velocity x cross-sectional area = constant) and that the width of the river is fixed, then we can conclude that the velocity is inversely proportional to the depth of the river, in which case the velocity decreases in deep water and vice versa. Therefore, the amount of soil deposited increases as the velocity decreases. This explains why less soil is taken from deep water and more soil is taken from shallow water. A deep river will have water moving more slowly than a shallow river.
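As a quick arithmetic illustration of Eq. (3) and the inverse velocity/depth relationship, the following sketch uses purely hypothetical values (none are taken from the paper):

```python
# Worked check of Eq. (3): Q = V * A = V * (W * D).
# All numeric values here are illustrative only.
def flow_rate(velocity, width, depth):
    """Discharge Q for a rectangular channel cross-section."""
    return velocity * width * depth

Q = flow_rate(2.0, 5.0, 1.0)   # Q = 2 * (5 * 1) = 10 volume units per second

# For constant Q and fixed width, velocity is inversely proportional
# to depth: doubling the depth halves the velocity.
V_deep = Q / (5.0 * 2.0)       # depth doubled -> velocity drops to 1.0
```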
In addition to river depth and the force of gravity, other factors affect the flow velocity of water drops and cause them to accelerate or decelerate. These factors include obstructions, the amount of soil existing in the path, the topography of the land, variations in channel cross-sectional area, and the amount of dissolved soil being carried (the suspension load).
Water evaporation increases with temperature, with maximum evaporation at 100°C (boiling point). Conversely, condensation increases when the water vapor cools towards 0°C. The evaporation and condensation stages play important roles in the water cycle. Evaporation is necessary to ensure that the system remains in balance. In addition, the evaporation process helps to decrease the temperature and keeps the air moist. Condensation is a very important process, as it completes the water cycle. When water vapor condenses, it releases the same amount of thermal energy into the environment that was needed to make it a vapor. However, in nature, it is difficult to measure the precise evaporation rate owing to the existence of different sources of evaporation; hence, estimation techniques are required [38].
The condensation process starts when the water vapor rises into the sky; then, as the temperature decreases in the higher layers of the atmosphere, the cooler particles have more chance to condense [41]. Figure 2(a) illustrates the situation where there are no attractions (or connections) between the particles at high temperature. As the temperature decreases, the particles collide and agglomerate to form small clusters, leading to clouds, as shown in Figure 2(b).
An interesting phenomenon, in which some of the large water drops may eliminate other smaller water drops, occurs during the condensation process, as some of the water drops, known as collectors, fall from a higher layer to a lower layer in the atmosphere (in the cloud). Figure 3 depicts the action of a water drop falling and colliding with smaller
Figure 2: Illustration of the effect of temperature on water drops. (a) Hot water; (b) cold water.

Figure 3: A collector water drop in action.
water drops. Consequently, the water drop grows as a result of coalescence with the other water drops [42].
Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground. Of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect the water drops flowing through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of other key concepts taken from the full hydrological process.
Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: Flow (Runoff), Evaporation, Condensation, and Precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

HCA = {WD1, WD2, WD3, ..., WDn}, where n >= 1. (4)
The characteristics associated with each water drop are as follows:

(i) V^WD: the velocity of the water drop.
(ii) Soil^WD: the amount of soil the water drop carries.
(iii) psi^WD: the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph G = (N, E), where N is the set of nodes and E is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph (G) according to various rules via the edges (E), searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.
The initial amount of soil is the same for all the edges. The depth measures the distance from the water surface to the riverbed, and it can be calculated by dividing the length of the edge by the amount of soil. Therefore, the depth varies and depends on the amount of soil that exists on the path and its length. The depth of the path increases when more soil is removed over the same length. A deep path can be interpreted as either the path being lengthy or there being less soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.
The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution for the problem by continuously moving between the nodes. Each iteration consists of specific steps (which will be explained below). A cycle, on the other hand, represents a varied number of iterations. A cycle is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
3.3. Flow Stage (Runoff). This stage represents the construction stage, where the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions, constructing various solutions. Each water drop constructs a
Figure 4: Depth increases as length increases for the same amount of soil.

Figure 5: Depth increases when the amount of soil is reduced over the same length.
solution incrementally by moving from one point to another. Whenever a water drop wants to move towards a new node, it has to choose between different numbers of nodes (various branches). It calculates the probability of all the unvisited nodes and chooses the highest-probability node, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit. In HCA, the node with the highest probability will be selected. The probability of the node is calculated using

P_i^WD(j) = [f(Soil(i, j))^2 x g(Depth(i, j))] / Sum_{k not in vc(WD)} [f(Soil(i, k))^2 x g(Depth(i, k))], (5)

where P_i^WD(j) is the probability of choosing node j from node i, vc(WD) is the set of nodes already visited by the water drop, and f(Soil(i, j)) is equal to the inverse of the soil between i and j, calculated using

f(Soil(i, j)) = 1 / (epsilon + Soil(i, j)). (6)

Here epsilon = 0.01 is a small value that is used to prevent division by zero. The second factor of the transition rule is the inverse of depth, which is calculated based on

g(Depth(i, j)) = 1 / Depth(i, j). (7)

Depth(i, j) is the depth between nodes i and j, and is calculated by dividing the length of the path by the amount of soil:

Depth(i, j) = Length(i, j) / Soil(i, j). (8)
The depth value can be normalized to be in the range [1-100], as it might be very small. The depth of the path is constantly changing because it is influenced by the amount of existing
Figure 6: The relationship between soil and depth over the same length.
soil, where the depth is inversely proportional to the amount of existing soil in a path of fixed length. Water drops will choose paths with less depth over other paths. This enables exploration of new paths and diversifies the generated solutions. Assume the following scenario (depicted in Figure 6), where water drops have to choose between nodes A and B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents soil amount). Consequently, the depth values for the two edges will be the same. After some water drops choose node A to visit, edge (S-A) as a result has less soil and is deeper. According to the new state transition rule, the next water drops will be forced to choose node B, because edge (S-B) will be shallower than (S-A). This technique provides a way to avoid stagnation and explore the search space efficiently.
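The state transition rule of Eqs. (5)-(8) can be sketched in Python as follows; the edge-map data structures and function names are our own illustrative choices, not part of the paper:

```python
EPS = 0.01  # epsilon in Eq. (6), as given in the text

def f_soil(soil):
    """Eq. (6): inverse of the soil on an edge."""
    return 1.0 / (EPS + soil)

def edge_depth(length, soil):
    """Eq. (8): depth = length / soil."""
    return length / soil

def g_depth(depth):
    """Eq. (7): inverse of the depth."""
    return 1.0 / depth

def choose_next_node(i, unvisited, soil_map, length_map):
    """Eq. (5): score each unvisited node j and return the
    highest-probability node together with the full distribution."""
    scores = {}
    for j in unvisited:
        soil = soil_map[(i, j)]
        depth = edge_depth(length_map[(i, j)], soil)
        scores[j] = f_soil(soil) ** 2 * g_depth(depth)
    total = sum(scores.values())
    probs = {j: s / total for j, s in scores.items()}
    return max(probs, key=probs.get), probs
```

With two identical edges, the two candidate nodes come out equally likely, as expected from Eq. (5).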
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity (V). The velocity of a water drop might be increased or decreased while it is moving, based on the factors mentioned above (the amount of soil existing on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time (t + 1) can be calculated using

V_{t+1}^WD = [K x V_t^WD] + alpha x (V^WD / Soil(i, j)) + sqrt(V^WD / Soil^WD) + (100 / psi^WD) + sqrt(V^WD / Depth(i, j)). (9)
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where V_t^WD is the current water drop velocity and K is a uniformly distributed random number between [0, 1] that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop velocity to decrease. Owing to difficulties measuring the exact effect of these factors, some randomness is considered with the velocity.

The second term of the expression in (9) reflects the relationship between the amount of soil existing on a path and the velocity of a water drop. The velocity of the water drops increases when they traverse paths with less soil. Alpha (alpha) is a relative influence coefficient that emphasizes this part of the velocity update equation. The third term of the expression represents the relationship between the amount of soil that a water drop is currently carrying and its velocity. Water drop velocity increases with a decreasing amount of soil being carried. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term of the expression refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term of the expression indicates that a water drop will be slower in a deep path than in a shallow path.
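One possible reading of Eq. (9) in code is sketched below; the argument names and the default alpha value are our illustrative assumptions, and the argument values passed by a caller are left open:

```python
import math
import random

def update_velocity(v, path_soil, carried_soil, quality, path_depth, alpha=0.1):
    """Sketch of Eq. (9). K is the uniformly distributed roughness
    coefficient in [0, 1]; alpha is the relative influence coefficient."""
    K = random.uniform(0.0, 1.0)
    return (K * v                          # randomized carry-over of velocity
            + alpha * (v / path_soil)      # less soil on the path -> faster
            + math.sqrt(v / carried_soil)  # less carried soil -> faster
            + 100.0 / quality              # fitness term (100 / psi^WD)
            + math.sqrt(v / path_depth))   # deeper path -> smaller boost
```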
3.3.2. Soil Update. Next, the amount of soil existing on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by

Soil(i, j) = [PN x Soil(i, j)] - DeltaSoil(i, j) - sqrt(1 / Depth(i, j)), if V^WD >= Avg(all V^WDs) (erosion);
Soil(i, j) = [PN x Soil(i, j)] + DeltaSoil(i, j) + sqrt(1 / Depth(i, j)), otherwise (deposition). (10)
PN represents a coefficient (i.e., a sediment transport rate, or gradation, coefficient) that may affect the reduction in the amount of soil. The soil can be removed only if the current water drop velocity is greater than the average of all other water drops. Otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, and usually transfers it from fast to slow parts of the path. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in local optimal solutions. The amount of soil existing between node i and node j is calculated and changed using

DeltaSoil(i, j) = 1 / time_{i,j}^WD, (11)

such that

time_{i,j}^WD = Distance(i, j) / V_{t+1}^WD. (12)
Equation (11) shows that the amount of soil being removed is inversely proportional to time. Since time equals distance divided by velocity, time is inversely proportional to velocity and proportional to path length. Furthermore, the amount of soil being removed or added is inversely proportional to the depth of the path. This is because shallow soils are easier to remove than deep soils. Therefore, the amount of soil being removed decreases as the path depth increases. This factor facilitates the exploration of new paths, because less soil is removed from deep paths, as they have been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path needs to be updated when the amount of soil existing on the path changes, using (8).
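The erosion/deposition rule of Eqs. (10)-(12) can be sketched as follows; the PN value is an illustrative assumption, and the helper re-derives the depth from Eq. (8):

```python
import math

PN = 0.99  # sediment transport (gradation) coefficient; illustrative value

def update_soil(soil, length, v_new, v_avg):
    """Eqs. (10)-(12): erode when the drop is at least as fast as the
    average drop, deposit otherwise. Returns (new_soil, new_depth)."""
    depth = length / soil                 # Eq. (8)
    time_ij = length / v_new              # Eq. (12)
    delta_soil = 1.0 / time_ij            # Eq. (11)
    if v_new >= v_avg:                    # erosion branch of Eq. (10)
        soil = PN * soil - delta_soil - math.sqrt(1.0 / depth)
    else:                                 # deposition branch of Eq. (10)
        soil = PN * soil + delta_soil + math.sqrt(1.0 / depth)
    return soil, length / soil            # depth updated again via Eq. (8)
```

A fast drop removes soil (erosion), while a slow drop adds it (deposition), matching the two branches of Eq. (10).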
3.3.3. Soil Transportation Update. In the HCA, we consider the fact that the amount of soil a water drop carries reflects its solution quality. This is done by associating the quality of the water drop's solution with the soil value it carries. Figure 7 illustrates the fact that a water drop with more carried soil has a better solution.

To simulate this association, the amount of soil that the water drop is carrying is updated with a portion of the soil being removed from the path, divided by the last solution quality found by that water drop. Therefore, the water drop with a better solution will carry more soil. This can be expressed as follows:

Soil^WD = Soil^WD + DeltaSoil(i, j) / psi^WD. (13)
The idea behind this step is to let a water drop preserve its previous fitness value. Later, the amount of carried soil is used in updating the velocity of the water drops.
3.3.4. Temperature Update. The final step that occurs at this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

Temp(t + 1) = Temp(t) + DeltaTemp, (14)

where

DeltaTemp = beta x (Temp(t) / DeltaD), if DeltaD > 0;
DeltaTemp = Temp(t) / 10, otherwise, (15)
Figure 7: Relationship between the amount of carried soil and its quality.
and where coefficient beta is determined based on the problem. The difference DeltaD is calculated based on

DeltaD = MaxValue - MinValue, (16)

such that

MaxValue = max[solution quality(WDs)],
MinValue = min[solution quality(WDs)]. (17)
According to (16), the increase in temperature is governed by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and as a result the temperature will increase more. This means that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a few times until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
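Eqs. (14)-(17) reduce to a few lines of code; the default beta below is an illustrative assumption, since the text only says beta is problem-dependent:

```python
def update_temperature(temp, qualities, beta=10.0):
    """Eqs. (14)-(17): raise the temperature faster when the spread
    between the worst and best solution qualities is small."""
    delta_d = max(qualities) - min(qualities)  # Eqs. (16)-(17)
    if delta_d > 0:
        delta_temp = beta * (temp / delta_d)   # Eq. (15), first case
    else:
        delta_temp = temp / 10.0               # Eq. (15), second case
    return temp + delta_temp                   # Eq. (14)
```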
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

ER = Random(1, N), (18)
where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
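Eq. (18) combined with quality-proportionate roulette-wheel selection might look as follows; treating larger quality values as better is our simplification, not something the paper specifies:

```python
import random

def evaporate(drops, qualities):
    """Pick ER = Random(1, N) water drops (Eq. (18)) without replacement,
    with selection probability proportional to solution quality."""
    er = random.randint(1, len(drops))        # evaporation rate
    pool = list(zip(drops, qualities))
    selected = []
    for _ in range(er):
        r = random.uniform(0.0, sum(q for _, q in pool))
        acc = 0.0
        for k, (drop, q) in enumerate(pool):
            acc += q
            if acc >= r:                      # roulette wheel lands here
                selected.append(drop)
                pool.pop(k)                   # this drop has evaporated
                break
    return selected
```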
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off. These operations are useful, as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior between all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been proven to improve solution quality significantly when applied by other algorithms [37].
The condensation stage is considered to be a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that this will reduce the number of iterations needed and help to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (the number of evaporated water drops):

EW = {WD1, WD2, ..., WDn}. (19)
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, determining which operation will take place is based on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process will occur. When the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

OP(WD1, WD2) = Bounce(WD1, WD2), if Similarity < 50%;
OP(WD1, WD2) = Merge(WD1, WD2), if Similarity >= 50%. (20)
We use the Hamming distance [43] to count the number of differing positions in two series of the same length, obtaining a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

WD1_Vel = WD1_Vel + WD2_Vel. (21)
On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them to choose, from all the possible nodes at the flow stage, a node that has a better contribution. The information consists of the weight of each node. The weight measures a node's occurrence within a solution over the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point. With this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

Weight(node) = Node_occurrence / WD_sol. (22)
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage, the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

Soil(i, j) = 0.9 x Soil(i, j), for all (i, j) in BestWD. (23)
The idea is to favor these edges over the other edges in the next cycle.
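The reinforcement step of Eq. (23) is a one-line update over the best drop's edges; the soil_map layout is our illustrative choice:

```python
def reinforce_best(soil_map, best_edges):
    """Eq. (23): reduce soil by 10% on every edge of the best water
    drop's solution so those edges are favored in the next cycle."""
    for edge in best_edges:
        soil_map[edge] *= 0.9
    return soil_map
```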
37 Summarization of HCA Characteristics TheHCA can bedistinguished from other swarm algorithms in the followingways
(1) HCA involves a number of cycles and number ofiterations (multiple iterations per cycle)This gives theadvantage of performing certain tasks (eg solutionimprovements methods) every cycle instead of everyiteration to reduce the computational effort In addi-tion the cyclic nature of the algorithm contributesto self-organization as the system converges to thesolution through feedback from the environmentand internal self-evaluationThe temperature variablecontrols the cycle and its value is updated accordingto the quality of the solution obtained in everyiteration
(2) The flow of the water drops is controlled by path depth and soil amount heuristics (indirect communication). The path depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This supports competition between water drops and prevents one drop from sweeping aside the rest, because a high-velocity water drop can carve more paths (i.e., remove more soil) than a slower one.

HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
    Cycle = Cycle + 1
    // Flow stage
    Repeat
        (a) Iteration = Iteration + 1
        (b) For each water drop
            (1) Choose next node (Eqs. (5)-(8))
            (2) Update velocity (Eq. (9))
            (3) Update soil and depth (Eqs. (10)-(11))
            (4) Update carrying soil (Eq. (13))
        (c) End for
        (d) Calculate the solution fitness
        (e) Update local optimal solution
        (f) Update temperature (Eqs. (14)-(17))
    Until (Temperature = Evaporation temperature)
    Evaporation (WDs)
    Condensation (WDs)
    Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).
(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique, which helps to exploit good solutions. Further, this stage can be used to improve the quality of the obtained solutions and thereby reduce the number of iterations.
(6) The evaporation process is a selection technique that helps the algorithm escape from local optima. Evaporation happens when there are no improvements in the quality of the obtained solutions.
(7) The precipitation stage is used to reinitialize variables and redistribute the WDs in the search space to generate new solutions.
Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
12 Journal of Optimization
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify the evaporation rate (Eq. (18))
    Select WDs to evaporate
End

Condensation procedure (WDs)
    Information exchange (WDs) (Eqs. (20)-(22))
    Identify the collector (WD)
    Update global solution
    Update the temperature
End

Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (belonging to best WDs) (Eq. (23))
    Generate a new WDs population
    Distribute the WDs randomly
End

Pseudocode 2: The evaporation, condensation, and precipitation pseudocode.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea, according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry or drop soil, changing the topography of the ground by making some paths deeper and fading others. This represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is considered a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value. The temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that there is no consideration of the condensation stage in WCA, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and the solution construction differ in the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major
differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA, the carrying soil is encoded with the solution quality: more carrying soil indicates a better solution. Finally, in HCA, three critical stages (i.e., evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer, there are ten nodes labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the number selected from the first layer being multiplied by 0.1, the number selected from the second layer by 0.01, and so on. This can be done easily using
X_value = ∑_(L=1)^(m) n × 10^(−L), (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvement in the last set of iterations
Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node selected in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
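Equation (24) can be sketched in a few lines. This is an illustrative decoding of a digit path into a value in [0, 1), not the authors' implementation:

```python
# Eq. (24): each node n visited at layer L contributes n * 10^(-L).
def decode(digits):
    return sum(n * 10 ** -(layer + 1) for layer, n in enumerate(digits))

# The example from the text: visiting nodes 2, 7, 5, 6, 2, 4 gives 0.275624.
value = decode([2, 7, 5, 6, 2, 4])
```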
4.2. Variable Domains and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for a variable. In some functions, all the variables may have the same domain range, whereas in others, variables may have different ranges. For simplicity, the values obtained by a water drop for each variable can be scaled to the domain of each variable:

V_value = (UB − LB) × Value + LB, (25)
where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [−10, 10], then the real value will be equal to 3.17504. The values of continuous variables are expressed as real numbers. Therefore, the precision (P) of the variables' values has to be identified; this precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than is possible by simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
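A minimal sketch of the scaling in (25), reproducing the example above (illustrative code, not the paper's):

```python
# Eq. (25): map a raw value in [0, 1] onto the variable's domain [LB, UB].
def scale(value, lb, ub):
    return (ub - lb) * value + lb

# The example from the text: 0.658752 on the domain [-10, 10] gives 3.17504.
real_value = scale(0.658752, -10, 10)
```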
Figure 8: Graph representation of continuous variables.

Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | α = 2
Soil updating | PN = 0.99
Initial temperature | 50, β = 10
Maximum temperature | 100

4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a water drop's solution. Each water drop generates a solution for all the variables of the function, so a water drop's solution is composed of a set of visited nodes determined by the number of variables and the precision of each variable:
Solutions = [ WD_s^1: 1, 2, ..., m_(N×P)
              WD_s^2: 1, 2, ..., m_(N×P)
              ...
              WD_s^i: 1, 2, ..., m_(N×P)
              ...
              WD_s^k: 1, 2, ..., m_(N×P) ], (26)

where row i holds the nodes visited by water drop i.
An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored by placing less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
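The random initialization described above can be sketched as follows; the digit-path encoding mirrors the solution matrix in (26), but the function and variable names are illustrative assumptions:

```python
import random

# Each water drop's solution is a path of digits: one node per layer,
# N variables x P precision digits per variable (see Eq. (26)).
def random_solution(n_vars, precision, rng):
    return [rng.randrange(10) for _ in range(n_vars * precision)]

rng = random.Random(42)
# A population of 10 water drops for a 2-variable function with precision 6.
population = [random_solution(2, 6, rng) for _ in range(10)]
```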
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collisions at the condensation stage. The mutation is applied only on the selected water drops and, moreover, only on randomly selected variables from a water drop's solution. For real numbers, a mutation operation can be implemented as

V_new = 1 − (β × V_old), (27)
where β is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
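The mutation in (27) can be sketched as follows (illustrative code; the random-number generator and names are assumptions):

```python
import random

# Eq. (27): V_new = 1 - (beta * V_old), with beta uniform in [0, 1].
# Large values tend to be flipped toward small ones and vice versa.
def mutate(v_old, rng):
    beta = rng.random()
    return 1 - beta * v_old

rng = random.Random(0)
v_new = mutate(0.9, rng)  # remains in [0, 1] for v_old in [0, 1]
```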
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, considering the depth to equal the distance over the soil (see (8)) will not affect the outcome as supposed.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties, with different types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times, with a maximum number of iterations of 500 for all functions except those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, along with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
Success rate = (Number of successful runs / Total number of runs) × 100. (28)
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved 100% for all of the functions except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The highlighted numbers in the "Best" columns represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
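The acceptance criterion and (28) can be expressed compactly; the code below is an illustrative sketch with made-up run results, not the paper's data:

```python
# Eq. (28): a run is successful when its result is within 0.001 of the optimum.
def success_rate(results, optimum, tol=1e-3):
    successes = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * successes / len(results)

# Four hypothetical runs against an optimum of 0: three fall within 0.001.
rate = success_rate([0.0004, 0.002, 0.0001, 0.0009], optimum=0.0)  # 75.0
```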
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst and average values, the best values, and the average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics.

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | −32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | −10 | 10 | 2 | −2.06261 | Many local minima
(3) | Drop-Wave | −5.12 | 5.12 | 2 | −1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | −0.86901 | Many local minima
(5) | Griewank | −600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | −10 | 10 | 2 | −19.2085 | Many local minima
(7) | Levy | −10 | 10 | 2 | 0 | Many local minima
(8) | Levy N. 13 | −10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | −5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N. 2 | −100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N. 4 | −100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | −500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | −5.12 | 5.12 | 2 | −186.7309 | Many local minima
(14) | Bohachevsky | −100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | −65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | −5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | −10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | −10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | −10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [−1.5, 4] | [−3, 4] | 2 | −1.9133 | Plate-shaped
(21) | Zakharov | −5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | −5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [−3, 3] | [−2, 2] | 2 | −1.0316 | Valley-shaped
(24) | Dixon-Price | −10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | −5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N. 5) | −65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | −100 | 100 | 2 | −1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | −1.8013 | Steep ridges/drops
(29) | Beale | −4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [−5, 10] | [0, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | −2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | −5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | −5 | 5 | 2 | −78.33198 | Other
(34) | Gramacy & Lee (2008) | −2 | 6 | 2 | −0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock | −1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | −5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | −3.86278 | Other
(40) | Rosenbrock | −1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | −4 | 5 | 4 | 0 | Other
(42) | Wood | −5 | 5 | 4 | 0 | Other
(43) | Sphere | −5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | −2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). The last column of each block gives the average number of iterations needed to reach the global solution; values marked with * were highlighted as the better of the two variants.

Number | HCA Worst | HCA Average | HCA Best | HCA Iters | MHCA Worst | MHCA Average | MHCA Best | MHCA Iters
(1) | 8.88E−16 | 8.88E−16 | 8.88E−16 | 220.2 | 8.88E−16 | 8.88E−16 | 8.88E−16 | 279.35
(2) | −2.06E+00 | −2.06E+00 | −2.06E+00 | 362 | −2.06E+00 | −2.06E+00 | −2.06E+00 | 196.1
(3) | −1.00E+00 | −1.00E+00 | −1.00E+00 | 330.8 | −1.00E+00 | −1.00E+00 | −1.00E+00 | 220.4
(4) | −8.69E−01 | −8.69E−01 | −8.69E−01 | 297.65 | −8.69E−01 | −8.69E−01 | −8.69E−01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | −1.92E+01 | −1.92E+01 | −1.92E+01 | 321.7 | −1.92E+01 | −1.92E+01 | −1.92E+01 | 302.75
(7) | 4.23E−07 | 1.31E−07 | 1.50E−32 | 399.5 | 3.60E−07 | 8.65E−08 | 1.50E−32 | 394.8
(8) | 1.63E−05 | 4.70E−06 | 1.35E−31* | 399.45 | 9.97E−06 | 3.06E−06 | 1.60E−09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E−01 | 2.93E−01 | 2.93E−01 | 358.6 | 2.93E−01 | 2.93E−01 | 2.93E−01 | 336.25
(12) | 6.82E−04 | 4.19E−04 | 2.08E−04 | 337.6 | 1.28E−03 | 6.42E−04 | 2.02E−04* | 323.75
(13) | −1.87E+02 | −1.87E+02 | −1.87E+02 | 368 | −1.87E+02 | −1.87E+02 | −1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E−05 | 6.33E−06 | 0.00E+00* | 433.5 | 1.97E−05 | 5.78E−06 | 2.96E−08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | −1.91E+00 | −1.91E+00 | −1.91E+00 | 282.65 | −1.91E+00 | −1.91E+00 | −1.91E+00 | 225.8
(21) | 1.12E−04 | 5.07E−05 | 4.73E−06 | 328.15 | 3.94E−04 | 1.30E−04 | 4.28E−07* | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.15 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 332.4
(24) | 3.08E−04 | 9.69E−05 | 3.89E−06 | 394.25 | 5.46E−04 | 2.67E−04 | 4.10E−08* | 380.55
(25) | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.55 | 2.25E−10 | 3.21E−11 | 0.00E+00 | 282.55
(26) | 9.98E−01 | 9.98E−01 | 9.98E−01 | 354.6 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 370.35
(27) | −9.97E−01 | −9.98E−01 | −1.00E+00 | 354.15 | −9.97E−01 | −9.99E−01 | −1.00E+00 | 344.6
(28) | −1.80E+00 | −1.80E+00 | −1.80E+00 | 428 | −1.80E+00 | −1.80E+00 | −1.80E+00 | 398.1
(29) | 9.25E−05 | 5.25E−05 | 3.41E−06* | 379.9 | 1.33E−04 | 7.37E−05 | 2.56E−05 | 393.75
(30) | 3.98E−01 | 3.98E−01 | 3.98E−01 | 345.8 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | −7.83E+01 | −7.83E+01 | −7.83E+01 | 351 | −7.83E+01 | −7.83E+01 | −7.83E+01 | 341
(34) | −4.29E−01 | −4.29E−01 | −4.29E−01 | 262.05 | −4.29E−01 | −4.29E−01 | −4.29E−01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E−06 | 3.67E−06 | 6.63E−08* | 382.2 | 4.66E−06 | 2.60E−06 | 2.79E−07 | 351.6
(38) | 4.43E−07 | 8.27E−08 | 0.00E+00 | 371.3 | 2.13E−08 | 4.50E−09 | 0.00E+00 | 330.45
(39) | −3.86E+00 | −3.86E+00 | −3.86E+00 | 392.05 | −3.86E+00 | −3.86E+00 | −3.86E+00 | 327.5
(40) | 1.94E−01 | 7.02E−02 | 1.64E−03* | 816.5 | 8.75E−02 | 3.04E−02 | 2.79E−03 | 806
(41) | 2.42E−01 | 9.13E−02 | 3.91E−03 | 863.95 | 1.12E−01 | 4.26E−02 | 1.96E−03* | 840.85
(42) | 1.12E+00 | 2.91E−01 | 7.62E−08 | 753.95 | 2.67E−01 | 6.10E−02 | 1.00E−08* | 694.4
(43) | 1.05E+00 | 3.88E−01 | 1.47E−05* | 879 | 8.96E−01 | 3.10E−01 | 6.51E−03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables.

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
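For readers who want to reproduce this kind of comparison, the W statistic of the Wilcoxon signed-rank test can be sketched in pure Python as below (no tie or zero-difference correction; the two samples are illustrative, not the paper's measurements):

```python
# Minimal Wilcoxon signed-rank W: rank the nonzero paired differences by
# absolute value, sum the ranks of positive and negative differences, and
# report the smaller sum (compared against a critical value, as in Table 8).
def wilcoxon_w(a, b):
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted(diffs, key=abs)
    w_plus = sum(rank for rank, d in enumerate(ranked, start=1) if d > 0)
    w_minus = sum(rank for rank, d in enumerate(ranked, start=1) if d < 0)
    return min(w_plus, w_minus)

w = wilcoxon_w([3.1, 2.9, 4.0, 3.6], [2.8, 3.0, 3.5, 3.2])  # w == 1
```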
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution. Another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The results obtained for both algorithms are displayed in Table 6, along with the average number of iterations for each.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA for functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44] and Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results. We also applied the T-test to obtain P values along with the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows there is a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where, for each function, the lowest value indicates the best performance.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Despite there being no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than those of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. The last column of each block gives the average number of iterations; values marked with * were highlighted as the better result.

Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA Iters | HCA Worst | HCA Avg | HCA Best | HCA Iters
De Jong | 2 | ±2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | 3.00E+00* | 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E−04 | 4.16E−04 | 5.01E−06 | 57 | 0.00E+00 | 0.00E+00 | 0.00E+00* | 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E−02 | 1.34E−03 | 2.81E−05 | 174 | 9.22E−06 | 3.67E−06 | 6.63E−08* | 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E−04 | 4.32E−04 | 1.12E−06 | 623 | 2.03E−09 | 2.14E−10 | 0.00E+00* | 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E−04 | 2.12E−04 | 3.23E−07* | 266 | 1.94E−01 | 7.02E−02 | 1.64E−03 | 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E−03 | 6.01E−03 | 1.34E−07* | 101 | 1.05E+00 | 3.88E−01 | 1.47E−05 | 879
Shaffer | 2 | ±100 | 0 | 9.71E−03 | 1.16E−03 | 2.61E−05 | 8942 | 0.00E+00 | 0.00E+00 | 0.00E+00* | 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3.00003 | 3 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | ±10 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | 3105 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | ±5 | 0 | 3.81E−05 | 1.58E−06 | 1.30E−10* | 15650 | 1.12E+00 | 2.91E−01 | 7.62E−08 | 754
Powell quartic | 4 | ±5 | 0 | 2.87E−09 | 6.09E−10 | 1.12E−11* | 23500 | 2.42E−01 | 9.13E−02 | 3.91E−03 | 864
Mean (iterations) | | | | | | | 7000.6 | | | | 463.8
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations).

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 3680 | 3914.5
(2) | Bohachevsky | 2 | 370 | - | 2649 | 2882.5
(3) | Zakharov | 2 | 620 | 195 | 3281.5 | 2420.5
(4) | Rosenbrock | 2 | 960 | 480 | 3505.5 | 2825.5
(5) | Easom | 2 | 1504 | 1284 | 3546 | 3703.5
(6) | Branin | 2 | 620 | 245 | 3799 | 3937.5
(7) | Goldstein-Price I | 2 | 410 | 231 | 3458 | 4099
(8) | Hartmann | 3 | 582 | 548 | 3920.5 | 3275
(9) | Sphere | 3 | 750 | 338 | 3713 | 3304.5
Mean | | | 710.1 | 474.4 | 3506 | 3374
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithm pair | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
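The Wilcoxon W-values reported in the statistical comparisons above can be reproduced from paired per-function results. The sketch below is a minimal pure-Python implementation of the signed-rank statistic (the sample data are illustrative values of my own, not the paper's measurements):

```python
def wilcoxon_w(paired_a, paired_b):
    """Wilcoxon signed-rank statistic: W = min(sum of + ranks, sum of - ranks)."""
    diffs = [a - b for a, b in zip(paired_a, paired_b) if a != b]  # drop zero differences
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    # assign 1-based ranks by |difference|, averaging ties
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j + 2) / 2.0
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical per-function iteration counts for two algorithms
alg = [120, 340, 95, 500, 210]
hca = [100, 300, 110, 450, 180]
print(wilcoxon_w(alg, hca))  # → 1.0
```

A W-value at or below the critical value for the sample size (the "(at w)" entries in the tables) indicates a significant difference between the paired algorithms.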
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm avoided stagnation and kept generating different solutions at each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and by fluctuations in the water-drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and its global minimum value is −1 at (x1 = π, x2 = π). This function is difficult to solve because the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions at each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized this function after 919 iterations. As Figure 13 shows, the HCA required more iterations to minimize functions with more than two variables because of the larger solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In the HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and to provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function; convergence at iteration 383; global-best and local-best function values over 500 iterations).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One reason for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches that place weights on graph vertices can regard it as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11 The mathematical formulations
Function name Mathematical formulations
Ackley 119891 (119909) = minus119886 exp(minus119887radic 1119889 119889sum119894=11199092119894) minus exp(1119889 119889sum119894=1cos (119888119909119894)) + 119886 + exp (1) 119886 = 20 119887 = 02 and 119888 = 2120587Cross-In-Tray 119891 (119909) = minus00001(10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816sin (1199091) sin (1199092) exp(1003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816100 minus radic11990921 + 11990922120587 1003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816)
10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816 + 1)01Drop-Wave 119891 (119909) = minus1 + cos(12radic11990921 + 11990922)05 (11990921 + 11990922) + 2Gramacy amp Lee (2012) 119891 (119909) = sin (10120587119909)2119909 + (119909 minus 1)4Griewank 119891 (119909) = 119889sum
119894=1
11990921198944000 minus 119889prod119894=1
cos( 119909119894radic119894) + 1Holder Table 119891 (119909) = minus 10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816sin(1199091) cos (1199092) exp(10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161 minus radic11990921 + 11990922120587 1003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816)
10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816Levy 119891 (119909) = sin2 (31205871199081) + 119889minus1sum
119894=1
(119908119894 minus 1)2 [1 + 10 sin2 (120587119908119894 + 1)] + (119908119889 minus 1)2 [1 + sin2 (2120587119908119889)]where 119908119894 = 1 + 119909119894 minus 14 forall119894 = 1 119889
Levy N 13 119891(119909) = sin2(31205871199091) + (1199091 minus 1)2[1 + sin2(31205871199092)] + (1199092 minus 1)2[1 + sin2(31205871199092)]Rastrigin 119891 (119909) = 10119889 + 119889sum
119894=1
[1199092119894 minus 10 cos (2120587119909119894)]Schaffer N 2 119891 (119909) = 05 + sin2 (11990921 + 11990922) minus 05[1 + 0001 (11990921 + 11990922)]2Schaffer N 4 119891 (119909) = 05 + cos (sin (100381610038161003816100381611990921 minus 119909221003816100381610038161003816)) minus 05[1 + 0001 (11990921 + 11990922)]2Schwefel 119891 (119909) = 4189829119889 minus 119889sum
119894=1
119909119894 sin(radic10038161003816100381610038161199091198941003816100381610038161003816)Shubert 119891 (119909) = ( 5sum
119894=1
119894 cos ((119894 + 1) 1199091 + 119894))( 5sum119894=1
119894 cos ((119894 + 1) 1199092 + 119894))Bohachevsky 119891 (119909) = 11990921 + 211990922 minus 03 cos (31205871199091) minus 04 cos (41205871199092) + 07RotatedHyper-Ellipsoid
119891 (119909) = 119889sum119894=1
119894sum119895=1
1199092119895Sphere (Hyper) 119891 (119909) = 119889sum
119894=1
1199092119894Sum Squares 119891 (119909) = 119889sum
119894=1
1198941199092119894Booth 119891 (119909) = (1199091 + 21199092 minus 7)2 minus (21199091 + 1199092 minus 5)2Matyas 119891 (119909) = 026 (11990921 + 11990922) minus 04811990911199092McCormick 119891 (119909) = sin (1199091 + 1199092) + (1199091 + 1199092)2 minus 151199091 + 251199092 + 1Zakharov 119891 (119909) = 119889sum
119894=1
1199092119894 + ( 119889sum119894=1
05119894119909119894)2 + ( 119889sum119894=1
05119894119909119894)4Three-Hump Camel 119891 (119909) = 211990921 minus 10511990941 + 119909616 + 11990911199092 + 11990922
Journal of Optimization 23
Table 11 Continued
Function name Mathematical formulations
Six-Hump Camel 119891 (119909) = (4 minus 2111990921 + 119909413 )11990921 + 11990911199092 + (minus4 + 411990922) 11990922Dixon-Price 119891 (119909) = (1199091 minus 1)2 + 119889sum
119894=1
119894 (21199092119894 minus 119909119894minus1)2Rosenbrock 119891 (119909) = 119889minus1sum
119894=1
[100 (119909119894+1 minus 1199092119894 )2 + (119909119894 minus 1)2]Shekelrsquos Foxholes (DeJong N 5)
119891 (119909) = (0002 + 25sum119894=1
1119894 + (1199091 minus 1198861119894)6 + (1199092 minus 1198862119894)6)minus1
where 119886 = (minus32 minus16 0 16 32 minus32 sdot sdot sdot 0 16 32minus32 minus32 minus32 minus32 minus32 minus16 sdot sdot sdot 32 32 32)Easom 119891 (119909) = minus cos (1199091) cos (1199092) exp (minus (1199091 minus 120587)2 minus (1199092 minus 120587)2)Michalewicz 119891 (119909) = minus 119889sum
119894=1
sin (119909119894) sin2119898 (1198941199092119894120587 ) where 119898 = 10Beale 119891 (119909) = (15 minus 1199091 + 11990911199092)2 + (225 minus 1199091 + 119909111990922)2 + (2625 minus 1199091 + 119909111990932)2Branin 119891 (119909) = 119886 (1199092 minus 11988711990921 + 1198881199091 minus 119903)2 + 119904 (1 minus 119905) cos (1199091) + 119904119886 = 1 119887 = 51(41205872) 119888 = 5120587 119903 = 6 119904 = 10 and 119905 = 18120587Goldstein-Price I 119891 (119909) = [1 + (1199091 + 1199092 + 1)2 (19 minus 141199091 + 311990921 minus 141199092 + 611990911199092 + 311990922)]times [30 + (21199091 minus 31199092)2 (18 minus 321199091 + 1211990921 + 481199092 minus 3611990911199092 + 2711990922)]Goldstein-Price II 119891 (119909) = exp 12 (11990921 + 11990922 minus 25)2 + sin4 (41199091 minus 31199092) + 12 (21199091 + 1199092 minus 10)2Styblinski-Tang 119891 (119909) = 12 119889sum119894=1 (1199094119894 minus 161199092119894 + 5119909119894)Gramacy amp Lee(2008) 119891 (119909) = 1199091 exp (minus11990921 minus 11990922)Martin amp Gaddy 119891 (119909) = (1199091 minus 1199092)2 + ((1199091 + 1199092 minus 10)3 )2Easton and Fenton 119891 (119909) = (12 + 11990921 + 1 + 1199092211990921 + 1199092111990922 + 100(11990911199092)4 ) ( 110)Hartmann 3-D
119891 (119909) = minus 4sum119894=1
120572119894 exp(minus 3sum119895=1
119860 119894119895 (119909119895 minus 119901119894119895)2) where 120572 = (10 12 30 32)119879 119860 = (
(30 10 3001 10 3530 10 3001 10 35
))
119875 = 10minus4((
3689 1170 26734699 4387 74701091 8732 5547381 5743 8828))
Powell 119891 (119909) = 1198894sum119894=1
[(1199094119894minus3 + 101199094119894minus2)2 + 5 (1199094119894minus1 minus 1199094119894)2 + (1199094119894minus2 minus 21199094119894minus1)4 + 10 (1199094119894minus3 minus 1199094119894)4]Wood 119891 (1199091 1199092 1199093 1199094) = 100 (1199091 minus 1199092)2 + (1199091 minus 1)2 + (1199093 minus 1)2 + 90 (11990923 minus 1199094)2+ 101 ((1199092 minus 1)2 + (1199094 minus 1)2) + 198 (1199092 minus 1) (1199094 minus 1)DeJong max 119891 (119909) = (390593) minus 100 (11990921 minus 1199092)2 minus (1 minus 1199091)2
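A few of the Table 11 benchmark functions can be coded directly and evaluated at their known optima as a sanity check. This is an illustrative sketch (function names are mine), not part of the published HCA code:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Ackley function: global minimum 0 at the origin."""
    d = len(x)
    s1 = sum(xi * xi for xi in x)
    s2 = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(s1 / d))
            - math.exp(s2 / d) + a + math.e)

def rosenbrock(x):
    """Rosenbrock function: global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def easom(x1, x2):
    """Easom function: global minimum -1 at (pi, pi)."""
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(ackley([0.0, 0.0]))          # ~0 at the origin
print(rosenbrock([1.0, 1.0, 1.0])) # 0 at (1, 1, 1)
print(easom(math.pi, math.pi))     # -1 at (pi, pi)
```

Evaluating at the optima listed in the experiment tables (e.g., −1 for Easom) confirms the formulations above.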
Figure 13: Convergence of the HCA on the Hypersphere function with six variables (global-best solution; convergence at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snášel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrøm and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm – a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method – a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
used to tackle COPs and distinguishes HCA from related algorithms. Section 3 describes the hydrological cycle in nature and explains the proposed HCA. Section 4 outlines the experimental setup, problem representation, results obtained, and the comparative evaluation conducted with other algorithms. Finally, Section 5 presents concluding remarks and outlines plans for further research.
2. Previous Swarm Intelligence Approaches for Solving Continuous Problems
Many metaheuristic optimization algorithms have been designed, and their performance has been evaluated on benchmark functions. Among these, nature-inspired algorithms have received considerable attention. Each algorithm uses a different approach and representation to solve the target problem. It is worth noting that our focus is on studies that solve unconstrained continuous functions with few variables: if a new algorithm cannot deal with these cases, there is little chance of it dealing with more complex functions.
Ant colony optimization (ACO) is one of the best-known techniques for solving discrete optimization problems and has been extended to deal with COPs in many studies. For example, in [6] the authors divided the function domain into a number of regions representing the trial solution space for the ants to explore. They then generated two types of ants (global and local ants) and distributed them to explore these regions. The global ants were responsible for updating the fitness of the regions globally, whereas the local ants did so locally. The approach incorporated mutation and crossover operations to improve solution quality.
Socha and Dorigo [7] extended the ACO algorithm to tackle continuous-domain problems. The extension used continuous probability distributions instead of discrete ones, incorporating the Gaussian probability density function to choose the next point. The pheromone evaporation procedure was modified so that old solutions were removed from the candidate solution pool and replaced with new, better solutions, and a weight associated with each ant's solution represented its fitness. Other ACO-inspired approaches with various implementations that do not follow the original ACO algorithm structure have also been developed to solve COPs [8–11], and the performance of ACO on continuous function optimization has been evaluated further in [12]. One problem with ACO is that path reinforcement is the only method for sharing information between the ants, and it is indirect: a heavily pheromone-reinforced path is naturally considered a good path for other ants to follow. No other mechanism exists to let ants share information for exploration purposes, except indirectly through the evaporation of pheromone over time. Exploration therefore diminishes over time with ACO, which suits problems where there may be only one global solution. But if the aim is to find a number of alternative solutions, the rate of evaporation has to be strictly controlled to ensure that no single path dominates. Evaporation control can lead to other side effects in the classic ACO paradigm, such as longer convergence times, resulting in the need to add possibly nonintuitive information-sharing mechanisms, such as global updates, for exploration purposes.
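The Gaussian sampling step of continuous ACO can be illustrated with a minimal sketch in the spirit of Socha and Dorigo's method: a weighted archive of solutions guides where new candidates are drawn. The archive size, weights, and deviation rule below are simplified assumptions of mine, not the exact published parameterization:

```python
import random

def sample_new_solution(archive, xi=0.85):
    """Sample one candidate: pick an archive solution by weight, then draw
    each variable from a Gaussian centred on it.
    `archive` is a list of (solution_vector, weight) pairs, best first."""
    solutions, weights = zip(*archive)
    guide = random.choices(solutions, weights=weights, k=1)[0]
    n = len(solutions)
    new = []
    for j, mean in enumerate(guide):
        # deviation: mean distance from the guide to the other archive members
        sigma = xi * sum(abs(s[j] - mean) for s in solutions) / max(n - 1, 1)
        new.append(random.gauss(mean, sigma))
    return new

random.seed(0)
archive = [([0.0, 0.0], 0.5), ([0.5, -0.2], 0.3), ([1.0, 0.4], 0.2)]
print(sample_new_solution(archive))  # a 2-variable candidate near the archive
```

Better candidates replace worse archive members over iterations, which plays the role that pheromone reinforcement plays in discrete ACO.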
Although strictly speaking they are not a swarm intelligence approach, genetic algorithms (GAs) [13] have been applied to many COPs. A GA uses simple operations such as crossover and mutation to manipulate and direct the population and to help escape from local optima. The values of the variables are encoded using binary or real numbers [14]. However, as with other algorithms, GA performance can depend critically on the initial population's values, which are usually random. This was clarified by Maaranen et al. [15], who investigated the importance and effect of the initial population values on the quality of the final solution. In [16], the authors also focused on the choice of initial population values and modified the algorithm to intensify the search in the most promising area of the solution space. The main problem with standard GAs for optimization continues to be the lack of "long-termism", where fitness for exploitation may need to be secondary to allow adequate exploration of the problem space for alternative solutions. One possible way to deal with this problem is to hybridize the GA with other approaches, as in [17], where a hybrid GA and particle swarm optimization (PSO) is proposed for solving multimodal functions.
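The real-coded crossover and mutation operators mentioned above can be sketched briefly. The BLX-alpha-style blend and the Gaussian mutation below are common textbook variants, not the specific operators of the cited studies:

```python
import random

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha-style crossover for real-coded chromosomes:
    each child gene is drawn uniformly from an interval extended
    beyond the two parent genes by alpha * their span."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def mutate(chrom, rate=0.1, scale=0.5):
    """Gaussian mutation: perturb each gene with probability `rate`."""
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in chrom]

random.seed(3)
child = mutate(blend_crossover([1.0, 2.0], [3.0, 0.0]))
print(child)  # a real-valued child chromosome near its parents
```

Mutation supplies the exploration pressure that the prose above identifies as the weak point of exploitation-heavy standard GAs.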
Kennedy and Eberhart [18] examined the performance of the PSO algorithm on some continuous optimization functions. Later researchers, such as Chen et al. [19], modified the PSO algorithm to help overcome the problem of premature convergence. PSO has also been used for global optimization [20] and hybridized with ACO to improve performance on high-dimensional multimodal functions [21]. A comparative study between GA and PSO on numerical benchmark problems can be found in [22]. Other versions of the PSO algorithm add crossover and mutation operators to improve performance, as in [23]. Once such GA operators are introduced, however, the question arises as to what advantages a PSO approach has over a standard evolutionary approach to the same problem.
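The canonical PSO update combines inertia with a cognitive pull towards the particle's own best position and a social pull towards the swarm best. A minimal single-particle sketch follows; the coefficient values are common defaults, not tied to any of the cited studies:

```python
import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle:
    v' = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x' = x + v'."""
    new_vel = [w * v
               + c1 * random.random() * (pb - x)
               + c2 * random.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

random.seed(42)
pos, vel = [3.0, -2.0], [0.0, 0.0]
pos, vel = pso_step(pos, vel, pbest=[2.5, -1.0], gbest=[0.0, 0.0])
print(pos)  # the particle moves towards the personal and global bests
```

Premature convergence arises when all particles collapse onto the same `gbest` region, which is what the modifications in [19, 23] aim to counteract.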
The bees algorithm (BA) has been used to solve a number of benchmark functions [24]. This algorithm simulates the behavior of swarms of honey bees during food foraging. In BA, scout bees search for food sources by moving randomly from one place to another. On returning to the hive, they share information with the other bees about the location, distance, and quality of the food source. The algorithm produces good results for several benchmark functions. However, the information sharing is direct and confined to subsets of bees, leading to the possibility of convergence to a local optimum or divergence away from the global optimum.
The grenade explosion method (GEM) is an evolutionary algorithm which has been used to optimize real-valued problems [25]. The GEM is based on the mechanism of a grenade explosion and is intended to overcome problems associated with "crowding", where candidate solutions have converged to a local minimum. When a grenade explodes, shrapnel pieces are thrown to destroy objects in the vicinity of the explosion. In the GEM, losses are calculated
Figure 1: Representation of a continuous problem in the IWD algorithm (a directed graph of M × P nodes whose edges are labeled 0 or 1).
for each piece of shrapnel, and the effect of each piece is calculated. The highest loss effect indicates valuable objects in that area; another grenade is then thrown at that area to cause greater loss, and so on. Overall, this method was able to find the global minimum faster than other algorithms in seven out of eight benchmark tests. However, the algorithm requires the maintenance of a nonintuitive territory radius, preventing shrapnel pieces from coming too close to each other and forcing the shrapnel to spread uniformly over the search space. Also required is an explosion range for the shrapnel. These control parameters work well with problem spaces containing many local optima, so that global optima can be found after suitable exploration, but the relationship between these control parameters and problem spaces of unknown topology is not clear.
2.1. Water-Based Algorithms. Algorithms inspired by water flow have become increasingly common for tackling optimization problems. Two of the best-known examples are intelligent water drops (IWD) and the water cycle algorithm (WCA). These algorithms have been shown to be successful in many applications, including various COPs. The IWD algorithm is a swarm-based algorithm inspired by the ground flow of water in nature and the actions and reactions that occur during that process [26]. In IWD, a water drop has two properties: movement velocity and the amount of soil it carries. These properties are changed and updated as each water drop moves from one point to another. The velocity of a water drop is affected and determined only by the amount of soil between two points, while the amount of soil it carries equals the amount of soil removed from a path [26].
The IWD has also been used to solve COPs. Shah-Hosseini [27] utilized binary variable values in the IWD and created a directed graph of (M × P) nodes, in which M identifies the number of variables and P identifies the precision (the number of bits for each variable), which was assumed to be 32. The graph comprised 2 × (M × P) directed edges connecting the nodes, with each edge having a value of zero or one, as depicted in Figure 1. One of the advantages of this representation is that it is intuitively consistent with the natural idea of water flowing in a specific direction.
The IWD algorithm was also augmented with a mutation-based local search to enhance its solution quality. In this process, an edge is selected at random from a solution and replaced by another edge, which is kept if it improves the fitness value; the process is repeated 100 times for each solution. This mutation is applied to all the generated solutions, and then the soil is updated on the edges belonging to the best solution. However, the variables' values have to be converted from binary to real numbers to be evaluated.
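To make the binary-to-real conversion concrete, the following is a minimal Python sketch of how one such bit-string solution could be decoded. The linear scaling into a variable range [lo, hi] is an assumption of this sketch (the paper only states that P bits per variable are used, with P assumed to be 32); a 4-bit example is used for readability.

```python
def decode_binary_solution(bits, num_vars, precision, lo, hi):
    """Decode a flat bit list (num_vars * precision bits) into real values.

    Each variable's bits are read as an unsigned integer and scaled
    linearly into [lo, hi]. The range and scaling rule are illustrative
    assumptions, not part of the original IWD formulation.
    """
    values = []
    max_int = 2 ** precision - 1
    for v in range(num_vars):
        chunk = bits[v * precision:(v + 1) * precision]
        as_int = int("".join(str(b) for b in chunk), 2)
        values.append(lo + (hi - lo) * as_int / max_int)
    return values

# Two variables, 4 bits each, mapped into [-5, 5]
print(decode_binary_solution([1, 1, 1, 1, 0, 0, 0, 0], 2, 4, -5.0, 5.0))  # [5.0, -5.0]
```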
Also, once mutation (but not crossover) is introduced into IWD, the question arises as to what advantages IWD has over standard evolutionary algorithms that use simpler notations for representing continuous numbers. Detailed differences between IWD and our HCA are described in Section 3.9.
Various researchers have applied the WCA to COPs [28, 29]. The WCA is also based on the ground flow of water through rivers and streams to the sea. The algorithm constructs a solution using a population of streams, where each stream is a candidate solution. It initially generates random solutions for the problem by distributing the streams into different positions according to the problem's dimensions. The best stream is considered a sea, and a specific number of streams are considered rivers. Then the streams are made to flow towards the positions of the rivers and sea. The position of the sea indicates the global-best solution, whereas those of the rivers indicate the local-best solution points. In each iteration, the position of each stream may be swapped with any of the river positions or (counterintuitively) the sea position, according to the fitness of that stream. Therefore, if a stream solution is better than that of a river, that stream becomes a river and the corresponding river becomes a stream. The same process occurs between rivers and the sea. Evaporation occurs in the rivers/streams when the distance between a river/stream and the sea is very small (close to zero). Following evaporation, new streams are generated with different positions in a process that can be described as rain. These processes help the algorithm to escape from local optima.
WCA treats streams, rivers, and the sea as a set of moving points in a multidimensional space and does not consider stream/river formation (movements of water and soil). Therefore, the overall structure of the WCA is quite similar to that of the PSO algorithm. For instance, the PSO algorithm has Gbest and Pbest points that indicate the global and local solutions, with the Gbest point guiding the Pbest points to converge to it. In a similar manner, the WCA has a number of rivers that represent the local-optimal solutions, and a sea position that represents the global-optimal solution. The sea is used as a guide for the streams and rivers. Moreover, in PSO a particle at the Pbest point updates its position towards the Gbest point. Analogously, the WCA exchanges the positions of rivers with that of the sea when the fitness of a river is better. The main difference is WCA's evaporation and raining stages, which help the algorithm to escape from local optima.
An improved version of WCA, called the dual-system WCA (DSWCA), has been presented for solving constrained engineering optimization problems [30]. The DSWCA improvements involved adding two cycles (outer and inner). The outer cycle is in accordance with the WCA and is designed to perform exploration. The inner cycle, which is based on the ocean
cycle, was designed as an exploitation process. A confluence-between-rivers process is also included to improve convergence speed. The improvements aim to help the original WCA avoid falling into local optimal solutions. The DSWCA was able to provide better results than WCA. In a further extension of WCA, Heidari et al. proposed a chaotic WCA (CWCA) that incorporates thirteen chaotic patterns into the WCA movement process to improve WCA's performance and deal with the premature convergence problem [31]. The experimental results demonstrated improvement in controlling the premature convergence problem. However, WCA and its extensions introduce many additional features, many of which are not compatible with nature-inspiration, to improve performance and make WCA competitive with non-water-based algorithms. Detailed differences between WCA and our HCA are described in Section 3.9.
The review of previous approaches shows that nature-inspired swarm approaches tend to introduce evolutionary concepts of mutation and crossover for sharing information, to adopt nonplausible global data structures to prevent convergence to local optima, or to add more centralized control parameters to preserve the balance between exploration and exploitation in increasingly difficult problem domains, thereby compromising self-organization. When such approaches are modified to deal with COPs, complex and unnatural representations of numbers may also be required, adding to the overheads of the algorithm as well as leading to what may appear to be "forced" and unnatural ways of dealing with the complexities of a continuous problem.
In swarm algorithms, a number of cooperative homogeneous entities explore and interact with the environment to achieve a goal, which is usually to find the global-optimal solution to a problem through exploration and exploitation. Cooperation can be accomplished by some form of communication that allows the swarm members to share exploration and exploitation information to produce high-quality solutions [32, 33]. Exploration aims to acquire as much information as possible about promising solutions, while exploitation aims to improve such promising solutions. Controlling the balance between exploration and exploitation is usually considered critical when searching for more refined as well as more diverse solutions. Also important is the amount of exploration and exploitation information to be shared, and when.
Information sharing in nature-inspired algorithms usually includes metrics for evaluating the usefulness of information and mechanisms for how it should be shared. In particular, these metrics and mechanisms should not contain more knowledge or require more intelligence than can be plausibly assigned to the swarm entities, where the emphasis is on the emergence of complex behavior from relatively simple entities interacting in nonintelligent ways with each other. Information sharing can be done either randomly or nonrandomly among entities. Different approaches are used to share information, such as direct or indirect interaction. Two sharing models are one-to-one, in which explicit interaction occurs between entities, and many-to-many (broadcasting), in which interaction occurs indirectly between the entities through the environment [34].
Usually either direct or indirect interaction between entities is employed in an algorithm for information sharing, and the use of both types in the same algorithm is often neglected. For instance, the ACO algorithm uses indirect communication for information sharing, where ants lay amounts of pheromone on paths depending on the usefulness of what they have explored. The more promising the path, the more pheromone is added, leading other ants to exploit this path further. Pheromone is not directed at any other ant in particular. In the artificial bee colony (ABC) algorithm, bees share information directly (specifically, the direction and distance to flowers) in a kind of waggle dancing [35]. The information received by other bees depends on which bee they observe, and not on the information gathered from all bees. In a GA, the information exchange occurs directly in the form of crossover operations between selected members (chromosomes and genes) of the population. The quality of information shared depends on the members chosen, and there is no overall store of information available to all members. In PSO, on the other hand, the particles have their own information as well as access to global information to guide their exploitation. This can lead to competitive and cooperative optimization techniques [36]. However, no attempt is made in standard PSO to share information possessed by individual particles. This lack of individual-to-individual communication can lead to early convergence of the entire population of particles to a nonoptimal solution. Selective information sharing between individual particles will not always avoid this narrow exploitation problem, but if undertaken properly it could allow the particles to find alternative and more diverse solution spaces where the global optimum lies.
As can be seen from the above discussion, the distinctions between communication (direct and indirect) and information sharing (random or nonrandom) can become blurred. In HCA, we utilize both direct and indirect communication to share information among selected members of the whole population. Direct communication occurs in the condensation stage of the water cycle, where some water drops collide with each other when they evaporate into the cloud. On the other hand, indirect communication occurs between the water drops and the environment via the soil and path depth. Utilizing information sharing helps to diversify the search space and improve the overall performance through better exploitation. In particular, we show how HCA with direct and indirect communication suits continuous problems, where individual particles share useful information directly when searching a continuous space.
Furthermore, in some algorithms, indirect communication is used not just to find a possible solution but also to reinforce an already-found promising solution. Such reinforcement may lead the algorithm to get stuck in the same solution, which is known as stagnation behavior [37]. Such reinforcement can also cause premature convergence in some problems. An important feature used in HCA to avoid this problem is the path depth. The depths of paths are constantly changing, and deep paths have less chance of being chosen than shallow paths. More details about this feature are provided in Section 3.3.
In summary, this paper aims to introduce a novel nature-inspired approach for dealing with COPs, which also introduces a continuous number representation that is natural for the algorithm. The cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through direct and indirect communication between particles, feedback from the environment, and internal self-evaluation. Finally, our HCA addresses some of the weaknesses of other similar algorithms: HCA is inspired by the full water cycle, not parts of it as exemplified by IWD and WCA.
3. The Hydrological Cycle Algorithm
The water cycle, also known as the hydrologic cycle, describes the continuous movement of water in nature [38]. It is renewable because water particles are constantly circulating from the land to the sky and vice versa; therefore, the amount of water remains constant. Water drops move from one place to another by natural physical processes, such as evaporation, precipitation, condensation, and runoff. In these processes, the water passes through different states.
3.1. Inspiration. The water cycle starts when surface water gets heated by the sun, causing water from different sources to change into vapor and evaporate into the air. Then the water vapor cools and condenses as it rises, becoming drops. These drops combine and collect with each other and eventually become so saturated and dense that they fall back because of the force of gravity, in a process called precipitation. The resulting water flows on the surface of the Earth into ponds, lakes, or oceans, where it again evaporates back into the atmosphere. Then the entire cycle is repeated.
In the flow (runoff) stage, the water drops start moving from one location to another based on Earth's gravity and the ground topography. When the water is scattered, it chooses a path with less soil and fewer obstacles. Assuming no obstacles exist, water drops will take the shortest path towards the center of the Earth (i.e., the oceans), owing to the force of gravity. Water drops have force because of this gravity, and this force causes changes in motion depending on the topology. As the water flows in rivers and streams, it picks up sediments and transports them away from their original location. These processes are known as erosion and deposition. They help to create the topography of the ground by making new paths with more soil removal, or by the fading of other paths as they become less likely to be chosen owing to too much soil deposition. These processes are highly dependent on water velocity. For instance, a river is continually picking up and dropping off sediments from one point to another. When the river flow is fast, soil particles are picked up, and the opposite happens when the river flow is slow. Moreover, there is a strong relationship between the water velocity and the suspension load (the amount of dissolved soil in the water) that it currently carries: the water velocity is higher when only a small amount of soil is being carried and lower when larger amounts are carried.
In addition, the term "flow rate" or "discharge" can be defined as the volume of water in a river passing a defined point over a specific period. Mathematically, the flow rate is calculated based on the following equation [39]:

Q = V × A ⟹ Q = V × (W × D), (3)

where Q is the flow rate, A is the cross-sectional area, V is the velocity, W is the width, and D is the depth. The cross-sectional area is calculated by multiplying the depth by the width of the stream. The velocity of the flowing water is defined as the quantity of discharge divided by the cross-sectional area. Therefore, there is an inverse relationship between the velocity and the cross-sectional area [40]: the velocity increases when the cross-sectional area is small, and vice versa. If we assume that the river has a constant flow rate (velocity × cross-sectional area = constant) and that the width of the river is fixed, then we can conclude that the velocity is inversely proportional to the depth of the river, in which case the velocity decreases in deep water and vice versa. Therefore, the amount of soil deposited increases as the velocity decreases. This explains why less soil is taken from deep water and more soil is taken from shallow water: a deep river will have water moving more slowly than a shallow river.
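The inverse velocity–depth relationship implied by equation (3) can be verified with a one-line Python sketch; the numeric values are purely illustrative.

```python
def velocity_from_discharge(flow_rate, width, depth):
    """Velocity V = Q / A, with cross-sectional area A = W * D (equation (3))."""
    return flow_rate / (width * depth)

# With a constant flow rate and a fixed width, doubling the depth halves the velocity.
print(velocity_from_discharge(100.0, 5.0, 2.0))  # 10.0
print(velocity_from_discharge(100.0, 5.0, 4.0))  # 5.0
```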
In addition to river depth and the force of gravity, there are other factors that affect the flow velocity of water drops and cause them to accelerate or decelerate. These factors include obstructions, the amount of soil existing in the path, the topography of the land, variations in channel cross-sectional area, and the amount of dissolved soil being carried (the suspension load).
Water evaporation increases with temperature, with maximum evaporation at 100°C (the boiling point). Conversely, condensation increases as the water vapor cools towards 0°C. The evaporation and condensation stages play important roles in the water cycle. Evaporation is necessary to ensure that the system remains in balance. In addition, the evaporation process helps to decrease the temperature and keeps the air moist. Condensation is a very important process, as it completes the water cycle. When water vapor condenses, it releases the same amount of thermal energy into the environment that was needed to make it a vapor. However, in nature it is difficult to measure the precise evaporation rate, owing to the existence of different sources of evaporation; hence estimation techniques are required [38].
The condensation process starts when the water vapor rises into the sky; then, as the temperature decreases in the higher layers of the atmosphere, the particles with lower temperature have more chance to condense [41]. Figure 2(a) illustrates the situation where there are no attractions (or connections) between the particles at high temperature. As the temperature decreases, the particles collide and agglomerate to form small clusters, leading to clouds, as shown in Figure 2(b).
An interesting phenomenon, in which some of the large water drops may eliminate other smaller water drops, occurs during the condensation process, as some of the water drops, known as collectors, fall from a higher layer to a lower layer in the atmosphere (in the cloud). Figure 3 depicts the action of a water drop falling and colliding with smaller
Figure 2: Illustration of the effect of temperature on water drops: (a) hot water; (b) cold water.
Figure 3: A collector water drop in action.
water drops. Consequently, the water drop grows as a result of coalescence with the other water drops [42].
Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground. Of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect the water drops flowing through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of other key concepts taken from the full hydrological process.
Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: flow (runoff), evaporation, condensation, and precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

HCA = {WD1, WD2, WD3, ..., WDn}, where n ≥ 1. (4)
The characteristics associated with each water drop are as follows:

(i) V_WD, the velocity of the water drop;
(ii) Soil_WD, the amount of soil the water drop carries;
(iii) ψ_WD, the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph G = (N, E), where N is the set of nodes and E is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph (G) according to various rules via the edges (E), searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.
The initial amount of soil is the same for all the edges. The depth measures the distance from the water surface to the riverbed, and it can be calculated by dividing the length of the edge by the amount of soil. Therefore, the depth varies and depends on the amount of soil that exists on the path and on its length. The depth of a path increases when more soil is removed over the same length. A deep path can be interpreted as either the path being lengthy or there being less soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.
The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution for the problem by continuously moving between the nodes. Each iteration consists of specific steps (explained below). A cycle, on the other hand, represents a varied number of iterations. A cycle is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
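The cycle/iteration scheme above can be sketched as a Python control-flow skeleton. Only the loop structure is taken from the text; the stage bodies, the initial velocity of 100, and the fixed temperature increment are placeholders of this sketch, not the paper's actual update rules (those are given in the equations of the following subsections).

```python
import random

class WaterDrop:
    """Per-drop state used throughout the stages: velocity, carried soil,
    solution quality, and the solution under construction."""
    def __init__(self):
        self.velocity = 100.0          # illustrative starting velocity (assumption)
        self.carried_soil = 0.0
        self.quality = float("inf")    # lower is better (minimisation assumed)
        self.solution = []

def run_hca_skeleton(num_drops, num_cycles, temp_max=100.0):
    """Control-flow sketch only: iterations repeat inside a cycle until the
    temperature reaches temp_max; evaporation, condensation, and
    precipitation then close the cycle."""
    drops = [WaterDrop() for _ in range(num_drops)]
    global_best = float("inf")
    for _ in range(num_cycles):
        temperature = 0.0
        while temperature < temp_max:
            for drop in drops:                     # flow stage: one full iteration
                drop.solution = [random.random()]  # placeholder construction step
                drop.quality = drop.solution[0]
            temperature += 25.0                    # placeholder for equation (14)
        # Evaporation, condensation, and precipitation would go here;
        # the global best is recorded before the dynamic variables are reset.
        global_best = min(global_best, min(d.quality for d in drops))
    return global_best

random.seed(42)
print(run_hca_skeleton(num_drops=10, num_cycles=3))
```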
3.3. Flow Stage (Runoff). This stage represents the construction stage, in which the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions and construct various solutions. Each water drop constructs a
Figure 4: Depth increases as length increases for the same amount of soil (e.g., with Soil = 500, an edge of Length 10 has Depth 10/500 = 0.02, while an edge of Length 20 has Depth 20/500 = 0.04).
Figure 5: Depth increases when the amount of soil is reduced over the same length (e.g., with Length = 10, Soil = 1000 gives Depth = 10/1000 = 0.01, while Soil = 500 gives Depth = 10/500 = 0.02).
solution incrementally by moving from one point to another. Whenever a water drop wants to move towards a new node, it has to choose between different numbers of nodes (various branches). It calculates the probability of all the unvisited nodes and chooses the highest-probability node, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit. In HCA, the node with the highest probability will be selected. The probability of the node is calculated using

P_WD_i(j) = [f(Soil(i,j))² × g(Depth(i,j))] / Σ_{k∉vc(WD)} [f(Soil(i,k))² × g(Depth(i,k))], (5)

where P_WD_i(j) is the probability of choosing node j from node i (the sum runs over the unvisited nodes k), and f(Soil(i,j)) is equal to the inverse of the soil between i and j, calculated using

f(Soil(i,j)) = 1/(ε + Soil(i,j)). (6)

Here ε = 0.01 is a small value that is used to prevent division by zero. The second factor of the transition rule is the inverse of depth, which is calculated based on

g(Depth(i,j)) = 1/Depth(i,j). (7)

Depth(i,j) is the depth between nodes i and j, and is calculated by dividing the length of the path by the amount of soil:

Depth(i,j) = Length(i,j)/Soil(i,j). (8)
The depth value can be normalized to be in the range [1–100], as it might be very small. The depth of the path is constantly changing, because it is influenced by the amount of existing
Figure 6: The relationship between soil and depth over the same length (water drops choosing between nodes A and B after node S).
soil, where the depth is inversely proportional to the amount of soil existing in a path of fixed length. Water drops will choose paths with less depth over other paths. This enables exploration of new paths and diversifies the generated solutions. Assume the following scenario (depicted in Figure 6), where water drops have to choose between nodes A and B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents soil amount); consequently, the depth values of the two edges will be the same. After some water drops choose node A to visit, edge (S-A) as a result has less soil and is deeper. According to the new state transition rule, the next water drops will be forced to choose node B, because edge (S-B) will be shallower than (S-A). This technique provides a way to avoid stagnation and to explore the search space efficiently.
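The state transition rule of equations (5)–(8) can be sketched in a few lines of Python. Representing the edges as (i, j)-keyed dictionaries is an assumption of this sketch, not the paper's data structure, and the soil/length values below are illustrative.

```python
EPSILON = 0.01  # the epsilon of equation (6)

def edge_depth(length, soil):
    """Equation (8): Depth(i, j) = Length(i, j) / Soil(i, j)."""
    return length / soil

def choose_next_node(current, unvisited, soil, length):
    """State transition rule of equations (5)-(7): score each candidate j by
    f(Soil(i, j))^2 * g(Depth(i, j)), normalise the scores into
    probabilities, and pick the highest-probability node."""
    def score(j):
        f = 1.0 / (EPSILON + soil[(current, j)])                       # equation (6)
        g = 1.0 / edge_depth(length[(current, j)], soil[(current, j)])  # equation (7)
        return f * f * g
    total = sum(score(j) for j in unvisited)
    probs = {j: score(j) / total for j in unvisited}                    # equation (5)
    return max(probs, key=probs.get), probs

soil = {("S", "A"): 500.0, ("S", "B"): 1000.0}
length = {("S", "A"): 10.0, ("S", "B"): 10.0}
node, probs = choose_next_node("S", ["A", "B"], soil, length)
print(node, round(probs[node], 3))
```

With equal lengths, the squared inverse-soil factor dominates the inverse-depth factor here, so the edge with less soil receives the higher probability.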
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity (V). The velocity of a water drop might increase or decrease while it is moving, based on the factors mentioned above (the amount of soil existing on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time (t + 1) can be calculated using
V_WD(t+1) = [K × V_WD(t)] + α × (V_WD/Soil(i,j)) + √(V_WD/Soil_WD) + (100/ψ_WD) + √(V_WD/Depth(i,j)). (9)
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where V_WD(t) is the current water drop velocity and K is a uniformly distributed random number between [0, 1] that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop velocity to decrease. Owing to difficulties in measuring the exact effect of these factors, some randomness is incorporated into the velocity.
The second term of the expression in (9) reflects the relationship between the amount of soil existing on a path and the velocity of a water drop: the velocity of the water drops increases when they traverse paths with less soil. Alpha (α) is a relative influence coefficient that emphasizes this part of the velocity update equation. The third term of the expression represents the relationship between the amount of soil that a water drop is currently carrying and its velocity; water drop velocity increases with a decreasing amount of carried soil. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term of the expression refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term of the expression indicates that a water drop will be slower in a deep path than in a shallow path.
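The term-by-term description above can be sketched directly from equation (9). The default α = 2.0 and the argument values are illustrative assumptions, not parameter settings taken from the paper.

```python
import random

def update_velocity(v_t, edge_soil, edge_depth, carried_soil, quality, alpha=2.0):
    """Sketch of equation (9). K ~ U[0, 1] is the roughness coefficient and
    alpha the relative influence coefficient."""
    k = random.uniform(0.0, 1.0)           # roughness coefficient K
    return (k * v_t                        # randomly damped current velocity
            + alpha * (v_t / edge_soil)    # less soil on the edge -> faster
            + (v_t / carried_soil) ** 0.5  # carrying less soil -> faster
            + 100.0 / quality              # inverse of the solution quality (fitness term)
            + (v_t / edge_depth) ** 0.5)   # a deeper path contributes less -> slower

random.seed(3)
print(update_velocity(v_t=100.0, edge_soil=500.0, edge_depth=0.02,
                      carried_soil=10.0, quality=50.0))
```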
3.3.2. Soil Update. Next, the amount of soil existing on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by
Soil(i,j) = [PN × Soil(i,j)] − ΔSoil(i,j) − √(1/Depth(i,j)), if V_WD ≥ Avg(all V_WDs) (erosion);
Soil(i,j) = [PN × Soil(i,j)] + ΔSoil(i,j) + √(1/Depth(i,j)), otherwise (deposition). (10)
PN represents a coefficient (i.e., the sediment transport rate, or gradation coefficient) that may affect the reduction in the amount of soil. The soil can be removed only if the current water drop velocity is greater than the average velocity of all the water drops; otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, usually from fast to slow parts of the path. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in local optimal solutions. The amount of soil existing between node i and node j is calculated and changed using

ΔSoil(i,j) = 1/time_WD(i,j), (11)

such that

time_WD(i,j) = Distance(i,j)/V_WD(t+1). (12)
Equation (11) shows that the amount of soil removed is inversely proportional to time. Based on the second law of motion, time is equal to distance divided by velocity; therefore, time is proportional to path length and inversely proportional to velocity. Furthermore, the amount of soil being removed or added is inversely proportional to the depth of the path. This is because shallow soils are easier to remove than deep soils. Therefore, the amount of soil removed decreases as the path depth increases. This factor facilitates exploration of new paths, because less soil is removed from deep paths, which have been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path needs to be updated when the amount of soil existing on the path changes, using (8).
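Equations (10)–(12), together with the depth refresh via (8), can be sketched as follows. PN = 0.9 and the numeric values in the usage example are illustrative assumptions, not parameter settings from the paper.

```python
def soil_update(edge_soil, edge_length, edge_depth, v_new, avg_velocity, pn=0.9):
    """Sketch of equations (10)-(12): erosion when the drop moves faster than
    the average of all drops, deposition otherwise."""
    travel_time = edge_length / v_new      # equation (12): time = distance / velocity
    delta_soil = 1.0 / travel_time         # equation (11)
    depth_term = (1.0 / edge_depth) ** 0.5
    if v_new >= avg_velocity:              # erosion: soil is removed
        new_soil = pn * edge_soil - delta_soil - depth_term
    else:                                  # deposition: soil is added
        new_soil = pn * edge_soil + delta_soil + depth_term
    new_depth = edge_length / new_soil     # re-apply equation (8)
    return new_soil, new_depth

fast, _ = soil_update(500.0, 10.0, 0.02, v_new=120.0, avg_velocity=100.0)  # erosion
slow, _ = soil_update(500.0, 10.0, 0.02, v_new=80.0, avg_velocity=100.0)   # deposition
print(fast < slow)  # True
```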
3.3.3. Soil Transportation Update. In the HCA, we consider the fact that the amount of soil a water drop carries reflects its solution quality. This is done by associating the quality of the water drop's solution with the amount of soil it carries. Figure 7 illustrates the fact that a water drop carrying more soil has a better solution.
To simulate this association, the amount of soil that the water drop is carrying is updated with a portion of the soil removed from the path, divided by the last solution quality found by that water drop. Therefore, a water drop with a better solution will carry more soil. This can be expressed as follows:

Soil_WD = Soil_WD + ΔSoil(i,j)/ψ_WD. (13)
The idea behind this step is to let a water drop preserve its previous fitness value. Later, the amount of carried soil is used in updating the velocity of the water drops.
3.3.4. Temperature Update. The final step that occurs at this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:
Temp(t + 1) = Temp(t) + ΔTemp, (14)

where

ΔTemp = β × (Temp(t)/ΔD), if ΔD > 0;
ΔTemp = Temp(t)/10, otherwise, (15)
Figure 7: Relationship between the amount of carried soil and solution quality.
and where the coefficient β is determined based on the problem. The difference ΔD is calculated based on

ΔD = MaxValue − MinValue, (16)

such that

MaxValue = max[SolutionsQuality(WDs)],
MinValue = min[SolutionsQuality(WDs)]. (17)
According to (16), the increase in temperature is affected by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and as a result the temperature will increase more. This means that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a few times until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
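Equations (14)–(17) can be sketched in a few lines; β = 0.5 and the quality values are illustrative assumptions (β is problem-dependent in the paper).

```python
def update_temperature(temp, qualities, beta=0.5):
    """Sketch of equations (14)-(17): Delta D is the spread between the worst
    and best solution quality found in the current iteration."""
    delta_d = max(qualities) - min(qualities)   # equations (16)-(17)
    if delta_d > 0:
        delta_temp = beta * (temp / delta_d)    # equation (15), first case
    else:
        delta_temp = temp / 10.0                # equation (15), second case
    return temp + delta_temp                    # equation (14)

# A small quality spread (little recent improvement) heats the system faster,
# pushing the drops towards evaporation sooner.
print(update_temperature(50.0, [10.0, 10.5]))  # 100.0
print(update_temperature(50.0, [10.0, 60.0]))  # 50.5
```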
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:
ER = Random(1, N), (18)
where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
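A minimal sketch of this selection step follows. Weighting the wheel by 1/quality (lower quality value = better solution = more likely to be selected, assuming minimisation) is an assumption of this sketch; the paper only states that roulette-wheel selection considers solution quality.

```python
import random

def evaporate(qualities):
    """Pick ER = Random(1, N) distinct drops (equation (18)) by
    roulette-wheel selection over 1/quality weights."""
    n = len(qualities)
    er = random.randint(1, n)              # equation (18)
    weights = [1.0 / q for q in qualities]
    remaining = list(range(n))
    selected = []
    for _ in range(er):
        total = sum(weights[i] for i in remaining)
        r = random.uniform(0.0, total)
        cumulative = 0.0
        chosen = remaining[-1]             # fallback for floating-point round-off
        for i in remaining:
            cumulative += weights[i]
            if r <= cumulative:
                chosen = i
                break
        selected.append(chosen)
        remaining.remove(chosen)           # sample without replacement
    return selected

random.seed(7)
print(evaporate([12.0, 5.0, 9.0, 20.0]))
```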
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off. These operations are useful, as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior of all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water
drops. Direct communication (information exchange) has been shown to improve solution quality significantly when applied in other algorithms [37].
The condensation stage is considered a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that it will reduce the number of iterations needed and help to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (the number of evaporated water drops):

EW = {WD1, WD2, ..., WDn}. (19)
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, determining which operation takes place depends on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process occurs: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

OP(WD1, WD2) = Bounce(WD1, WD2), if Similarity < 50%;
OP(WD1, WD2) = Merge(WD1, WD2), if Similarity ≥ 50%. (20)
We use the Hamming distance [43], which counts the number of differing positions in two series of the same length, to obtain a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (the collector) becomes more powerful by eliminating the other in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

WD_1^Vel = WD_1^Vel + WD_2^Vel  (21)

On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much each node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them, at the flow stage, to choose the node with the best contribution among all the possible nodes. The information consists of the weight of each node; the weight measures a node's occurrence within a solution relative to the quality of that water drop's solution. The idea behind sharing these details is
Journal of Optimization 11
to emphasize the best nodes found up to that point. Through this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

Weight(node) = Node_occurrence / WD_solution quality  (22)

Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution in every iteration.
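The merge/bounce decision and the node-weight bookkeeping above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the dictionary-based drop representation and the function names are assumptions, while the 50% similarity threshold, Eq. (21), and Eq. (22) follow the text.

```python
def hamming_similarity(sol_a, sol_b):
    """Fraction of positions at which two equal-length solutions agree."""
    matches = sum(1 for a, b in zip(sol_a, sol_b) if a == b)
    return matches / len(sol_a)

def merge(collector, other):
    """Coalescence: the collector absorbs the other drop's velocity (Eq. (21))."""
    collector["velocity"] += other["velocity"]
    return collector

def node_weight(occurrence, solution_quality):
    """Eq. (22): a node's occurrences over the water drop's solution quality."""
    return occurrence / solution_quality

# 12468 vs 13458: Hamming distance 2, so similarity is 3/5 = 0.6 >= 0.5 -> merge.
wd1 = {"solution": [1, 2, 4, 6, 8], "velocity": 100.0}
wd2 = {"solution": [1, 3, 4, 5, 8], "velocity": 80.0}

sim = hamming_similarity(wd1["solution"], wd2["solution"])
if sim >= 0.5:
    collector = merge(wd1, wd2)          # wd1 now carries both velocities
else:
    pass  # bounce off and exchange node-weight information instead
```

With the example drops above, the similarity is 0.6, so the drops merge and the collector's velocity becomes 180.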
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of paths, the velocity of each water drop, and the amount of soil it holds. Reinitializing the parameters helps the algorithm avoid becoming trapped in local optima, which could affect its performance in the next cycle. Moreover, this stage acts as a reinforcement stage, used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

Soil(i, j) = 0.9 × soil(i, j),  ∀(i, j) ∈ BestWD  (23)

The idea behind this is to favor these edges over the other edges in the next cycle.
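Equation (23) amounts to a 10% soil reduction on every edge of the collector's path. A minimal sketch, assuming an edge-keyed soil dictionary (an illustrative representation, not from the paper):

```python
def reinforce_best_path(soil, best_path):
    """Eq. (23): reduce the soil on every edge of the best drop's path by 10%."""
    for i, j in zip(best_path, best_path[1:]):
        soil[(i, j)] *= 0.9
    return soil

# Illustrative soil table (initial soil of 10000 per edge, as in Table 2).
soil = {(0, 1): 10000.0, (1, 2): 10000.0, (2, 3): 10000.0}
reinforce_best_path(soil, [0, 1, 2])   # the best path uses edges (0,1) and (1,2)
```

Only the edges on the best path are touched; edge (2, 3) keeps its soil, so it is comparatively less attractive in the next cycle.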
3.7. Summarization of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:

(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, which reduces the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.

(2) The flow of the water drops is controlled by the path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of water drops and helps to prevent all water drops from choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This supports competition between water drops and prevents one drop from sweeping aside the rest, because a high-velocity water drop can carve more paths (i.e., remove more soil) than a slower one.

(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).

(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique that helps to exploit good solutions. Further, this stage can be used to improve the quality of the obtained solutions and so reduce the number of iterations.

(6) The evaporation process is a selection technique that helps the algorithm escape from local optima. Evaporation happens when there is no improvement in the quality of the obtained solutions.

(7) The precipitation stage is used to reinitialize variables and redistribute the water drops in the search space to generate new solutions.

Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.

HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
      Cycle = Cycle + 1
      // Flow stage
      Repeat
        (a) Iteration = Iteration + 1
        (b) For each water drop
            (1) Choose next node (Eqs. (5)-(8))
            (2) Update velocity (Eq. (9))
            (3) Update soil and depth (Eqs. (10)-(11))
            (4) Update carrying soil (Eq. (13))
        (c) End for
        (d) Calculate the solution fitness
        (e) Update local optimal solution
        (f) Update temperature (Eqs. (14)-(17))
      Until (Temperature = Evaporation temperature)
      Evaporation (WDs)
      Condensation (WDs)
      Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
  Evaluate the solution quality
  Identify the evaporation rate (Eq. (18))
  Select WDs to evaporate
End

Condensation procedure (WDs)
  Information exchange (WDs) (Eqs. (20)-(22))
  Identify the collector (WD)
  Update global solution
  Update the temperature
End

Precipitation procedure (WDs)
  Evaluate the solution quality
  Reinitialize dynamic parameters
  Global soil update (edges belonging to best WD) (Eq. (23))
  Generate a new WDs population
  Distribute the WDs randomly
End

Pseudocode 2: The Evaporation, Condensation, and Precipitation pseudocode.
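The control flow of Pseudocode 1 can be sketched as follows. The stage bodies are placeholder stubs, and names such as `flow_step` and `temp_update` are illustrative rather than from the paper; the sketch only shows the cycle/iteration nesting and the temperature-driven trigger of the evaporation stage.

```python
import random

EVAPORATION_TEMP = 100.0  # assumed maximum temperature (Table 2)

def flow_step(drops):
    # Placeholder for the flow stage: each drop would choose nodes and
    # update velocity, soil, depth, and carrying soil (Eqs. (5)-(13)).
    return [random.random() for _ in drops]

def temp_update(temperature, improved):
    # Placeholder for Eqs. (14)-(17): the temperature rises faster when
    # the solutions stop improving, hastening evaporation.
    return temperature + (1.0 if improved else 10.0)

def hca(num_drops=10, cycles=3):
    drops = list(range(num_drops))
    best = float("inf")
    for _ in range(cycles):                     # one hydrological cycle per pass
        temperature = 50.0                      # initial temperature (Table 2)
        while temperature < EVAPORATION_TEMP:   # flow iterations within a cycle
            fitness = flow_step(drops)
            improved = min(fitness) < best
            best = min(best, min(fitness))
            temperature = temp_update(temperature, improved)
        # evaporation, condensation, and precipitation stages would run here
    return best

result = hca()
```

Because certain tasks run once per cycle rather than once per iteration, the two nested loops above are the structural feature that distinguishes HCA's control flow from a single flat iteration loop.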
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration, namely the movement of water in nature. However, they are inspired by different aspects of water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes, and these effects occur only when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry or drop soil, changing the topography of the ground by making some paths deeper and fading others; this represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value; the temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that WCA has no condensation stage, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information-sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, formalization of each stage, and solution construction differ between the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in relation to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA, the carrying soil is encoded with the solution quality: more carrying soil indicates a better solution. Finally, in HCA, three critical stages (evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup

In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed-graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer, there are ten nodes labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done easily using

X_value = Σ_{L=1}^{m} n × 10^(−L)  (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations
Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node chosen in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
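The mapping in Eq. (24) can be checked with a few lines; the `decode` helper is illustrative, not from the paper.

```python
def decode(digits):
    """Eq. (24): one digit per layer, weighted 0.1, 0.01, ... for L = 1..m."""
    return sum(n * 10 ** -(L + 1) for L, n in enumerate(digits))

value = decode([2, 7, 5, 6, 2, 4])   # the example path 2-7-5-6-2-4
```

For the example path from the text, `value` is 0.275624 (up to floating-point rounding).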
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions, all the variables may have the same domain range, whereas in others, variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

V_value = (UB − LB) × Value + LB  (25)

where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [−10, 10], then the real value will be 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be identified. This precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
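Equation (25) and the worked example translate directly; the `scale` helper is illustrative, not from the paper.

```python
def scale(value, lower, upper):
    """Eq. (25): map a decoded value in [0, 1) onto the variable's domain."""
    return (upper - lower) * value + lower

real = scale(0.658752, -10, 10)   # the worked example from the text
```

For the example, `real` is (10 − (−10)) × 0.658752 + (−10) = 3.17504.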
Figure 8: Graph representation of continuous variables.

4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the node with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | α = 2
Soil updating | PN = 0.99
Initial temperature | 50, β = 10
Maximum temperature | 100
water drop solution. Each water drop generates a solution for all the variables of the function. Therefore, a water drop's solution is composed of a set of visited nodes, determined by the number of variables and the precision of each variable:

Solutions = [ WD_1: s_1, s_2, ..., s_(N×P)
              WD_2: s_1, s_2, ..., s_(N×P)
              ...
              WD_k: s_1, s_2, ..., s_(N×P) ]   (26)

An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored by placing less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

V_new = 1 − (β × V_old)  (27)

where β is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
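Equation (27) is a one-liner; a minimal sketch, with a fixed β alongside the random version so the flip is easy to check by hand:

```python
import random

def mutate(v_old):
    """Eq. (27): V_new = 1 - (beta * V_old), with beta uniform in [0, 1]."""
    beta = random.random()
    return 1 - beta * v_old

# With beta fixed at 0.5, a large value (0.9) maps to a smaller one (0.55).
v_new = 1 - 0.5 * 0.9
```

For v_old in [0, 1], the result stays in a valid range, so no repair step is needed after mutation.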
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.

The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation; hence, taking the depth to equal the distance divided by the soil (see (8)) would not affect the outcome as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.

For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum of 500 iterations for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed. The symbol "*" represents the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested and was able to find the global minimum within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using

Success rate = (Number of successful runs / Total number of runs) × 100  (28)
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "Best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA gave the same "best" performance.

The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst values, average values, best values, and average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics (D: number of variables; f(x*): global optimum value).

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-in-Tray | -10 | 10 | 2 | -2.06261 | Many local minima
(3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima
(5) | Griewank | -600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima
(7) | Levy | -10 | 10 | 2 | 0 | Many local minima
(8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima
(14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [-1.5, 4] | [-3, 4] | 2 | -1.9133 | Plate-shaped
(21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [-3, 3] | [-2, 2] | 2 | -1.0316 | Valley-shaped
(24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops
(29) | Beale | -4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [-5, 10] | [0, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other
(34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock | -1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other
(40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | -4 | 5 | 4 | 0 | Other
(42) | Wood | -5 | 5 | 4 | 0 | Other
(43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA); "*" denotes the average number of iterations.

Number | HCA Worst | HCA Average | HCA Best | * | MHCA Worst | MHCA Average | MHCA Best | *
(1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35
(2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1
(3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4
(4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75
(7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8
(8) | 1.63E-05 | 4.70E-06 | 1.35E-31 | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25
(12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | 2.02E-04 | 323.75
(13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E-05 | 6.33E-06 | 0.00E+00 | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8
(21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | 4.28E-07 | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4
(24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | 4.10E-08 | 380.55
(25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55
(26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35
(27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6
(28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1
(29) | 9.25E-05 | 5.25E-05 | 3.41E-06 | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75
(30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341
(34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E-06 | 3.67E-06 | 6.63E-08 | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6
(38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45
(39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5
(40) | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806
(41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | 1.96E-03 | 840.85
(42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | 1.00E-08 | 694.4
(43) | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables (Hypersphere function).

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.

The success rate was lower for functions with more than three variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.

The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.

Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to entrapment in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, with the WCA results taken from [29]. The obtained results for both algorithms are displayed in Table 6. The symbol "*" represents the average number of iterations.

The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.

In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the accuracy of the HCA solutions for functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.

As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The P/W-values are reported in Table 8.

Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, with HCA reaching the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed by the Wilcoxon test and T-test on the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
18 Journal of Optimization
Table 6: Comparison between WCA and HCA in terms of results and iterations. For each test function, the table lists the dimension D, the bound, the best known result, and the worst, average, and best obtained results together with the average number of iterations for WCA and for HCA. The functions covered are De Jong, Goldstein & Price I and II, Branin, Martin & Gaddy, Rosenbrock (with two and four variables), Hypersphere (six variables), Shaffer, Six-Hump Camelback, Easton & Fenton, Wood, and the Powell quartic. The mean of the average iteration counts was 700.06 for WCA and 463.8 for HCA.
Table 7: Comparison between HCA and other algorithms in terms of the average number of iterations. For each test function, the table lists the dimension D, the bound B, and the average number of iterations for CACO-DE, ANTS, GA, BA, GEM, WCA, ER-WCA, and HCA over (1) De Jong, (2) Goldstein & Price I, (3) Branin, (4) Martin & Gaddy, (5a, 5b) Rosenbrock with two variables, (6) Rosenbrock with four variables, (7) Hypersphere with six variables, and (8) Shaffer.
Table 8: Comparison between P/W values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA   | 0.324033 | -0.9435 | 6 (at 0)  | No
ANTS versus HCA      | 0.014953 | -2.5205 | 0 (at 3)  | Yes
GA versus HCA        | 0.005598 | -2.2014 | 0 (at 0)  | Yes
BA versus HCA        | 0.188434 | -2.5205 | 0 (at 3)  | Yes
GEM versus HCA       | 0.333414 | -1.5403 | 7 (at 3)  | No
WCA versus HCA       | 0.379505 | -0.2962 | 20 (at 5) | No
ER-WCA versus HCA    | 0.832326 | -0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name     | D | CGA   | ECTS  | HCA    | MHCA
(1)    | Shubert           | 2 | 575   | -     | 368    | 391.45
(2)    | Bohachevsky       | 2 | 370   | -     | 264.9  | 288.25
(3)    | Zakharov          | 2 | 620   | 195   | 328.15 | 242.05
(4)    | Rosenbrock        | 2 | 960   | 480   | 350.55 | 282.55
(5)    | Easom             | 2 | 1504  | 1284  | 354.6  | 370.35
(6)    | Branin            | 2 | 620   | 245   | 379.9  | 393.75
(7)    | Goldstein-Price I | 2 | 410   | 231   | 345.8  | 409.9
(8)    | Hartmann          | 3 | 582   | 548   | 392.05 | 327.5
(9)    | Sphere            | 3 | 750   | 338   | 371.3  | 330.45
Mean   |                   |   | 710.1 | 474.4 | 350.6  | 337.4
Table 10: Comparison between P/W values for CGA, ECTS, HCA, and MHCA.

Algorithm pair    | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA    | 0.012635 | -2.6656 | 0 (at 5)  | Yes
CGA versus MHCA   | 0.012361 | -2.6656 | 0 (at 5)  | Yes
ECTS versus HCA   | 0.456634 | -0.3381 | 12 (at 2) | No
ECTS versus MHCA  | 0.369119 | -0.8452 | 9 (at 2)  | No
HCA versus MHCA   | 0.470531 | -0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red "Local" line represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
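The Ackley function being minimized here follows the formulation given in the appendix (Table 11), with a = 20, b = 0.2, and c = 2π; a minimal sketch:

```python
import math

# The Ackley benchmark function (global minimum f(0, ..., 0) = 0), using the
# parameter values a = 20, b = 0.2, c = 2*pi listed in Table 11.
def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    d = len(x)
    sum_sq = sum(xi * xi for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + a + math.e)

print(round(ackley([0.0, 0.0]), 10))  # 0.0 at the global minimum
```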
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and its global minimum value is equal to -1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
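A minimal sketch of the Easom function as defined in Table 11, confirming the global minimum of -1 at (π, π):

```python
import math

# The Easom function: flat almost everywhere, with a narrow well reaching
# the global minimum f(pi, pi) = -1.
def easom(x1, x2):
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(easom(math.pi, math.pi))  # -1.0
print(easom(0.0, 0.0))          # a value near 0: the flat region gives no cue
```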
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the size of the solution search space.
5 Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function, convergence at iteration 383; function value plotted against the number of iterations for the global-best and local-best solutions).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480; function value plotted against the number of iterations for the global-best and local-best solutions).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475; function value plotted against the number of iterations for the global-best and local-best solutions).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations for the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a\exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, $c = 2\pi$

Cross-In-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(3\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \ldots, d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(|x_1^2 - x_2^2|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i + 1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i + 1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]$

Goldstein-Price II: $f(x) = \exp\left(\tfrac{1}{2}(x_1^2 + x_2^2 - 25)^2\right) + \sin^4(4x_1 - 3x_2) + \tfrac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \tfrac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$

Gramacy & Lee (2008): $f(x) = x_1\exp\left(-x_1^2 - x_2^2\right)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \frac{1}{10}\left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3} A_{ij}(x_j - p_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
Figure 13: Convergence of HCA on the Hypersphere function with six variables (convergence at iteration 919; global-best solution value plotted against the iteration number).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Figure 1: Representation of a continuous problem in the IWD algorithm (nodes 1, 2, ..., i, ..., M × P, connected by paired directed edges valued 0 and 1).
for each piece of shrapnel, and the effect of each piece of shrapnel is calculated. The highest loss effect indicates valuable objects in that area. Another grenade is then thrown into the area to cause greater loss, and so on. Overall, the GEM was able to find the global minimum faster than other algorithms in seven out of eight benchmarked tests. However, the algorithm requires the maintenance of a nonintuitive territory radius, preventing shrapnel pieces from coming too close to each other and forcing shrapnel to spread uniformly over the search space. Also required is an explosion range for the shrapnel. These control parameters work well with problem spaces containing many local optima, so that global optima can be found after suitable exploration. However, the relationship between these control parameters and problem spaces of unknown topology is not clear.
2.1 Water-Based Algorithms

Algorithms inspired by water flow have become increasingly common for tackling optimization problems. Two of the best-known examples are intelligent water drops (IWD) and the water cycle algorithm (WCA). These algorithms have been shown to be successful in many applications, including various COPs. The IWD algorithm is a swarm-based algorithm inspired by the ground flow of water in nature and the actions and reactions that occur during that process [26]. In IWD, a water drop has two properties: movement velocity and the amount of soil it carries. These properties are changed and updated for each water drop throughout its movement from one point to another. The velocity of a water drop is affected and determined only by the amount of soil between two points, while the amount of soil it carries is equal to the amount of soil removed from a path [26].
The IWD has also been used to solve COPs. Shah-Hosseini [27] utilized binary variable values in the IWD and created a directed graph of (M × P) nodes, in which M identifies the number of variables and P identifies the precision (the number of bits for each variable), which was assumed to be 32. The graph comprises 2 × (M × P) directed edges connecting the nodes, with each edge having a value of zero or one, as depicted in Figure 1. One of the advantages of this representation is that it is intuitively consistent with the natural idea of water flowing in a specific direction.
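The decoding step implied by this representation can be sketched as follows; the function name and the bounds are illustrative, not taken from [27]:

```python
# Sketch of decoding the binary IWD representation described above: a water
# drop's path visits M * P binary edges (P bits per variable), which are
# grouped per variable and scaled to real values. The bounds are illustrative.

def decode(path_bits, m, p, lo, hi):
    """Map an (M * P)-bit path to M real values scaled into [lo, hi]."""
    assert len(path_bits) == m * p
    values = []
    for v in range(m):
        bits = path_bits[v * p:(v + 1) * p]
        as_int = int("".join(str(b) for b in bits), 2)
        # Scale the unsigned integer onto the variable's continuous range
        values.append(lo + (hi - lo) * as_int / (2 ** p - 1))
    return values

# Two variables, 4 bits each, scaled into [-5, 5]
print(decode([1, 1, 1, 1, 0, 0, 0, 0], m=2, p=4, lo=-5.0, hi=5.0))  # [5.0, -5.0]
```

With P = 32 as assumed in [27], each variable is resolved to one part in 2^32 - 1 of its range.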
The IWD algorithm was also augmented with a mutation-based local search to enhance its solution quality. In this process, an edge is selected at random from a solution and replaced by another edge, which is kept if it improves the fitness value; the process is repeated 100 times for each solution. This mutation is applied to all the generated solutions, and then the soil is updated on the edges belonging to the best solution. However, the variables' values have to be converted from binary to real numbers to be evaluated. Moreover, once mutation (but not crossover) is introduced into IWD, the question arises as to what advantages IWD has over standard evolutionary algorithms that use simpler notations for representing continuous numbers. Detailed differences between IWD and our HCA are described in Section 3.9.
Various researchers have applied the WCA to COPs [28, 29]. The WCA is also based on the flow of ground water through rivers and streams to the sea. The algorithm constructs a solution using a population of streams, where each stream is a candidate solution. It initially generates random solutions for the problem by distributing the streams into different positions according to the problem's dimensions. The best stream is considered a sea, and a specific number of streams are considered rivers. Then the streams are made to flow towards the positions of the rivers and sea. The position of the sea indicates the global-best solution, whereas those of the rivers indicate the local-best solution points. In each iteration, the position of each stream may be swapped with any of the river positions or (counterintuitively) the sea position, according to the fitness of that stream. Therefore, if a stream solution is better than that of a river, that stream becomes a river and the corresponding river becomes a stream. The same process occurs between rivers and the sea. Evaporation occurs in the rivers and streams when the distance between a river or stream and the sea is very small (close to zero). Following evaporation, new streams are generated with different positions, in a process that can be described as rain. These processes help the algorithm to escape from local optima.
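The stream-river role exchange described above can be sketched as a single rule; the names and structure here are illustrative, not the authors' implementation:

```python
# Minimal sketch of the WCA role-swap rule described above: if a stream finds
# a better (lower) fitness than the river it flows towards, the two exchange
# roles, so the better point guides the search from then on.

def swap_roles(river, stream, fitness):
    """Return (river, stream) after the WCA exchange rule for minimization."""
    if fitness(stream) < fitness(river):
        return stream, river  # the stream becomes the river, and vice versa
    return river, stream

f = lambda pos: sum(x * x for x in pos)      # e.g., the Sphere function
river, stream = [2.0, 2.0], [0.5, -0.5]
river, stream = swap_roles(river, stream, f)
print(river)  # [0.5, -0.5]: the better point now plays the river role
```

The same rule applied between the best river and the sea is what lets the sea position track the global-best solution.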
WCA treats streams, rivers, and the sea as a set of moving points in a multidimensional space and does not consider stream or river formation (movements of water and soils). Therefore, the overall structure of the WCA is quite similar to that of the PSO algorithm. For instance, the PSO algorithm has Gbest and Pbest points that indicate the global and local solutions, with the Gbest point guiding the Pbest points to converge to it. In a similar manner, the WCA has a number of rivers that represent the local-optimal solutions, and a sea position that represents the global-optimal solution; the sea is used as a guide for the streams and rivers. Moreover, in PSO a particle at the Pbest point updates its position towards the Gbest point. Analogously, the WCA exchanges the positions of rivers with that of the sea when the fitness of a river is better. The main difference is the consideration of the evaporation and raining stages, which help the algorithm to escape from local optima.
An improved version of WCA, called dual-system WCA (DSWCA), has been presented for solving constrained engineering optimization problems [30]. The DSWCA improvements involved adding two cycles (outer and inner). The outer cycle is in accordance with the WCA and is designed to perform exploration. The inner cycle, which is based on the ocean
Journal of Optimization 5
cycle, was designed as an exploitation process. A confluence-between-rivers process is also included to improve convergence speed. These improvements aim to help the original WCA avoid falling into local optimal solutions, and the DSWCA was able to provide better results than WCA. In a further extension of WCA, Heidari et al. proposed a chaotic WCA (CWCA) that incorporates thirteen chaotic patterns into the WCA movement process to improve WCA's performance and deal with the premature convergence problem [31]. The experimental results demonstrated improvement in controlling premature convergence. However, WCA and its extensions introduce many additional features, many of which are not compatible with nature-inspiration, in order to improve performance and make WCA competitive with non-water-based algorithms. Detailed differences between WCA and our HCA are described in Section 3.9.
The review of previous approaches shows that nature-inspired swarm approaches tend to introduce evolutionary concepts of mutation and crossover for sharing information, to adopt nonplausible global data structures to prevent convergence to local optima, or to add more centralized control parameters to preserve the balance between exploration and exploitation in increasingly difficult problem domains, thereby compromising self-organization. When such approaches are modified to deal with COPs, complex and unnatural representations of numbers may also be required; these add to the overheads of the algorithm, as well as leading to what may appear to be "forced" and unnatural ways of dealing with the complexities of a continuous problem.
In swarm algorithms, a number of cooperative homogeneous entities explore and interact with the environment to achieve a goal, which is usually to find the global-optimal solution to a problem through exploration and exploitation. Cooperation can be accomplished by some form of communication that allows the swarm members to share exploration and exploitation information to produce high-quality solutions [32, 33]. Exploration aims to acquire as much information as possible about promising solutions, while exploitation aims to improve such promising solutions. Controlling the balance between exploration and exploitation is usually considered critical when searching for more refined as well as more diverse solutions. Also important is the amount of exploration and exploitation information to be shared, and when.
Information sharing in nature-inspired algorithms usually includes metrics for evaluating the usefulness of information and mechanisms for how it should be shared. In particular, these metrics and mechanisms should not contain more knowledge or require more intelligence than can be plausibly assigned to the swarm entities, where the emphasis is on the emergence of complex behavior from relatively simple entities interacting in nonintelligent ways with each other. Information sharing can be done either randomly or nonrandomly among entities. Different approaches are used to share information, such as direct or indirect interaction. Two sharing models are one-to-one, in which explicit interaction occurs between entities, and many-to-many (broadcasting), in which interaction occurs indirectly between the entities through the environment [34].
Usually, either direct or indirect interaction between entities is employed in an algorithm for information sharing, and the use of both types in the same algorithm is often neglected. For instance, the ACO algorithm uses indirect communication for information sharing, where ants lay amounts of pheromone on paths depending on the usefulness of what they have explored. The more promising the path, the more pheromone is added, leading other ants to exploit this path further. Pheromone is not directed at any other ant in particular. In the artificial bee colony (ABC) algorithm, bees share information directly (specifically, the direction and distance to flowers) in a kind of waggle dance [35]. The information received by other bees depends on which bee they observe, not on the information gathered from all bees. In a GA, the information exchange occurs directly in the form of crossover operations between selected members (chromosomes and genes) of the population. The quality of information shared depends on the members chosen, and there is no overall store of information available to all members. In PSO, on the other hand, the particles have their own information as well as access to global information to guide their exploitation. This can lead to competitive and cooperative optimization techniques [36]. However, no attempt is made in standard PSO to share information possessed by individual particles. This lack of individual-to-individual communication can lead to early convergence of the entire population of particles to a nonoptimal solution. Selective information sharing between individual particles will not always avoid this narrow exploitation problem, but if undertaken properly it could allow the particles to find alternative and more diverse solution spaces where the global optimum lies.
As can be seen from the above discussion, the distinctions between communication (direct and indirect) and information sharing (random or nonrandom) can become blurred. In HCA, we utilize both direct and indirect communication to share information among selected members of the whole population. Direct communication occurs in the condensation stage of the water cycle, where some water drops collide with each other after they evaporate into the cloud. Indirect communication, on the other hand, occurs between the water drops and the environment via the soil and path depth. Utilizing information sharing helps to diversify the search space and improve the overall performance through better exploitation. In particular, we show how HCA with direct and indirect communication suits continuous problems, where individual particles share useful information directly when searching a continuous space.
Furthermore, in some algorithms, indirect communication is used not just to find a possible solution but also to reinforce an already-found promising solution. Such reinforcement may lead the algorithm to get stuck in the same solution, which is known as stagnation behavior [37]. Such reinforcement can also cause premature convergence in some problems. An important feature used in HCA to avoid this problem is the path depth. The depths of paths are constantly changing, and deep paths have less chance of being chosen compared with shallow paths. More details about this feature are provided in Section 3.3.
In summary, this paper aims to introduce a novel nature-inspired approach for dealing with COPs, which also introduces a continuous number representation that is natural for the algorithm. The cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through direct and indirect communication between particles, feedback from the environment, and internal self-evaluation. Finally, our HCA addresses some of the weaknesses of other similar algorithms: HCA is inspired by the full water cycle, not parts of it, as exemplified by IWD and WCA.
3. The Hydrological Cycle Algorithm
The water cycle, also known as the hydrologic cycle, describes the continuous movement of water in nature [38]. It is renewable because water particles are constantly circulating from the land to the sky and vice versa; therefore, the amount of water remains constant. Water drops move from one place to another by natural physical processes, such as evaporation, precipitation, condensation, and runoff. In these processes, the water passes through different states.
3.1. Inspiration. The water cycle starts when surface water gets heated by the sun, causing water from different sources to change into vapor and evaporate into the air. Then the water vapor cools and condenses as it rises, becoming drops. These drops combine and collect with each other and eventually become so saturated and dense that they fall back because of the force of gravity, in a process called precipitation. The resulting water flows on the surface of the Earth into ponds, lakes, or oceans, where it again evaporates back into the atmosphere. Then the entire cycle is repeated.
In the flow (runoff) stage, the water drops start moving from one location to another based on Earth's gravity and the ground topography. When the water is scattered, it chooses a path with less soil and fewer obstacles. Assuming no obstacles exist, water drops will take the shortest path towards the center of the Earth (i.e., oceans), owing to the force of gravity. Water drops have force because of this gravity, and this force causes changes in motion depending on the topology. As the water flows in rivers and streams, it picks up sediments and transports them away from their original location. These processes are known as erosion and deposition. They help to create the topography of the ground by making new paths with more soil removal, or by the fading of other paths as they become less likely to be chosen owing to too much soil deposition. These processes are highly dependent on water velocity. For instance, a river is continually picking up and dropping off sediments from one point to another. When the river flow is fast, soil particles are picked up, and the opposite happens when the river flow is slow. Moreover, there is a strong relationship between the water velocity and the suspension load (the amount of dissolved soil in the water) that it currently carries. The water velocity is higher when only a small amount of soil is being carried and lower when larger amounts are carried.
In addition, the term "flow rate" or "discharge" can be defined as the volume of water in a river passing a defined point over a specific period. Mathematically, the flow rate is calculated based on the following equation [39]:

$$Q = V \times A \Longrightarrow [Q = V \times (W \times D)] \quad (3)$$
where $Q$ is the flow rate, $A$ is the cross-sectional area, $V$ is the velocity, $W$ is the width, and $D$ is the depth. The cross-sectional area is calculated by multiplying the depth by the width of the stream. The velocity of the flowing water is defined as the quantity of discharge divided by the cross-sectional area; therefore, there is an inverse relationship between the velocity and the cross-sectional area [40]. The velocity increases when the cross-sectional area is small, and vice versa. If we assume that the river has a constant flow rate (velocity × cross-sectional area = constant) and that the width of the river is fixed, then we can conclude that the velocity is inversely proportional to the depth of the river, in which case the velocity decreases in deep water, and vice versa. Therefore, the amount of soil deposited increases as the velocity decreases. This explains why less soil is taken from deep water and more soil is taken from shallow water: a deep river will have water moving more slowly than a shallow river.
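The inverse relationship between velocity and depth at a constant flow rate can be checked by rearranging Eq. (3) for velocity; the numbers below are illustrative only, not taken from the paper:

```python
def velocity(flow_rate, width, depth):
    """Velocity from discharge and cross-section, rearranging Eq. (3):
    Q = V * (W * D)  ->  V = Q / (W * D)."""
    return flow_rate / (width * depth)

# At a constant flow rate and fixed width, doubling the depth halves the velocity.
v_shallow = velocity(flow_rate=20.0, width=4.0, depth=0.5)
v_deep = velocity(flow_rate=20.0, width=4.0, depth=1.0)
```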
In addition to river depth and the force of gravity, there are other factors that affect the flow velocity of water drops and cause them to accelerate or decelerate. These factors include obstructions, the amount of soil existing in the path, the topography of the land, variations in channel cross-sectional area, and the amount of dissolved soil being carried (the suspension load).
Water evaporation increases with temperature, with maximum evaporation at 100°C (the boiling point). Conversely, condensation increases as the water vapor cools towards 0°C. The evaporation and condensation stages play important roles in the water cycle. Evaporation is necessary to ensure that the system remains in balance; in addition, the evaporation process helps to decrease the temperature and keeps the air moist. Condensation is a very important process, as it completes the water cycle. When water vapor condenses, it releases into the environment the same amount of thermal energy that was needed to make it a vapor. However, in nature it is difficult to measure the precise evaporation rate owing to the existence of different sources of evaporation; hence, estimation techniques are required [38].
The condensation process starts when the water vapor rises into the sky; then, as the temperature decreases in the higher layers of the atmosphere, the particles with lower temperature have more chance to condense [41]. Figure 2(a) illustrates the situation where there are no attractions (or connections) between the particles at high temperature. As the temperature decreases, the particles collide and agglomerate to form small clusters, leading to clouds, as shown in Figure 2(b).
An interesting phenomenon, in which some of the large water drops may eliminate other smaller water drops, occurs during the condensation process, as some of the water drops, known as collectors, fall from a higher layer to a lower layer in the atmosphere (in the cloud). Figure 3 depicts the action of a water drop falling and colliding with smaller
(a) Hot water. (b) Cold water.

Figure 2: Illustration of the effect of temperature on water drops.
Figure 3: A collector water drop in action.
water drops. Consequently, the water drop grows as a result of coalescence with the other water drops [42].
Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground. Of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect the water drops flowing through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of other key concepts taken from the full hydrological process.
Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: Flow (Runoff), Evaporation, Condensation, and Precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

$$\text{HCA} = \{\text{WD}_1, \text{WD}_2, \text{WD}_3, \ldots, \text{WD}_n\}, \quad \text{where } n \ge 1. \quad (4)$$
The characteristics associated with each water drop are as follows:

(i) $V^{\text{WD}}$: the velocity of the water drop.
(ii) $\text{Soil}^{\text{WD}}$: the amount of soil the water drop carries.
(iii) $\psi^{\text{WD}}$: the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph $G = (N, E)$, where $N$ is the set of nodes and $E$ is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph ($G$) according to various rules via the edges ($E$), searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.
The initial amount of soil is the same for all the edges. The depth measures the distance from the water surface to the riverbed, and it can be calculated by dividing the length of the edge by the amount of soil. Therefore, the depth varies and depends on the amount of soil that exists on the path and its length. The depth of a path increases when more soil is removed over the same length. A deep path can be interpreted as either the path being lengthy or there being less soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.
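The edge-depth definition (length divided by soil, formalized later as Eq. (8)) can be sketched directly; the values below reproduce the relationships illustrated in Figures 4 and 5:

```python
def depth(length, soil):
    """Edge depth in the HCA: the edge length divided by the soil on it (Eq. (8))."""
    return length / soil

# Figure 4: same soil, longer edge -> deeper path.
# Figure 5: same length, more soil -> shallower path.
```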
The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution for the problem by continuously moving between the nodes. Each iteration consists of specific steps (which will be explained below). A cycle, on the other hand, represents a varied number of iterations. A cycle is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
3.3. Flow Stage (Runoff). This stage represents the construction stage, in which the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions, constructing various solutions. Each water drop constructs a
Figure 4: Depth increases as length increases for the same amount of soil (Length = 10, Soil = 500 → Depth = 10/500 = 0.02; Length = 20, Soil = 500 → Depth = 20/500 = 0.04).
Figure 5: Depth increases when the amount of soil is reduced over the same length (Length = 10, Soil = 500 → Depth = 10/500 = 0.02; Length = 10, Soil = 1000 → Depth = 10/1000 = 0.01).
solution incrementally by moving from one point to another. Whenever a water drop wants to move towards a new node, it has to choose between different numbers of nodes (various branches). It calculates the probability of all the unvisited nodes and chooses the highest-probability node, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit; in HCA, the node with the highest probability will be selected. The probability of the node is calculated using

$$P_i^{\text{WD}}(j) = \frac{f(\text{Soil}(i,j))^2 \times g(\text{Depth}(i,j))}{\sum_{k \notin V_c(\text{WD})} \left( f(\text{Soil}(i,k))^2 \times g(\text{Depth}(i,k)) \right)} \quad (5)$$

where $P_i^{\text{WD}}(j)$ is the probability of choosing node $j$ from node $i$, the sum runs over all nodes $k$ not yet visited by the water drop, and $f(\text{Soil}(i,j))$ is equal to the inverse of the soil between $i$ and $j$, calculated using

$$f(\text{Soil}(i,j)) = \frac{1}{\varepsilon + \text{Soil}(i,j)} \quad (6)$$

where $\varepsilon = 0.01$ is a small value that is used to prevent division by zero. The second factor of the transition rule is the inverse of the depth, which is calculated based on

$$g(\text{Depth}(i,j)) = \frac{1}{\text{Depth}(i,j)} \quad (7)$$

where $\text{Depth}(i,j)$ is the depth between nodes $i$ and $j$, calculated by dividing the length of the path by the amount of soil:

$$\text{Depth}(i,j) = \frac{\text{Length}(i,j)}{\text{Soil}(i,j)} \quad (8)$$
The depth value can be normalized to be in the range [1–100], as it might be very small. The depth of a path is constantly changing because it is influenced by the amount of existing
Figure 6: The relationship between soil and depth over the same length.
soil, where the depth is inversely proportional to the amount of existing soil in a path of fixed length. Water drops will choose paths with less depth over other paths. This enables the exploration of new paths and diversifies the generated solutions. Assume the following scenario (depicted in Figure 6), where water drops have to choose between node $A$ or node $B$ after node $S$. Initially, both edges have the same length and the same amount of soil (line thickness represents the soil amount); consequently, the depth values of the two edges will be the same. After some water drops choose node $A$, edge (S-A) has less soil and is deeper as a result. According to the new state transition rule, the next water drops will be forced to choose node $B$, because edge (S-B) will be shallower than (S-A). This technique provides a way to avoid stagnation and explore the search space efficiently.
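As an illustrative sketch (not the authors' implementation), the state transition rule of Eqs. (5)-(8) can be coded as follows; the candidate edges are supplied as a hypothetical dictionary mapping each unvisited node to the soil and length of its edge:

```python
EPSILON = 0.01  # epsilon from Eq. (6); prevents division by zero

def f_soil(soil):
    """Eq. (6): inverse of the soil on an edge."""
    return 1.0 / (EPSILON + soil)

def g_depth(length, soil):
    """Eq. (7) with Eq. (8) substituted: the inverse of Depth = Length / Soil."""
    return soil / length

def choose_next_node(candidates):
    """State transition rule (Eq. (5)): score every unvisited neighbour by
    f(Soil)^2 * g(Depth), normalize the scores to probabilities, and return
    the highest-probability node.  `candidates` maps node -> (soil, length)."""
    scores = {node: f_soil(soil) ** 2 * g_depth(length, soil)
              for node, (soil, length) in candidates.items()}
    total = sum(scores.values())
    probs = {node: s / total for node, s in scores.items()}
    return max(probs, key=probs.get), probs

# Two candidate edges carrying equal soil: the shorter (hence shallower) edge wins.
best, probs = choose_next_node({"A": (500.0, 40.0), "B": (500.0, 10.0)})
```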
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity ($V$). The velocity of a water drop might increase or decrease while it is moving, based on the factors mentioned above (the amount of soil existing on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time $(t+1)$ can be calculated using

$$V_{t+1}^{\text{WD}} = \left[ K \times V_t^{\text{WD}} \right] + \alpha \left( \frac{V^{\text{WD}}}{\text{Soil}(i,j)} \right) + \sqrt{\frac{V^{\text{WD}}}{\text{Soil}^{\text{WD}}}} + \left( \frac{100}{\psi^{\text{WD}}} \right) + \sqrt{\frac{V^{\text{WD}}}{\text{Depth}(i,j)}} \quad (9)$$
Journal of Optimization 9
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where $V_t^{\text{WD}}$ is the current water drop velocity and $K$ is a uniformly distributed random number in $[0, 1]$ that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop's velocity to decrease; owing to difficulties in measuring the exact effect of these factors, some randomness is incorporated into the velocity.

The second term of the expression in (9) reflects the relationship between the amount of soil existing on a path and the velocity of a water drop: the velocity of a water drop increases when it traverses paths with less soil. Alpha ($\alpha$) is a relative influence coefficient that emphasizes this term in the velocity update equation. The third term represents the relationship between the amount of soil a water drop is currently carrying and its velocity: water drop velocity increases with a decreasing amount of carried soil. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term indicates that a water drop will be slower in a deep path than in a shallow path.
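A minimal Python sketch of the velocity update in Eq. (9) follows; the argument values and the default for alpha are illustrative assumptions, not values prescribed by the paper:

```python
import math
import random

def update_velocity(v, path_soil, carried_soil, fitness, depth, alpha=0.1):
    """Velocity update of Eq. (9).  K is a uniformly distributed roughness
    coefficient in [0, 1]; alpha weights the influence of the soil on the
    current path (its default here is an illustrative assumption)."""
    k = random.random()                      # roughness coefficient K
    return (k * v                            # randomly damped current velocity
            + alpha * (v / path_soil)        # less soil on the path -> faster
            + math.sqrt(v / carried_soil)    # lightly loaded drops speed up
            + 100.0 / fitness                # fitter drops move faster
            + math.sqrt(v / depth))          # deep paths slow the drop down
```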
3.3.2. Soil Update. Next, the amount of soil existing on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by

$$\text{Soil}(i,j) = \begin{cases} [PN \times \text{Soil}(i,j)] - \Delta\text{Soil}(i,j) - \sqrt{\dfrac{1}{\text{Depth}(i,j)}}, & \text{if } V^{\text{WD}} \ge \text{Avg(all } V^{\text{WD}}\text{s)} \text{ (erosion)} \\ [PN \times \text{Soil}(i,j)] + \Delta\text{Soil}(i,j) + \sqrt{\dfrac{1}{\text{Depth}(i,j)}}, & \text{otherwise (deposition)} \end{cases} \quad (10)$$
where $PN$ represents a coefficient (i.e., a sediment transport rate, or gradation, coefficient) that may affect the reduction in the amount of soil. Soil can be removed only if the current water drop's velocity is greater than the average velocity of all the water drops; otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, and usually transfers it from fast to slow parts of the path. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in local optimal solutions. The amount of soil existing between node $i$ and node $j$ is calculated and changed using

$$\Delta\text{Soil}(i,j) = \frac{1}{\text{time}_{i,j}^{\text{WD}}} \quad (11)$$

such that

$$\text{time}_{i,j}^{\text{WD}} = \frac{\text{Distance}(i,j)}{V_{t+1}^{\text{WD}}} \quad (12)$$
Equation (11) shows that the amount of soil removed is inversely proportional to time. Based on the second law of motion, time is equal to distance divided by velocity; therefore, time is inversely proportional to velocity and proportional to path length. Furthermore, the amount of soil removed or added is inversely proportional to the depth of the path, because shallow soils are easier to remove than deep soils. Therefore, the amount of soil removed decreases as the path depth increases. This factor facilitates the exploration of new paths, because less soil is removed from the deep paths, as they have been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path is updated using (8) whenever the amount of soil existing on the path changes.
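The erosion/deposition rule of Eqs. (10)-(12) can be sketched as below; the default of 0.99 for the PN coefficient is an illustrative assumption:

```python
import math

def update_soil(soil, length, velocity, avg_velocity, pn=0.99):
    """Soil update of Eq. (10): erosion for fast drops, deposition for slow
    ones.  Delta soil follows Eqs. (11)-(12); the sediment transport
    coefficient PN defaults to an illustrative 0.99."""
    depth = length / soil            # Eq. (8)
    time_ij = length / velocity      # Eq. (12): time = distance / velocity
    delta = 1.0 / time_ij            # Eq. (11)
    if velocity >= avg_velocity:     # erosion: a fast drop removes soil
        new_soil = pn * soil - delta - math.sqrt(1.0 / depth)
    else:                            # deposition: a slow drop adds soil
        new_soil = pn * soil + delta + math.sqrt(1.0 / depth)
    return new_soil, length / new_soil   # updated soil and refreshed depth
```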
3.3.3. Soil Transportation Update. In the HCA, we consider the fact that the amount of soil a water drop carries reflects its solution quality. This is done by associating the quality of the water drop's solution with the amount of soil it carries. Figure 7 illustrates the fact that a water drop carrying more soil has a better solution.

To simulate this association, the amount of soil the water drop is carrying is updated with a portion of the soil removed from the path, divided by the last solution quality found by that water drop. Therefore, a water drop with a better solution will carry more soil. This can be expressed as follows:

$$\text{Soil}^{\text{WD}} = \text{Soil}^{\text{WD}} + \frac{\Delta\text{Soil}(i,j)}{\psi^{\text{WD}}} \quad (13)$$
The idea behind this step is to let a water drop preserve its previous fitness value. Later, the carried soil value is used in updating the velocity of the water drops.
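In code, Eq. (13) is a one-liner; assuming, as the text implies, that a smaller objective value $\psi$ denotes a better solution in a minimization setting, the better drop receives the larger soil increment:

```python
def update_carried_soil(carried_soil, delta_soil, quality):
    """Eq. (13): add to the drop's load the soil removed from the path,
    divided by the drop's solution quality psi.  For minimization, a smaller
    psi (a better solution) yields a larger increment."""
    return carried_soil + delta_soil / quality
```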
3.3.4. Temperature Update. The final step that occurs at this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

$$\text{Temp}(t+1) = \text{Temp}(t) + \Delta\text{Temp} \quad (14)$$

where

$$\Delta\text{Temp} = \begin{cases} \beta \times \left( \dfrac{\text{Temp}(t)}{\Delta D} \right), & \Delta D > 0 \\ \dfrac{\text{Temp}(t)}{10}, & \text{otherwise} \end{cases} \quad (15)$$
Figure 7: Relationship between the amount of carried soil and solution quality.
and where the coefficient $\beta$ is determined based on the problem. The difference ($\Delta D$) is calculated based on

$$\Delta D = \text{MaxValue} - \text{MinValue} \quad (16)$$

such that

$$\begin{aligned} \text{MaxValue} &= \max[\text{Solutions Quality(WDs)}], \\ \text{MinValue} &= \min[\text{Solutions Quality(WDs)}]. \end{aligned} \quad (17)$$
According to (16), the increase in temperature is governed by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and as a result the temperature will increase more. This means that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a number of times until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
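The temperature schedule of Eqs. (14)-(17) can be sketched as follows; since the paper states that beta is problem-dependent, its default here is an arbitrary illustrative value:

```python
def update_temperature(temp, qualities, beta=10.0):
    """Temperature schedule of Eqs. (14)-(17).  `qualities` holds the solution
    qualities of all water drops in the current iteration; the default beta
    of 10.0 is only illustrative."""
    delta_d = max(qualities) - min(qualities)   # Eqs. (16)-(17)
    if delta_d > 0:
        delta_temp = beta * (temp / delta_d)    # Eq. (15): small spread -> big rise
    else:
        delta_temp = temp / 10.0
    return temp + delta_temp                    # Eq. (14)
```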
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

$$\text{ER} = \text{Random}(1, N) \quad (18)$$

where ER is the evaporation rate and $N$ is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
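One plausible realization of the evaporation stage (Eq. (18) plus roulette-wheel selection) is sketched below; note that `random.choices` samples with replacement, which is one possible reading of the selection step, and larger weights are assumed to mean fitter drops (for minimization, pass inverted objective values):

```python
import random

def evaporate(drops, qualities):
    """Evaporation stage: draw the evaporation rate ER uniformly from [1, N]
    (Eq. (18)), then pick ER drops by roulette-wheel selection weighted by
    solution quality."""
    er = random.randint(1, len(drops))                     # Eq. (18)
    return random.choices(drops, weights=qualities, k=er)  # roulette wheel
```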
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off each other. These operations are useful as they allow the water drops to be in direct physical contact with each other; this stage represents the collective and cooperative behavior of all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been shown to improve solution quality significantly when applied in other algorithms [37].
The condensation stage is considered to be a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that this will reduce the number of iterations needed and help to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where $n$ represents the evaporation rate (the number of evaporated water drops):

$$\text{EW} = \{\text{WD}_1, \text{WD}_2, \ldots, \text{WD}_n\} \quad (19)$$
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, determining which operation will take place depends on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process will occur: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

$$\text{OP}(\text{WD}_1, \text{WD}_2) = \begin{cases} \text{Bounce}(\text{WD}_1, \text{WD}_2), & \text{Similarity} < 50\% \\ \text{Merge}(\text{WD}_1, \text{WD}_2), & \text{Similarity} \ge 50\% \end{cases} \quad (20)$$
We use the Hamming distance [43], which counts the number of positions at which two series of the same length differ, to obtain a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other one in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

$$\text{WD}_1^{\text{Vel}} = \text{WD}_1^{\text{Vel}} + \text{WD}_2^{\text{Vel}} \quad (21)$$
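The similarity test of Eq. (20) and the merge rule of Eq. (21) can be sketched as follows; representing each drop as a plain dictionary with `solution` and `velocity` keys is an illustrative choice, not the paper's representation:

```python
def hamming(a, b):
    """Number of positions at which two equal-length solutions differ."""
    return sum(x != y for x, y in zip(a, b))

def collide(wd1, wd2):
    """Condensation collision (Eq. (20)): merge when the two solutions agree
    on at least 50% of their positions, otherwise bounce."""
    n = len(wd1["solution"])
    similarity = (n - hamming(wd1["solution"], wd2["solution"])) / n
    if similarity >= 0.5:
        wd1["velocity"] += wd2["velocity"]   # Eq. (21): the collector absorbs
        return "merge"                       # the eliminated drop's velocity
    return "bounce"

assert hamming("12468", "13458") == 2   # the example given in the text
```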
On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them to choose, at the flow stage, a node with a better contribution from among all the possible nodes. The information consists of the weight of each node, where the weight measures a node's occurrence within a solution divided by the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point. With this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

$$\text{Weight(node)} = \frac{\text{Node}_{\text{occurrence}}}{\text{WD}_{\text{sol}}} \quad (22)$$
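Eq. (22) leaves the aggregation across drops implicit; one plausible reading, summing each node occurrence weighted by the inverse of the owning drop's solution quality, can be sketched as:

```python
from collections import Counter

def node_weights(solutions, qualities):
    """Eq. (22): weight each node occurrence by the inverse of the owning
    drop's solution quality.  Summing the per-drop contributions across all
    bounced drops is an assumption made for this sketch."""
    weights = Counter()
    for solution, quality in zip(solutions, qualities):
        for node in solution:
            weights[node] += 1.0 / quality
    return dict(weights)
```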
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage, the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

$$\text{Soil}(i,j) = 0.9 \times \text{Soil}(i,j), \quad \forall (i,j) \in \text{Best}_{\text{WD}} \quad (23)$$

The idea behind this is to favor these edges over the other edges in the next cycle.
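The reinforcement step of Eq. (23) is straightforward to sketch, assuming the soil values are stored per edge in a dictionary:

```python
def reinforce_best_path(soil, best_edges):
    """Precipitation-stage reinforcement (Eq. (23)): cut the soil on every
    edge of the best drop's solution by 10%, making those edges more
    attractive next cycle (less soil means a higher f(Soil) in Eq. (6))."""
    for edge in best_edges:
        soil[edge] *= 0.9
    return soil
```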
3.7. Summarization of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:
(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, to reduce the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.
(2) The flow of the water drops is controlled by the path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This idea supports competition between water drops and prevents one drop from sweeping away the rest, because a high-velocity water
HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
        Cycle = Cycle + 1
        // Flow stage
        Repeat
            (a) Iteration = Iteration + 1
            (b) For each water drop
                (1) Choose next node (Eqs. (5)-(8))
                (2) Update velocity (Eq. (9))
                (3) Update soil and depth (Eqs. (10)-(11))
                (4) Update carrying soil (Eq. (13))
            (c) End for
            (d) Calculate the solution fitness
            (e) Update local optimal solution
            (f) Update temperature (Eqs. (14)-(17))
        Until (Temperature = Evaporation temperature)
        Evaporation (WDs)
        Condensation (WDs)
        Precipitation (WDs)
(iv) End while
Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.
(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).
(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique, which helps to perform exploitation of the good solutions. Further, this stage can be used to improve the quality of the obtained solutions and to reduce the number of iterations.
(6) The evaporation process is a selection technique that helps the algorithm to escape from local optima. Evaporation happens when there are no improvements in the quality of the obtained solutions.
(7) The precipitation stage is used to reinitialize variables and redistribute WDs in the search space to generate new solutions.
Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
12 Journal of Optimization
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify evaporation rate (Eq. (18))
    Select WDs to evaporate
End
Condensation procedure (WDs)
    Information exchange among WDs (Eqs. (20)-(22))
    Identify the collector (WD)
    Update global solution
    Update the temperature
End
Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (edges belonging to best WDs) (Eq. (23))
    Generate a new WDs population
    Distribute the WDs randomly
End
Pseudocode 2: The evaporation, condensation, and precipitation pseudocode.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration, water movement in nature, but they are inspired by different aspects of the water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea, according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry/drop soil that changes the topography of the ground by making some paths deeper or fading others. This represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is considered a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value. The temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that there is no consideration for the condensation stage in WCA, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and the solution construction differ between the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major
differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process. Choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in comparison to the depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of the water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm to improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA, the carrying soil is encoded with the solution quality: more carrying soil indicates a better solution. Finally, in HCA, three critical stages (i.e., evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N x P) layers. In each layer, there are ten nodes labelled from zero to nine. Therefore, there are (10 x N x P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer being multiplied by 0.01, and so on. This can be done easily using
Xvalue = sum_{L=1}^{m} (n x 10^(-L)), (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

| Criteria | IWD | WCA | HCA |
| Flow stage | Moving water drops | Changing streams' positions | Moving water drops |
| Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth |
| Velocity | Always increases | N/A | Increases and decreases |
| Soil update | Removal | N/A | Removal and deposition |
| Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality |
| Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvement in the last set of iterations |
| Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number |
| Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector |
| Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil |
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node selected in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
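The decoding in (24) can be sketched as follows (illustrative Python, not the authors' code):

```python
def decode_digits(digits):
    """Eq. (24): map the nodes chosen down the layers to a value in [0, 1).
    digits[L-1] is the node (0-9) selected in layer L, so layer L
    contributes n * 10**(-L)."""
    return sum(n * 10 ** -(L + 1) for L, n in enumerate(digits))

# A drop visiting nodes 2, 7, 5, 6, 2, and 4 encodes the value 0.275624,
# the example given in the text.
value = decode_digits([2, 7, 5, 6, 2, 4])
```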
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions, all the variables may have the same domain range, whereas in others, variables may have different ranges. For simplicity, the values obtained by a water drop for each variable can be scaled to the domain of that variable:

Vvalue = (UB - LB) x Value + LB, (25)
where Vvalue is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value will be equal to 3.17504. The values of continuous variables are expressed as real numbers. Therefore, the precision (P) of the variables' values has to be identified. This precision identifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
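The scaling in (25) is a one-liner; a sketch using the example above:

```python
def scale_to_domain(value, lower, upper):
    """Eq. (25): scale a decoded value in [0, 1) onto the variable's domain."""
    return (upper - lower) * value + lower

# 0.658752 on the domain [-10, 10] maps to 3.17504, as in the text.
x = scale_to_domain(0.658752, -10, 10)
```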
4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables.
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the node with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

| Parameter | Value |
| Number of water drops | 10 |
| Maximum number of iterations | 500/1000 |
| Initial soil on each edge | 10000 |
| Initial velocity | 100 |
| Initial depth | Edge length / soil on that edge |
| Initial carrying soil | 1 |
| Velocity updating | alpha = 2 |
| Soil updating | PN = 0.99 |
| Initial temperature | 50, beta = 10 |
| Maximum temperature | 100 |
water drop solution. Each water drop will generate a solution for all the variables of the function. Therefore, a water drop solution is composed of a set of visited nodes, based on the number of variables and the precision of each variable:
Solutions = [ WD1: s(1) s(2) ... s(N x P);
              WD2: s(1) s(2) ... s(N x P);
              ...
              WDk: s(1) s(2) ... s(N x P) ]. (26)
An initial solution is generated randomly for each water drop, based on the function's dimensionality. Then, these solutions are evaluated and the best combination is selected. The selected solution is favored with less soil on the edges belonging to this solution. Subsequently, the water drops start flowing and generating solutions.
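For illustration only, the construction of one solution down the layers can be sketched as roulette-wheel selection over per-layer node probabilities. In the actual HCA these probabilities come from the soil and path-depth heuristics (Eqs. (5)-(8), not reproduced here), so the probability lists below are assumed as given:

```python
import random

def generate_solution(layer_probs, rng=random):
    """Build one water drop solution: at each layer, pick one of the ten
    nodes by roulette-wheel selection over that layer's probabilities."""
    return [rng.choices(range(10), weights=probs, k=1)[0]
            for probs in layer_probs]
```

With N variables at precision P, `layer_probs` would hold N x P probability lists, and the resulting digit sequence is decoded and scaled as in (24) and (25).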
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

Vnew = 1 - (beta x Vold), (27)
where beta is a uniform random number in the range [0, 1], Vnew is the new value after the mutation, and Vold is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
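A sketch of this mutation (illustrative; the uniform draw of beta follows (27)):

```python
import random

def mutate(v_old, rng=random):
    """Eq. (27): V_new = 1 - beta * V_old with beta ~ U[0, 1].
    Large values tend to flip to small ones and vice versa."""
    beta = rng.random()
    return 1 - beta * v_old
```

Note that for v_old in [0, 1] the result also stays in [0, 1], so the mutated value can be scaled to the variable's domain with (25) unchanged.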
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) will deepen with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, considering the depth to equal the distance over the soil (see (8)) will not affect the outcome as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties, with different types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision was assumed to be six significant figures for each variable. The HCA was executed twenty times, with a maximum of 500 iterations for all functions except those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed. The symbol "*" represents the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
Success rate = (Number of successful runs / Total number of runs) x 100. (28)
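A sketch of the success-rate computation in (28), using the paper's 0.001 acceptance threshold:

```python
def success_rate(results, optimum, tol=1e-3):
    """Eq. (28): percentage of runs whose result lies within `tol` of the
    known optimum (a run is accepted if the gap is below 0.001)."""
    successes = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * successes / len(results)

# 3 of 4 hypothetical runs reach the optimum 0.0 within 0.001.
rate = success_rate([0.0002, 0.0009, 0.5, 0.00001], optimum=0.0)
```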
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved 100% for all of the functions except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "Best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
The results of HCA and MHCA were compared using the Wilcoxon Signed-Rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, and best values and the average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics.

| Number | Function name | Lower bound | Upper bound | D | f(x*) | Type |
| (1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima |
| (2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima |
| (3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima |
| (4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima |
| (5) | Griewank | -600 | 600 | 2 | 0 | Many local minima |
| (6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima |
| (7) | Levy | -10 | 10 | 2 | 0 | Many local minima |
| (8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima |
| (9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima |
| (10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima |
| (11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima |
| (12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima |
| (13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima |
| (14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped |
| (15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped |
| (16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped |
| (17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped |
| (18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped |
| (19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped |
| (20) | McCormick | [-1.5, 4] | [-3, 4] | 2 | -1.9133 | Plate-shaped |
| (21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped |
| (22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped |
| (23) | Six-Hump Camel | [-3, 3] | [-2, 2] | 2 | -1.0316 | Valley-shaped |
| (24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped |
| (25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped |
| (26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops |
| (27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops |
| (28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops |
| (29) | Beale | -4.5 | 4.5 | 2 | 0 | Other |
| (30) | Branin | [-5, 10] | [0, 15] | 2 | 0.39788 | Other |
| (31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other |
| (32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other |
| (33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other |
| (34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other |
| (35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other |
| (36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other |
| (37) | Rosenbrock | -1.2 | 1.2 | 2 | 0 | Other |
| (38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other |
| (39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other |
| (40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other |
| (41) | Powell | -4 | 5 | 4 | 0 | Other |
| (42) | Wood | -5 | 5 | 4 | 0 | Other |
| (43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other |
| (44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function |
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). The symbol "*" is the average number of iterations to reach the global solution; values in **bold** reproduce the boldface "Best" entries of the original table.

| Number | HCA Worst | HCA Average | HCA Best | HCA * | MHCA Worst | MHCA Average | MHCA Best | MHCA * |
| (1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35 |
| (2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1 |
| (3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4 |
| (4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95 |
| (5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8 |
| (6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75 |
| (7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8 |
| (8) | 1.63E-05 | 4.70E-06 | **1.35E-31** | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3 |
| (9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9 |
| (10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85 |
| (11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25 |
| (12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | **2.02E-04** | 323.75 |
| (13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45 |
| (14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25 |
| (15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291 |
| (16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6 |
| (17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6 |
| (18) | 2.03E-05 | 6.33E-06 | **0.00E+00** | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5 |
| (19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55 |
| (20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8 |
| (21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | **4.28E-07** | 242.05 |
| (22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9 |
| (23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4 |
| (24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | **4.10E-08** | 380.55 |
| (25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55 |
| (26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35 |
| (27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6 |
| (28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1 |
| (29) | 9.25E-05 | 5.25E-05 | **3.41E-06** | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75 |
| (30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9 |
| (31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8 |
| (32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95 |
| (33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341 |
| (34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65 |
| (35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1 |
| (36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8 |
| (37) | 9.22E-06 | 3.67E-06 | **6.63E-08** | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6 |
| (38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45 |
| (39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5 |
| (40) | 1.94E-01 | 7.02E-02 | **1.64E-03** | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806 |
| (41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | **1.96E-03** | 840.85 |
| (42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | **1.00E-08** | 694.4 |
| (43) | 1.05E+00 | 3.88E-01 | **1.47E-05** | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2 |
| (44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85 |
| Mean (*) |  |  |  | 378.45 |  |  |  | 361.30 |
Table 5: Average execution time per number of variables (Hypersphere function).

| Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations) |
| Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962 |
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, where the number of layers in the representation increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space. If an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6. The symbol "*" represents the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon Signed-Rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44] and the Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows that there is a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. The symbol "*" is the average number of iterations; values in **bold** reproduce the boldface "Best" entries of the original table.

| Function name | D | Bnd | Best result | WCA Worst | WCA Avg | WCA Best | WCA * | HCA Worst | HCA Avg | HCA Best | HCA * |
| De Jong | 2 | +/-2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8 |
| Goldstein & Price I | 2 | +/-2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | **3.00E+00** | 345.8 |
| Branin | 2 | +/-2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9 |
| Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 262.1 |
| Rosenbrock | 2 | +/-1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | **6.63E-08** | 370.9 |
| Rosenbrock | 2 | +/-10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | **0.00E+00** | 350.6 |
| Rosenbrock | 4 | +/-1.2 | 0 | 7.98E-04 | 2.12E-04 | **3.23E-07** | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5 |
| Hypersphere | 6 | +/-5.12 | 0 | 9.22E-03 | 6.01E-03 | **1.34E-07** | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879 |
| Shaffer | 2 | +/-100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 8942 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 284.1 |
| Goldstein & Price I | 2 | +/-5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8 |
| Goldstein & Price II | 2 | +/-5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5 |
| Six-Hump Camelback | 2 | +/-10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 3105 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2 |
| Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6 |
| Wood | 4 | +/-5 | 0 | 3.81E-05 | 1.58E-06 | **1.30E-10** | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754 |
| Powell quartic | 4 | +/-5 | 0 | 2.87E-09 | 6.09E-10 | **1.12E-11** | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864 |
| Mean (*) |  |  |  |  |  |  | 7000.6 |  |  |  | 463.8 |
Table 7: Comparison between HCA and other algorithms in terms of the average number of iterations.

| Number | Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA |
| (1) | De Jong | 2 | +/-2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8 |
| (2) | Goldstein & Price I | 2 | +/-2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8 |
| (3) | Branin | 2 | +/-2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9 |
| (4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05 |
| (5a) | Rosenbrock | 2 | +/-1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9 |
| (5b) | Rosenbrock | 2 | +/-10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55 |
| (6) | Rosenbrock | 4 | +/-1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5 |
| (7) | Hypersphere | 6 | +/-5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879 |
| (8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1 |
| Mean |  |  |  | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8 |
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
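The W-values in Tables 8 and 10 are the standard Wilcoxon signed-rank statistic, W = min(W+, W−), where W+ and W− are the rank sums of the positive and negative paired differences; the result is significant when W is at or below the critical value for the sample size. As an illustration only (not the authors' code), the statistic can be computed in a few lines of Python:

```python
def wilcoxon_w(differences):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired differences.
    Zero differences are dropped; ties in |d| receive average ranks."""
    d = [x for x in differences if x != 0]
    pairs = sorted((abs(x), x) for x in d)  # sort by absolute difference
    n = len(pairs)
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of the tied ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = avg
        i = j
    w_plus = sum(r for (_, x), r in zip(pairs, rank_of) if x > 0)
    w_minus = sum(r for (_, x), r in zip(pairs, rank_of) if x < 0)
    return min(w_plus, w_minus)
```

For example, for the ANTS-versus-HCA row of Table 8, W = 0 is below the critical value 3, so the difference is significant under the Wilcoxon test.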
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithms compared | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions at each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and its global minimum value is −1 at (x1 = π, x2 = π). Solving this function is difficult because the flatness of the landscape gives no indication of the direction towards the global minimum.
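The Easom function is cheap to evaluate, so the stated minimum is easy to check numerically; a minimal sketch (illustrative code, not the authors' implementation):

```python
import math

def easom(x1, x2):
    # f(x) = -cos(x1) cos(x2) exp(-(x1 - pi)^2 - (x2 - pi)^2)
    return -math.cos(x1) * math.cos(x2) * math.exp(
        -(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2
    )

# Global minimum f(pi, pi) = -1; away from (pi, pi) the exponential term
# flattens the surface towards 0, which is what makes the search hard.
```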
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions at each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables. The HCA minimized the function after 919 iterations. As Figure 13 shows, the HCA required more iterations to minimize functions with more than two variables because of the increase in the size of the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optimum has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the iteration number (Ackley function, converged at iteration 383; axes: number of iterations vs. function value; legend: global best, local best).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (converged at iteration 480; axes: number of iterations vs. function value; legend: global best, local best).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (converged at iteration 475; axes: number of iterations vs. function value; legend: global best, local best).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d} \cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$
Cross-In-Tray: $f(x) = -0.0001 \left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$
Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$
Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$
Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$
Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$
Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2 \left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2 \left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$, $\forall i = 1, \dots, d$
Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2 \left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2 \left[1 + \sin^2(2\pi x_2)\right]$
Rastrigin: $f(x) = 10d + \sum_{i=1}^{d} \left[x_i^2 - 10\cos(2\pi x_i)\right]$
Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$
Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$
Schwefel: $f(x) = 418.9829 d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$
Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$
Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$
Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d} \sum_{j=1}^{i} x_j^2$
Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$
Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$
Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$
Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$
McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$
Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$
Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$
Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$
Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$
Rosenbrock: $f(x) = \sum_{i=1}^{d-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$
Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$
Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$
Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$
Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$
Branin: $f(x) = a\left(x_2 - b x_1^2 + c x_1 - r\right)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$
Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2 \left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2 \left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$
Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$
Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d} \left(x_i^4 - 16x_i^2 + 5x_i\right)$
Gramacy & Lee (2008): $f(x) = x_1 \exp\left(-x_1^2 - x_2^2\right)$
Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$
Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$
Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}\left(x_j - p_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, and $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$
Powell: $f(x) = \sum_{i=1}^{d/4} \left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$
Wood: $f(x_1, x_2, x_3, x_4) = 100\left(x_1^2 - x_2\right)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$
De Jong (max): $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1 - x_1)^2$
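A few of the formulations in Table 11 can be rendered directly in Python as a sanity check of the known minima (Ackley and Rastrigin at the origin, Rosenbrock at (1, …, 1)); this is illustrative code, not the authors' implementation:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    # f(x) = -a exp(-b sqrt(mean(x_i^2))) - exp(mean(cos(c x_i))) + a + e
    d = len(x)
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(math.cos(c * xi) for xi in x)
    return -a * math.exp(-b * math.sqrt(s1 / d)) - math.exp(s2 / d) + a + math.exp(1)

def rastrigin(x):
    # f(x) = 10 d + sum(x_i^2 - 10 cos(2 pi x_i))
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    # f(x) = sum over i of 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2
    return sum(
        100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
        for i in range(len(x) - 1)
    )
```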
Figure 13: Convergence of HCA on the Hypersphere function with six variables (converged at iteration 919; axes: iteration number vs. function value; legend: global-best solution).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm—a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method—a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the 12th International Conference on Intelligent Computing Theories and Application (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the International Conference on Advanced Research on Electronic Commerce, Web Application, and Communication (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [PhD thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504–505, 2006.
cycle was designed as an exploitation process. A confluence-between-rivers process is also included to improve convergence speed. These improvements aim to help the original WCA avoid falling into local optima. The DSWCA was able to provide better results than WCA. In a further extension of WCA, Heidari et al. proposed a chaotic WCA (CWCA) that incorporates thirteen chaotic patterns into the WCA movement process to improve WCA's performance and deal with the premature convergence problem [31]. The experimental results demonstrated improvement in controlling the premature convergence problem. However, WCA and its extensions introduce many additional features, many of which are not compatible with nature-inspiration, in order to improve performance and make WCA competitive with non-water-based algorithms. Detailed differences between WCA and our HCA are described in Section 3.9.
The review of previous approaches shows that nature-inspired swarm approaches tend to introduce evolutionary concepts of mutation and crossover for sharing information, to adopt nonplausible global data structures to prevent convergence to local optima, or to add more centralized control parameters to preserve the balance between exploration and exploitation in increasingly difficult problem domains, thereby compromising self-organization. When such approaches are modified to deal with COPs, complex and unnatural representations of numbers may also be required; these add to the overheads of the algorithm and lead to what may appear to be "forced" and unnatural ways of dealing with the complexities of a continuous problem.
In swarm algorithms, a number of cooperative homogeneous entities explore and interact with the environment to achieve a goal, which is usually to find the global-optimal solution to a problem through exploration and exploitation. Cooperation can be accomplished by some form of communication that allows the swarm members to share exploration and exploitation information to produce high-quality solutions [32, 33]. Exploration aims to acquire as much information as possible about promising solutions, while exploitation aims to improve such promising solutions. Controlling the balance between exploration and exploitation is usually considered critical when searching for more refined as well as more diverse solutions. Also important is how much exploration and exploitation information should be shared, and when.
Information sharing in nature-inspired algorithms usually involves metrics for evaluating the usefulness of information and mechanisms for how it should be shared. In particular, these metrics and mechanisms should not contain more knowledge or require more intelligence than can plausibly be assigned to the swarm entities, where the emphasis is on the emergence of complex behavior from relatively simple entities interacting in nonintelligent ways with each other. Information can be shared either randomly or nonrandomly among entities. Different approaches are used to share information, such as direct or indirect interaction. Two sharing models are one-to-one, in which explicit interaction occurs between entities, and many-to-many (broadcasting), in which interaction occurs indirectly between the entities through the environment [34].
Usually, either direct or indirect interaction between entities is employed in an algorithm for information sharing, and the use of both types in the same algorithm is often neglected. For instance, the ACO algorithm uses indirect communication for information sharing, where ants lay amounts of pheromone on paths depending on the usefulness of what they have explored: the more promising the path, the more pheromone is added, leading other ants to exploit this path further. Pheromone is not directed at any other ant in particular. In the artificial bee colony (ABC) algorithm, bees share information directly (specifically, the direction and distance to flowers) in a kind of waggle dance [35]. The information received by other bees depends on which bee they observe, not on the information gathered by all bees. In a GA, information exchange occurs directly, in the form of crossover operations between selected members (chromosomes and genes) of the population. The quality of the information shared depends on the members chosen, and there is no overall store of information available to all members. In PSO, on the other hand, the particles have their own information as well as access to global information to guide their exploitation. This can lead to competitive and cooperative optimization techniques [36]. However, no attempt is made in standard PSO to share information possessed by individual particles. This lack of individual-to-individual communication can lead to early convergence of the entire population of particles to a nonoptimal solution. Selective information sharing between individual particles will not always avoid this narrow exploitation problem but, if undertaken properly, could allow the particles to find alternative and more diverse solution spaces where the global optimum lies.
As can be seen from the above discussion, the distinctions between communication (direct and indirect) and information sharing (random or nonrandom) can become blurred. In HCA, we utilize both direct and indirect communication to share information among selected members of the whole population. Direct communication occurs in the condensation stage of the water cycle, where some water drops collide with each other when they evaporate into the cloud. Indirect communication occurs between the water drops and the environment via the soil and path depth. Utilizing information sharing helps to diversify the search space and improve overall performance through better exploitation. In particular, we show how HCA with direct and indirect communication suits continuous problems, where individual particles share useful information directly when searching a continuous space.
Furthermore, in some algorithms, indirect communication is used not just to find a possible solution but also to reinforce an already found promising solution. Such reinforcement may cause the algorithm to get stuck in the same solution, which is known as stagnation behavior [37]. It can also cause premature convergence in some problems. An important feature used in HCA to avoid this problem is path depth. The depths of paths are constantly changing, and deep paths have less chance of being chosen than shallow paths. More details about this feature are provided in Section 3.3.
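As a rough sketch of how such a depth bias might work, a path could be chosen by roulette-wheel selection with probability inversely proportional to its depth. This is a hypothetical illustration only, not the paper's exact flow-stage rule (which also involves soil amounts and water drop velocity, described in Section 3.3):

```python
import random

def choose_path(depths, rng=random.random):
    """Roulette-wheel selection biased towards shallow paths:
    path i is chosen with probability proportional to 1/depths[i].
    (Illustrative sketch; `depths` and the weighting are assumptions.)"""
    weights = [1.0 / d for d in depths]
    total = sum(weights)
    r = rng() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(depths) - 1
```

With depths [1.0, 4.0], the shallow path carries weight 1.0 against 0.25, so it is selected about 80% of the time; as deposition deepens a path, its selection probability keeps falling, which is what discourages stagnation on a single route.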
In summary, this paper aims to introduce a novel nature-inspired approach for dealing with COPs, one that also introduces a continuous number representation that is natural for the algorithm. The cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through direct and indirect communication between particles, feedback from the environment, and internal self-evaluation. Finally, our HCA addresses some of the weaknesses of other similar algorithms: HCA is inspired by the full water cycle, not parts of it, as exemplified by IWD and WCA.
3. The Hydrological Cycle Algorithm
The water cycle, also known as the hydrologic cycle, describes the continuous movement of water in nature [38]. It is renewable because water particles are constantly circulating from the land to the sky and vice versa; therefore, the amount of water remains constant. Water drops move from one place to another by natural physical processes, such as evaporation, precipitation, condensation, and runoff. In these processes, the water passes through different states.
3.1. Inspiration. The water cycle starts when surface water gets heated by the sun, causing water from different sources to change into vapor and evaporate into the air. Then the water vapor cools and condenses as it rises, becoming drops. These drops combine and collect with each other and eventually become so saturated and dense that they fall back, under the force of gravity, in a process called precipitation. The resulting water flows on the surface of the Earth into ponds, lakes, or oceans, where it again evaporates back into the atmosphere. Then the entire cycle is repeated.
In the flow (runoff) stage, the water drops start moving from one location to another based on Earth's gravity and the ground topography. When the water is scattered, it chooses a path with less soil and fewer obstacles. Assuming no obstacles exist, water drops will take the shortest path towards the center of the Earth (i.e., oceans), owing to the force of gravity. Water drops have force because of this gravity, and this force causes changes in motion depending on the topology. As the water flows in rivers and streams, it picks up sediments and transports them away from their original location. These processes are known as erosion and deposition. They help to create the topography of the ground by making new paths with more soil removal, or by the fading of other paths as they become less likely to be chosen owing to too much soil deposition. These processes are highly dependent on water velocity. For instance, a river is continually picking up and dropping off sediments from one point to another. When the river flow is fast, soil particles are picked up, and the opposite happens when the river flow is slow. Moreover, there is a strong relationship between the water velocity and the suspension load (the amount of dissolved soil in the water) that it currently carries. The water velocity is higher when only a small amount of soil is being carried and lower when larger amounts are carried.
In addition, the term "flow rate" or "discharge" can be defined as the volume of water in a river passing a defined point over a specific period. Mathematically, flow rate is calculated based on the following equation [39]:

Q = V × A ⟹ [Q = V × (W × D)], (3)

where Q is the flow rate, A is the cross-sectional area, V is the velocity, W is the width, and D is the depth. The cross-sectional area is calculated by multiplying the depth by the width of the stream. The velocity of the flowing water is defined as the quantity of discharge divided by the cross-sectional area. Therefore, there is an inverse relationship between the velocity and the cross-sectional area [40]: the velocity increases when the cross-sectional area is small, and vice versa. If we assume that the river has a constant flow rate (velocity × cross-sectional area = constant) and that the width of the river is fixed, then we can conclude that the velocity is inversely proportional to the depth of the river, in which case the velocity decreases in deep water, and vice versa. Therefore, the amount of soil deposited increases as the velocity decreases. This explains why less soil is taken from deep water and more soil is taken from shallow water. A deep river will have water moving more slowly than a shallow river.
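As a quick numerical check, the inverse relationship implied by (3) can be sketched in a few lines (the function name and sample values are ours, for illustration only):

```python
def velocity(flow_rate, width, depth):
    """Velocity of a stream with constant discharge: Q = V * (W * D)."""
    return flow_rate / (width * depth)

# With the discharge and width held fixed, a deeper channel flows
# more slowly, which is why deep water erodes less soil.
shallow = velocity(flow_rate=12.0, width=4.0, depth=0.5)
deep = velocity(flow_rate=12.0, width=4.0, depth=3.0)
```

Here the shallow channel (depth 0.5) flows six times faster than the deep one (depth 3.0), matching the observation that a deep river moves more slowly than a shallow river.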
In addition to river depth and the force of gravity, there are other factors that affect the flow velocity of water drops and cause them to accelerate or decelerate. These factors include obstructions, the amount of soil existing in the path, the topography of the land, variations in the channel's cross-sectional area, and the amount of dissolved soil being carried (the suspension load).
Water evaporation increases with temperature, with maximum evaporation at 100°C (boiling point). Conversely, condensation increases as the water vapor cools towards 0°C. The evaporation and condensation stages play important roles in the water cycle. Evaporation is necessary to ensure that the system remains in balance. In addition, the evaporation process helps to decrease the temperature and keeps the air moist. Condensation is a very important process, as it completes the water cycle. When water vapor condenses, it releases into the environment the same amount of thermal energy that was needed to make it a vapor. However, in nature it is difficult to measure the precise evaporation rate, owing to the existence of different sources of evaporation; hence, estimation techniques are required [38].
The condensation process starts when the water vapor rises into the sky; then, as the temperature decreases in the higher layers of the atmosphere, the particles with lower temperature have more chance to condense [41]. Figure 2(a) illustrates the situation where there are no attractions (or connections) between the particles at high temperature. As the temperature decreases, the particles collide and agglomerate to form small clusters, leading to clouds, as shown in Figure 2(b).
An interesting phenomenon, in which some of the large water drops may eliminate other, smaller water drops, occurs during the condensation process, as some of the water drops (known as collectors) fall from a higher layer to a lower layer in the atmosphere (in the cloud). Figure 3 depicts the action of a water drop falling and colliding with smaller
(a) Hot water. (b) Cold water.
Figure 2: Illustration of the effect of temperature on water drops.
Figure 3: A collector water drop in action.
water drops. Consequently, the water drop grows as a result of coalescence with the other water drops [42].
Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground. Of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect the flowing water drops through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of other key concepts taken from the full hydrological process.
Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: Flow (Runoff), Evaporation, Condensation, and Precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

HCA = {WD1, WD2, WD3, ..., WDn}, where n ≥ 1. (4)
The characteristics associated with each water drop are as follows:

(i) V^WD: the velocity of the water drop.
(ii) Soil^WD: the amount of soil the water drop carries.
(iii) ψ^WD: the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph G = (N, E), where N is the set of nodes and E is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph (G) according to various rules, via the edges (E), searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.
The initial amount of soil is the same for all the edges. The depth measures the distance from the water surface to the riverbed, and it can be calculated by dividing the length of the edge by the amount of soil. Therefore, the depth varies and depends on the amount of soil that exists on the path and on its length. The depth of the path increases when more soil is removed over the same length. A deep path can be interpreted as either the path being lengthy or there being less soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.
The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution for the problem by continuously moving between the nodes. Each iteration consists of specific steps (which will be explained below). A cycle, on the other hand, represents a varied number of iterations. A cycle is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
3.3. Flow Stage (Runoff). This stage represents the construction stage, where the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions, constructing various solutions. Each water drop constructs a
Figure 4: Depth increases as the length increases for the same amount of soil (soil = 500; length 10 versus length 20, the longer edge having depth 20/500 = 0.04).
Figure 5: Depth increases when the amount of soil is reduced over the same length (length 10; soil 500 gives depth 10/500 = 0.02, while soil 1000 gives depth 10/1000 = 0.01).
solution incrementally by moving from one point to another. Whenever a water drop wants to move towards a new node, it has to choose between a number of nodes (various branches). It calculates the probability of all the unvisited nodes and chooses the node with the highest probability, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit. In HCA, the node with the highest probability will be selected. The probability of the node is calculated using

P_i^WD(j) = [f(Soil(i, j))² × g(Depth(i, j))] / Σ_{k∉vc(WD)} [f(Soil(i, k))² × g(Depth(i, k))], (5)

where P_i^WD(j) is the probability of choosing node j from node i, the sum runs over the nodes k not yet visited by the water drop, and f(Soil(i, j)) is equal to the inverse of the soil between i and j, calculated using

f(Soil(i, j)) = 1 / (ε + Soil(i, j)). (6)

Here, ε = 0.01 is a small value that is used to prevent division by zero. The second factor of the transition rule is the inverse of the depth, which is calculated based on

g(Depth(i, j)) = 1 / Depth(i, j). (7)

Depth(i, j) is the depth between nodes i and j, calculated by dividing the length of the path by the amount of soil. This can be expressed as follows:

Depth(i, j) = Length(i, j) / Soil(i, j). (8)

The depth value can be normalized to be in the range [1–100], as it might be very small. The depth of the path is constantly changing because it is influenced by the amount of existing
Figure 6: The relationship between soil and depth over the same length (a water drop at node S choosing between nodes A and B).
soil, where the depth is inversely proportional to the amount of the existing soil in a path of fixed length. Water drops will choose paths with less depth over other paths. This enables exploration of new paths and diversifies the generated solutions. Assume the following scenario (depicted in Figure 6), where water drops have to choose between node A or node B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents the soil amount). Consequently, the depth values for the two edges will be the same. After some water drops chose node A to visit, the edge (S-A) as a result has less soil and is deeper. According to the new state transition rule, the next water drops will be forced to choose node B, because edge (S-B) will be shallower than (S-A). This technique provides a way to avoid stagnation and to explore the search space efficiently.
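For illustration, the state transition rule of (5)–(8) can be sketched as follows (a minimal Python sketch; the edge values, dictionary layout, and names are our own, and all candidate nodes are assumed unvisited):

```python
EPSILON = 0.01  # small constant from Eq. (6) to avoid division by zero

def depth(length, soil):
    """Eq. (8): the depth of an edge is its length divided by its soil."""
    return length / soil

def transition_probabilities(edges):
    """Eq. (5): probability of each candidate node j from the current node.

    `edges` maps a candidate node to its (length, soil) pair.
    """
    scores = {}
    for node, (length, soil) in edges.items():
        f = 1.0 / (EPSILON + soil)        # Eq. (6): inverse of soil
        g = 1.0 / depth(length, soil)     # Eq. (7): inverse of depth
        scores[node] = (f ** 2) * g
    total = sum(scores.values())
    return {node: s / total for node, s in scores.items()}

# Two edges with equal soil but different lengths: the shorter edge is
# shallower (Eq. (8)), so it receives the higher transition probability.
probs = transition_probabilities({"A": (10.0, 500.0), "B": (20.0, 500.0)})
best = max(probs, key=probs.get)
```

With equal soil on both edges, the soil factor f cancels in the normalization, and the shorter (shallower) edge A gets probability 2/3.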
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity (V). The velocity of a water drop might increase or decrease while it is moving, based on the factors mentioned above (the amount of soil existing on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time (t + 1) can be calculated using

V^WD_{t+1} = [K × V^WD_t] + α(V^WD / Soil(i, j)) + √(V^WD / Soil^WD) + (100 / ψ^WD) + √(V^WD / Depth(i, j)). (9)
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where V^WD_t is the current water drop velocity and K is a uniformly distributed random number between [0, 1] that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop velocity to decrease. Owing to difficulties in measuring the exact effect of these factors, some randomness is incorporated into the velocity.
The second term of the expression in (9) reflects the relationship between the amount of soil existing on a path and the velocity of a water drop. The velocity of the water drops increases when they traverse paths with less soil. Alpha (α) is a relative influence coefficient that emphasizes this part of the velocity update equation. The third term of the expression represents the relationship between the amount of soil that a water drop is currently carrying and its velocity. Water drop velocity increases with a decreasing amount of soil being carried. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term of the expression refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term of the expression indicates that a water drop will be slower in a deep path than in a shallow path.
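A minimal sketch of the velocity update in (9) follows (the coefficient value α = 1 and the sample inputs are illustrative choices of ours, not prescribed by the paper):

```python
import math
import random

ALPHA = 1.0  # relative influence coefficient (problem-dependent; our choice)

def update_velocity(v, path_soil, carried_soil, fitness, path_depth, k=None):
    """A sketch of the velocity update in Eq. (9).

    v            -- current velocity V_t
    path_soil    -- Soil(i, j) on the chosen edge
    carried_soil -- soil the drop itself carries (Soil^WD)
    fitness      -- solution quality (psi) of the drop
    path_depth   -- Depth(i, j) of the chosen edge
    k            -- roughness coefficient, drawn from [0, 1] if not given
    """
    if k is None:
        k = random.random()
    return (k * v
            + ALPHA * (v / path_soil)      # less soil on the path -> faster
            + math.sqrt(v / carried_soil)  # lighter drops speed up
            + 100.0 / fitness              # fitness contribution
            + math.sqrt(v / path_depth))   # deep paths slow the drop

v_new = update_velocity(v=4.0, path_soil=500.0, carried_soil=100.0,
                        fitness=50.0, path_depth=0.02, k=0.5)
```

Fixing k makes the example deterministic; in the algorithm itself, k is redrawn on every move.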
3.3.2. Soil Update. Next, the amount of soil existing on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by

Soil(i, j) = [PN × Soil(i, j)] − ΔSoil(i, j) − √(1 / Depth(i, j)), if V^WD ≥ Avg(all V^WDs) (Erosion),
Soil(i, j) = [PN × Soil(i, j)] + ΔSoil(i, j) + √(1 / Depth(i, j)), otherwise (Deposition). (10)
PN represents a coefficient (i.e., a sediment transport rate, or gradation coefficient) that may affect the reduction in the amount of soil. Soil can be removed only if the current water drop velocity is greater than the average velocity of all the other water drops; otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, usually from the fast parts of the path to the slow parts. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in locally optimal solutions. The amount of soil existing between node i and node j is calculated and changed using

ΔSoil(i, j) = 1 / time^WD_{i,j}, (11)

such that

time^WD_{i,j} = Distance(i, j) / V^WD_{t+1}. (12)
Equation (11) shows that the amount of soil being removed is inversely proportional to time. Based on the second law of motion, time is equal to distance divided by velocity; therefore, time is proportional to the path length and inversely proportional to the velocity. Furthermore, the amount of soil being removed or added is inversely proportional to the depth of the path. This is because shallow soils are easy to remove compared with deep soils. Therefore, the amount of soil being removed decreases as the path depth increases. This factor facilitates the exploration of new paths, because less soil is removed from the deep paths, as they have been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path needs to be updated when the amount of soil existing on the path changes, using (8).
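The erosion/deposition rule of (10)–(12) can be sketched as follows (the value of PN and the sample inputs are our own illustrative choices):

```python
import math

PN = 0.99  # sediment transport (gradation) coefficient; our example value

def update_soil(path_soil, path_length, velocity, avg_velocity):
    """A sketch of Eqs. (10)-(12): erosion for fast drops, deposition for slow.

    Returns the updated Soil(i, j) and the refreshed Depth(i, j) via Eq. (8).
    """
    path_depth = path_length / path_soil        # Eq. (8)
    time = path_length / velocity               # Eq. (12): distance over V
    delta_soil = 1.0 / time                     # Eq. (11)
    if velocity >= avg_velocity:                # erosion branch
        new_soil = PN * path_soil - delta_soil - math.sqrt(1.0 / path_depth)
    else:                                       # deposition branch
        new_soil = PN * path_soil + delta_soil + math.sqrt(1.0 / path_depth)
    return new_soil, path_length / new_soil     # depth re-derived from Eq. (8)

# A drop faster than the average erodes the edge, deepening it.
soil, depth = update_soil(path_soil=500.0, path_length=10.0,
                          velocity=8.0, avg_velocity=5.0)
```

Because the drop is faster than average, the edge loses soil and its depth increases slightly, exactly the erosion behavior described above.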
3.3.3. Soil Transportation Update. In the HCA, we consider the fact that the amount of soil a water drop carries reflects its solution quality. This is done by associating the quality of the water drop's solution with the soil value it carries. Figure 7 illustrates the fact that a water drop carrying more soil has a better solution.
To simulate this association, the amount of soil that the water drop is carrying is updated with a portion of the soil being removed from the path, divided by the last solution quality found by that water drop. Therefore, a water drop with a better solution will carry more soil. This can be expressed as follows:

Soil^WD = Soil^WD + ΔSoil(i, j) / ψ^WD. (13)
The idea behind this step is to let a water drop preserve its previous fitness value. Later, the amount of carried soil is used in updating the velocity of the water drops.
3.3.4. Temperature Update. The final step that occurs at this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

Temp(t + 1) = Temp(t) + ΔTemp, (14)

where

ΔTemp = β × (Temp(t) / ΔD), if ΔD > 0,
ΔTemp = Temp(t) / 10, otherwise, (15)
Figure 7: Relationship between the amount of carried soil and solution quality (more carried soil indicates a better solution).
and where the coefficient β is determined based on the problem. The difference (ΔD) is calculated based on

ΔD = MaxValue − MinValue, (16)

such that

MaxValue = max[SolutionsQuality(WDs)],
MinValue = min[SolutionsQuality(WDs)]. (17)
According to (16), the increase in temperature is affected by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and as a result the temperature will increase more. This means that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a few times, until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
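A sketch of the temperature update in (14)–(17) follows (the value β = 10 and the sample quality lists are our own illustrative choices):

```python
BETA = 10.0  # problem-dependent coefficient; our example value

def update_temperature(temp, qualities):
    """A sketch of Eqs. (14)-(17): heat up faster when solutions stagnate."""
    delta_d = max(qualities) - min(qualities)   # Eq. (16)
    if delta_d > 0:
        delta_temp = BETA * (temp / delta_d)    # Eq. (15), some diversity
    else:
        delta_temp = temp / 10.0                # Eq. (15), total stagnation
    return temp + delta_temp                    # Eq. (14)

# A small spread in solution quality (stagnation) raises the temperature
# sharply, pushing the algorithm towards the evaporation stage sooner.
stagnant = update_temperature(50.0, [10.0, 10.5])
diverse = update_temperature(50.0, [10.0, 60.0])
```

The stagnant population heats from 50 to 1050 in one step, while the diverse one only reaches 60, so evaporation is triggered much earlier when the drops stop improving.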
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

ER = Random(1, N), (18)

where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
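Roulette-wheel selection of the evaporating drops can be sketched as follows (the fitness values and the choice to collect distinct drops into a set are our own assumptions):

```python
import random

def roulette_wheel_select(population, fitnesses, n_select):
    """Fitness-proportionate (roulette-wheel) selection of evaporating drops.

    Drops with higher fitness occupy a larger slice of the wheel, so they
    are more likely to be chosen; spins repeat until n_select distinct
    drops have been collected.
    """
    total = sum(fitnesses)
    selected = set()
    while len(selected) < n_select:
        pick = random.uniform(0.0, total)
        cumulative = 0.0
        for drop, fitness in zip(population, fitnesses):
            cumulative += fitness
            if cumulative >= pick:
                selected.add(drop)
                break
    return selected

drops = ["WD1", "WD2", "WD3", "WD4"]
er = random.randint(1, len(drops))  # Eq. (18): ER = Random(1, N)
evaporated = roulette_wheel_select(drops, [4.0, 3.0, 2.0, 1.0], er)
```

Every drop can be selected, but WD1 is four times as likely per spin as WD4, which mirrors the quality-weighted selection described above.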
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off each other. These operations are useful, as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior among all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been proven to improve solution quality significantly when applied by other algorithms [37].
The condensation stage is considered to be a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that it will reduce the number of iterations needed and help to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (the number of evaporated water drops):

EW = {WD1, WD2, ..., WDn}. (19)
Initially, all possible combinations (as pairs) are selected from the evaporated water drops, to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, determining which operation will take place is based on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process will occur. When the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

OP(WD1, WD2) = Bounce(WD1, WD2), if Similarity < 50%,
OP(WD1, WD2) = Merge(WD1, WD2), if Similarity ≥ 50%. (20)
We use the Hamming distance [43], which counts the number of differing positions in two series of the same length, to obtain a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other one in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

WD1^Vel = WD1^Vel + WD2^Vel. (21)
On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them to choose a node that has a better contribution from all the possible nodes at the flow stage. The information consists of the weight of each node. The weight measures a node's occurrence within a solution over the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point. With this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

Weight(node) = Node_occurrence / WD_sol. (22)
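The condensation operations of (20)–(22) can be sketched together (representing a water drop as a plain dictionary is our own choice, not prescribed by the paper):

```python
def similarity(sol1, sol2):
    """Similarity derived from the Hamming distance of equal-length solutions."""
    matches = sum(a == b for a, b in zip(sol1, sol2))
    return matches / len(sol1)

def node_weights(wd):
    """Eq. (22): weight of a node is its occurrence over the solution quality."""
    return {n: wd["solution"].count(n) / wd["quality"] for n in wd["solution"]}

def collide(wd1, wd2, shared_info):
    """A sketch of Eq. (20): merge on >= 50% similarity, otherwise bounce."""
    if similarity(wd1["solution"], wd2["solution"]) >= 0.5:
        wd1["velocity"] += wd2["velocity"]   # Eq. (21): the collector absorbs
        return "merge"
    shared_info.update(node_weights(wd1))    # bounce: exchange node weights
    shared_info.update(node_weights(wd2))
    return "bounce"

# The paper's example pair: 12468 versus 13458, Hamming distance 2 of 5,
# hence similarity 0.6, which triggers a merge.
info = {}
a = {"solution": [1, 2, 4, 6, 8], "velocity": 3.0, "quality": 10.0}
b = {"solution": [1, 3, 4, 5, 8], "velocity": 2.0, "quality": 12.0}
op = collide(a, b, info)
```

Because the pair merges, no node weights are shared in this run; a bounce would instead populate `info` for use at the next flow stage.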
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking to the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

Soil(i, j) = 0.9 × Soil(i, j), ∀(i, j) ∈ BestWD. (23)
The idea behind this is to favor these edges over the other edges in the next cycle.
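The reinforcement step in (23) amounts to scaling the soil on the best drop's edges (a minimal sketch; the edge encoding as node pairs is our own):

```python
def reinforce_best_path(soil, best_solution, rate=0.9):
    """A sketch of Eq. (23): thin the soil on every edge of the best solution.

    `soil` maps edges (i, j) to soil amounts; `best_solution` is the node
    sequence traversed by the best water drop (the collector).
    """
    for edge in zip(best_solution, best_solution[1:]):
        soil[edge] = rate * soil[edge]  # deeper edge -> favored next cycle
    return soil

soil = {(1, 2): 500.0, (2, 3): 500.0, (3, 4): 500.0}
reinforce_best_path(soil, [1, 2, 3])
```

Only the edges on the best solution ((1, 2) and (2, 3)) lose soil and become deeper; edge (3, 4) is untouched.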
3.7. Summary of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:
(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, to reduce the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solutions obtained in every iteration.
(2) The flow of the water drops is controlled by the path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This idea supports the competition between water drops and prevents a single drop from sweeping aside the rest of the drops, because a high-velocity water
HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
  Cycle = Cycle + 1
  Flow stage:
  Repeat
    (a) Iteration = Iteration + 1
    (b) For each water drop
      (1) Choose next node (Eqs. (5)-(8))
      (2) Update velocity (Eq. (9))
      (3) Update soil and depth (Eqs. (10)-(11))
      (4) Update carrying soil (Eq. (13))
    (c) End for
    (d) Calculate the solution fitness
    (e) Update local optimal solution
    (f) Update temperature (Eqs. (14)-(17))
  Until (Temperature = Evaporation temperature)
  Evaporation (WDs)
  Condensation (WDs)
  Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.
(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).
(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique, which helps to perform exploitation of the good solutions. Further, this stage can be used to improve the quality of the obtained solutions and to reduce the number of iterations.
(6) The evaporation process is a selection technique and helps the algorithm to escape from local optima. Evaporation happens when there are no improvements in the quality of the obtained solutions.
(7) The precipitation stage is used to reinitialize variables and redistribute the WDs in the search space to generate new solutions.
Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help to strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
  Evaluate the solution quality
  Identify evaporation rate (Eq. (18))
  Select WDs to evaporate
End

Condensation procedure (WDs)
  Information exchange (WDs) (Eqs. (20)-(22))
  Identify the collector (WD)
  Update global solution
  Update the temperature
End

Precipitation procedure (WDs)
  Evaluate the solution quality
  Reinitialize dynamic parameters
  Global soil update (edges belonging to best WDs) (Eq. (23))
  Generate a new WD population
  Distribute the WDs randomly
End

Pseudocode 2: The Evaporation, Condensation, and Precipitation pseudocode.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms, such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of the water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea, according to their fitness. The swap process simulates the effects of evaporation and rainfall, rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry or drop soil, changing the topography of the ground by making some paths deeper or fading others. This represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is considered a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value. The temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that there is no consideration of the condensation stage in WCA, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information-sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and the solution construction differ in the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; in that case, the paths' depths are used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process. Choosing the edge with less depth favors exploration; alternatively, choosing the edge with less soil favors exploitation. Further, in HCA additional factors, such as the amount of soil in comparison to the depth, are considered in the velocity update, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm to improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, in HCA three critical stages (i.e., evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4 Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach to continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer, there are ten nodes, labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountainwith different layers or contour lines There is no connectionbetween the nodes in the same layer this prevents a waterdrop from selecting two nodes from the same layer Aconnection exists only from one layer to the next lower layerin which a node in a layer is connected to all the nodesin the next layer The value of a node in layer 119894 is higherthan that of a node in layer 119894 + 1 This can be interpreted asthe selected number from the first layer being multiplied by01 The selected number from the second layer is found bymultiplying by 001 and so on This can be done easily using
$X_{\mathrm{value}} = \sum_{L=1}^{m} n_L \times 10^{-L}$ (24)
Journal of Optimization 13
Table 1: A summary of the main differences between IWD, WCA, and HCA.

| Criteria | IWD | WCA | HCA |
| --- | --- | --- | --- |
| Flow stage | Moving water drops | Changing streams' positions | Moving water drops |
| Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth |
| Velocity | Always increases | N/A | Increases and decreases |
| Soil update | Removal | N/A | Removal and deposition |
| Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality |
| Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations |
| Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number |
| Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector |
| Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil |
where $L$ is the layer number, $m$ is the maximum layer number for each variable (the number of precision digits), and $n_L$ is the node selected in layer $L$. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
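The decoding in (24) can be sketched as follows; the helper name is ours, not the paper's.

```python
# Sketch of Eq. (24): decode the nodes a water drop visited for one
# variable, one node per layer, into a value in [0, 1).
def decode_path(nodes):
    # node n_L chosen in layer L contributes n_L * 10^(-L)
    return sum(n * 10.0 ** -(L + 1) for L, n in enumerate(nodes))

# The worked example from the text: nodes 2, 7, 5, 6, 2, 4 -> 0.275624.
value = decode_path([2, 7, 5, 6, 2, 4])
print(round(value, 6))  # 0.275624
```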
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should lie within the function domain, which defines the set of possible values for each variable. In some functions, all the variables have the same domain range, whereas in others the variables have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

$V_{\mathrm{value}} = (UB - LB) \times \mathrm{Value} + LB$ (25)
where $V_{\mathrm{value}}$ is the real value of the variable, $UB$ is the upper bound, and $LB$ is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [−10, 10], then the real value is 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be specified. This precision determines the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than would be possible by simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
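The scaling in (25) is a one-liner; the helper name below is assumed for illustration.

```python
# Sketch of Eq. (25): map a decoded value in [0, 1) onto the
# variable's domain [LB, UB].
def scale_to_domain(value, lb, ub):
    return (ub - lb) * value + lb

# The worked example from the text: 0.658752 on [-10, 10] gives 3.17504.
print(round(scale_to_domain(0.658752, -10, 10), 5))  # 3.17504
```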
4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables.
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

| Parameter | Value |
| --- | --- |
| Number of water drops | 10 |
| Maximum number of iterations | 500/1000 |
| Initial soil on each edge | 10000 |
| Initial velocity | 100 |
| Initial depth | Edge length / soil on that edge |
| Initial carrying soil | 1 |
| Velocity updating | α = 2 |
| Soil updating | PN = 0.99 |
| Initial temperature | 50; β = 10 |
| Maximum temperature | 100 |
water drop solution. Each water drop generates a solution for all the variables of the function. Therefore, the water drop solution is composed of a set of visited nodes determined by the number of variables and the precision of each variable.
$\mathrm{Solutions} = \begin{bmatrix} \mathrm{WD}_1 & s & 1 & 2 & \cdots & m_{N \times P} \\ \mathrm{WD}_2 & s & 1 & 2 & \cdots & m_{N \times P} \\ \mathrm{WD}_i & s & 1 & 2 & \cdots & m_{N \times P} \\ \vdots & & & & & \vdots \\ \mathrm{WD}_k & s & 1 & 2 & \cdots & m_{N \times P} \end{bmatrix}$ (26)
An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored by placing less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
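The flow of one water drop down the layers can be sketched as follows. This is a simplified illustration: the roulette-wheel weight 1/(1 + soil) is our assumption, whereas the paper's actual probability equations (given earlier) also involve path depth.

```python
import random

# Simplified sketch of the solution generator: a water drop picks one
# node (0-9) per layer until it reaches ground level. The weight
# 1/(1 + soil) is illustrative only; edges with less soil are more likely.
def generate_solution(soil, rng):
    """soil[layer][node]: soil on the edge leading to that node."""
    path = []
    for layer in soil:
        weights = [1.0 / (1.0 + s) for s in layer]
        r = rng.random() * sum(weights)
        for node, w in enumerate(weights):
            r -= w
            if r <= 0 or node == len(weights) - 1:
                path.append(node)
                break
    return path

rng = random.Random(42)
layers = [[10000.0] * 10 for _ in range(6)]  # one variable, precision 6
print(generate_solution(layers, rng))  # six node labels, one per layer
```

Combined with the decoding of (24) and the scaling of (25), one run of this loop per variable yields a complete candidate solution.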
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

$V_{\mathrm{new}} = 1 - (\beta \times V_{\mathrm{old}})$ (27)
where $\beta$ is a uniform random number in the range [0, 1], $V_{\mathrm{new}}$ is the new value after the mutation, and $V_{\mathrm{old}}$ is the original value. This mutation helps to flip the value of a variable: if the value is large, it becomes small, and vice versa.
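The mutation rule in (27) can be sketched as below. Applying it to each variable with probability 0.5 is our illustrative assumption; the paper only states that variables are selected randomly.

```python
import random

# Minimal sketch of Eq. (27): V_new = 1 - (beta * V_old), beta ~ U[0, 1],
# applied here to each variable with an assumed probability of 0.5.
def mutate(solution, rng):
    return [1 - rng.random() * v if rng.random() < 0.5 else v
            for v in solution]

rng = random.Random(0)
print(mutate([0.9, 0.1, 0.5], rng))  # mutated values stay within [0, 1]
```

Note that for variable values in [0, 1] (the pre-scaling representation), the result of (27) also stays in [0, 1], so no boundary repair is needed.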
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, which were chosen after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation; hence, setting the depth equal to the distance divided by the soil (see (8)) would not affect the outcome as intended.
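As a rough sketch of this bookkeeping (the data structure and function names are assumptions, not code from the paper), the depth value of an edge (x, y) can be tracked as the inverse of its selection count:

```python
from collections import Counter

# With unit distances between layers, the stored depth value of edge
# (x, y) is the inverse of how many times node y was selected after x.
selections = Counter()

def select_edge(x, y):
    selections[(x, y)] += 1

def depth(x, y):
    n = selections[(x, y)]
    return 1.0 / n if n else 1.0  # unvisited edges keep the unit value

select_edge(0, 3)
select_edge(0, 3)
print(depth(0, 3))  # 0.5
```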
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. Their mathematical formulations are listed in the Appendix. The functions cover a variety of properties and types; in this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum of 500 iterations for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The numbers in the column named "Number" in Table 4 map to the same column in Table 3. For each function, the worst, average, and best values are listed, along with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
$\mathrm{Success\ rate} = \left(\dfrac{\mathrm{Number\ of\ successful\ runs}}{\mathrm{Total\ number\ of\ runs}}\right) \times 100$ (28)
The solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "Best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
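The success-rate computation in (28), together with the 0.001 acceptance criterion, can be sketched as follows (the run values below are hypothetical):

```python
# Sketch of Eq. (28): a run succeeds if the obtained solution is
# within 0.001 of the known optimum value.
def success_rate(results, optimum, tol=0.001):
    successes = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * successes / len(results)

runs = [0.0000, 0.0004, 0.0120, 0.0002]  # hypothetical results of 4 runs
print(success_rate(runs, 0.0))  # 75.0
```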
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, and best values and the average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics.

| Number | Function name | Lower bound | Upper bound | D | f(x*) | Type |
| --- | --- | --- | --- | --- | --- | --- |
| (1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima |
| (2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima |
| (3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima |
| (4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima |
| (5) | Griewank | -600 | 600 | 2 | 0 | Many local minima |
| (6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima |
| (7) | Levy | -10 | 10 | 2 | 0 | Many local minima |
| (8) | Levy N. 13 | -10 | 10 | 2 | 0 | Many local minima |
| (9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima |
| (10) | Schaffer N. 2 | -100 | 100 | 2 | 0 | Many local minima |
| (11) | Schaffer N. 4 | -100 | 100 | 2 | 0.292579 | Many local minima |
| (12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima |
| (13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima |
| (14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped |
| (15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped |
| (16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped |
| (17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped |
| (18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped |
| (19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped |
| (20) | McCormick | [-1.5, 4] | [-3, 4] | 2 | -1.9133 | Plate-shaped |
| (21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped |
| (22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped |
| (23) | Six-Hump Camel | [-3, 3] | [-2, 2] | 2 | -1.0316 | Valley-shaped |
| (24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped |
| (25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped |
| (26) | Shekel's Foxholes (De Jong N. 5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops |
| (27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops |
| (28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops |
| (29) | Beale | -4.5 | 4.5 | 2 | 0 | Other |
| (30) | Branin | [-5, 10] | [0, 15] | 2 | 0.39788 | Other |
| (31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other |
| (32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other |
| (33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other |
| (34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other |
| (35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other |
| (36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other |
| (37) | Rosenbrock D12 | -1.2 | 1.2 | 2 | 0 | Other |
| (38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other |
| (39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other |
| (40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other |
| (41) | Powell | -4 | 5 | 4 | 0 | Other |
| (42) | Wood | -5 | 5 | 4 | 0 | Other |
| (43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other |
| (44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function |
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA).

| Number | HCA Worst | HCA Average | HCA Best | HCA Itr. | MHCA Worst | MHCA Average | MHCA Best | MHCA Itr. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35 |
| (2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1 |
| (3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4 |
| (4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95 |
| (5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8 |
| (6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75 |
| (7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8 |
| (8) | 1.63E-05 | 4.70E-06 | **1.35E-31** | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3 |
| (9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9 |
| (10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85 |
| (11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25 |
| (12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | **2.02E-04** | 323.75 |
| (13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45 |
| (14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25 |
| (15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291 |
| (16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6 |
| (17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6 |
| (18) | 2.03E-05 | 6.33E-06 | **0.00E+00** | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5 |
| (19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55 |
| (20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8 |
| (21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | **4.28E-07** | 242.05 |
| (22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9 |
| (23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4 |
| (24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | **4.10E-08** | 380.55 |
| (25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55 |
| (26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35 |
| (27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6 |
| (28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1 |
| (29) | 9.25E-05 | 5.25E-05 | **3.41E-06** | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75 |
| (30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9 |
| (31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8 |
| (32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95 |
| (33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341 |
| (34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65 |
| (35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1 |
| (36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8 |
| (37) | 9.22E-06 | 3.67E-06 | **6.63E-08** | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6 |
| (38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45 |
| (39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5 |
| (40) | 1.94E-01 | 7.02E-02 | **1.64E-03** | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806 |
| (41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | **1.96E-03** | 840.85 |
| (42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | **1.00E-08** | 694.4 |
| (43) | 1.05E+00 | 3.88E-01 | **1.47E-05** | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2 |
| (44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85 |
| Mean | | | | 378.45 | | | | 361.30 |
Table 5: Average execution time per number of variables.

| Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations) |
| --- | --- | --- | --- | --- |
| Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962 |
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, with the WCA results taken from [29]. The obtained results for both algorithms are displayed in Table 6, together with the average number of iterations for each.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA is also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44] and Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate, which was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed by applying the Wilcoxon test and T-test to the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than those of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations.

| Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA Itr. | HCA Worst | HCA Avg | HCA Best | HCA Itr. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| De Jong | 2 | ±2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8 |
| Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | **3.00E+00** | 345.8 |
| Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9 |
| Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 262.1 |
| Rosenbrock | 2 | ±1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | **6.63E-08** | 370.9 |
| Rosenbrock | 2 | ±10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | **0.00E+00** | 350.6 |
| Rosenbrock | 4 | ±1.2 | 0 | 7.98E-04 | 2.12E-04 | **3.23E-07** | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5 |
| Hypersphere | 6 | ±5.12 | 0 | 9.22E-03 | 6.01E-03 | **1.34E-07** | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879 |
| Shaffer | 2 | ±100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 8942 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 284.1 |
| Goldstein & Price I | 2 | ±5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8 |
| Goldstein & Price II | 2 | ±5 | 1 | 1.11291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5 |
| Six-Hump Camelback | 2 | ±10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 3105 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2 |
| Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6 |
| Wood | 4 | ±5 | 0 | 3.81E-05 | 1.58E-06 | **1.30E-10** | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754 |
| Powell quartic | 4 | ±5 | 0 | 2.87E-09 | 6.09E-10 | **1.12E-11** | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864 |
| Mean Itr. | | | | | | | 7000.6 | | | | 463.8 |
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

| Number | Function name | D | Bound | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8 |
| (2) | Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8 |
| (3) | Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9 |
| (4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05 |
| (5a) | Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9 |
| (5b) | Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55 |
| (6) | Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5 |
| (7) | Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879 |
| (8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1 |
| Mean | | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8 |
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

| Algorithm versus HCA | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of W) | Significant at P ≤ 0.05? |
| --- | --- | --- | --- | --- |
| CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No |
| ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes |
| GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes |
| BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes |
| GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No |
| WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No |
| ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No |
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

| Number | Function name | D | CGA | ECTS | HCA | MHCA |
| --- | --- | --- | --- | --- | --- | --- |
| (1) | Shubert | 2 | 575 | - | **368** | 391.45 |
| (2) | Bohachevsky | 2 | 370 | - | **264.9** | 288.25 |
| (3) | Zakharov | 2 | 620 | **195** | 328.15 | 242.05 |
| (4) | Rosenbrock | 2 | 960 | 480 | 350.55 | **282.55** |
| (5) | Easom | 2 | 1504 | 1284 | **354.6** | 370.35 |
| (6) | Branin | 2 | 620 | **245** | 379.9 | 393.75 |
| (7) | Goldstein-Price I | 2 | 410 | **231** | 345.8 | 409.9 |
| (8) | Hartmann | 3 | 582 | 548 | 392.05 | **327.5** |
| (9) | Sphere | 3 | 750 | 338 | 371.3 | **330.45** |
| Mean | | | 710.1 | 474.4 | 350.6 | 337.4 |
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

| Comparison | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of W) | Significant at P ≤ 0.05? |
| --- | --- | --- | --- | --- |
| CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes |
| CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes |
| ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No |
| ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No |
| HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No |
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables. The HCA minimized the function after 919 iterations.
As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function; convergence at 383 iterations).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One reason for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at 480 iterations).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at 475 iterations).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations for the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, where $a = 20$, $b = 0.2$, and $c = 2\pi$.

Cross-In-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$.

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$.

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$.

Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$.

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$.

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$, $\forall i = 1, \ldots, d$.

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$.

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$.

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$.

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$.

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$.

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$.

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$.

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$.

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$.

Sum Squares: $f(x) = \sum_{i=1}^{d} i\,x_i^2$.

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$.

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48\,x_1 x_2$.

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$.

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^4$.

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$.

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$.

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$.

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$.

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$.

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$.

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i\,x_i^2}{\pi}\right)$, where $m = 10$.

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$.

Branin: $f(x) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s$, where $a = 1$, $b = \frac{5.1}{4\pi^2}$, $c = \frac{5}{\pi}$, $r = 6$, $s = 10$, and $t = \frac{1}{8\pi}$.

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$.

Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$.

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$.

Gramacy & Lee (2008): $f(x) = x_1 \exp\left(-x_1^2 - x_2^2\right)$.

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$.

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$.

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}\left(x_j - P_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, and $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$.

Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$.

Wood: $f(x_1, x_2, x_3, x_4) = 100\left(x_1^2 - x_2\right)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$.

De Jong (maximization): $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1 - x_1)^2$.
Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best function value versus iteration number; convergence at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snášel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrøm and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm - a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd I*PROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method - a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
In summary, this paper aims to introduce a novel nature-inspired approach for dealing with COPs, one which also introduces a continuous number representation that is natural for the algorithm. The cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through direct and indirect communication between particles, feedback from the environment, and internal self-evaluation. Finally, our HCA addresses some of the weaknesses of other similar algorithms: HCA is inspired by the full water cycle, not parts of it as exemplified by IWD and WCA.
3 The Hydrological Cycle Algorithm
The water cycle, also known as the hydrologic cycle, describes the continuous movement of water in nature [38]. It is renewable because water particles are constantly circulating from the land to the sky and vice versa; therefore, the amount of water remains constant. Water drops move from one place to another by natural physical processes, such as evaporation, precipitation, condensation, and runoff. In these processes, the water passes through different states.
3.1. Inspiration. The water cycle starts when surface water gets heated by the sun, causing water from different sources to change into vapor and evaporate into the air. The water vapor then cools and condenses as it rises, becoming drops. These drops combine and collect with each other and eventually become so saturated and dense that they fall back under the force of gravity, in a process called precipitation. The resulting water flows on the surface of the Earth into ponds, lakes, or oceans, where it again evaporates back into the atmosphere. Then, the entire cycle is repeated.
In the flow (runoff) stage, the water drops start moving from one location to another based on Earth's gravity and the ground topography. When the water is scattered, it chooses a path with less soil and fewer obstacles. Assuming no obstacles exist, water drops will take the shortest path towards the center of the Earth (i.e., the oceans), owing to the force of gravity. Water drops have force because of this gravity, and this force causes changes in motion depending on the topology. As the water flows in rivers and streams, it picks up sediments and transports them away from their original location. These processes, known as erosion and deposition, help to create the topography of the ground by making new paths through greater soil removal, or by the fading of other paths as they become less likely to be chosen owing to too much soil deposition. These processes are highly dependent on water velocity. For instance, a river continually picks up and drops off sediments from one point to another: when the river flow is fast, soil particles are picked up, and the opposite happens when the flow is slow. Moreover, there is a strong relationship between the water velocity and the suspension load (the amount of dissolved soil the water currently carries): the velocity is higher when only a small amount of soil is being carried and lower when larger amounts are carried.
In addition, the term "flow rate" or "discharge" can be defined as the volume of water in a river passing a defined point over a specific period. Mathematically, the flow rate is calculated with the following equation [39]:

$$Q = V \times A \Longrightarrow Q = V \times (W \times D), \quad (3)$$
where $Q$ is the flow rate, $A$ is the cross-sectional area, $V$ is the velocity, $W$ is the width, and $D$ is the depth. The cross-sectional area is calculated by multiplying the depth by the width of the stream. The velocity of the flowing water is defined as the quantity of discharge divided by the cross-sectional area; therefore, there is an inverse relationship between the velocity and the cross-sectional area [40]. The velocity increases when the cross-sectional area is small, and vice versa. If we assume that the river has a constant flow rate (velocity × cross-sectional area = constant) and that the width of the river is fixed, then the velocity is inversely proportional to the depth of the river: the velocity decreases in deep water and increases in shallow water. Therefore, the amount of soil deposited increases as the velocity decreases. This explains why less soil is taken from deep water and more soil is taken from shallow water; a deep river will have water moving more slowly than a shallow river.
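The inverse relationship between velocity and depth under a constant flow rate can be illustrated numerically. The sketch below uses hypothetical values (the numbers and function name are ours, not from the paper) and holds $Q$ and $W$ fixed:

```python
def velocity(flow_rate: float, width: float, depth: float) -> float:
    """Velocity from Q = V * A with a rectangular cross-section A = W * D."""
    return flow_rate / (width * depth)

# With a constant flow rate and a fixed width, doubling the depth halves
# the velocity (hypothetical values, arbitrary units):
shallow = velocity(100.0, 5.0, depth=2.0)   # 10.0
deep = velocity(100.0, 5.0, depth=4.0)      # 5.0
```

This mirrors the argument in the text: slower (deeper) water deposits more soil, while faster (shallower) water erodes more.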
In addition to river depth and the force of gravity, other factors affect the flow velocity of water drops and cause them to accelerate or decelerate. These factors include obstructions, the amount of soil existing in the path, the topography of the land, variations in the channel cross-sectional area, and the amount of dissolved soil being carried (the suspension load).
Water evaporation increases with temperature, with maximum evaporation at 100°C (the boiling point). Conversely, condensation increases as the water vapor cools towards 0°C. The evaporation and condensation stages play important roles in the water cycle. Evaporation is necessary to ensure that the system remains in balance; in addition, the evaporation process helps to decrease the temperature and keeps the air moist. Condensation is a very important process, as it completes the water cycle. When water vapor condenses, it releases into the environment the same amount of thermal energy that was needed to make it a vapor. However, in nature it is difficult to measure the precise evaporation rate owing to the existence of different sources of evaporation; hence, estimation techniques are required [38].
The condensation process starts when the water vapor rises into the sky; then, as the temperature decreases in the higher layers of the atmosphere, the cooler particles have a greater chance to condense [41]. Figure 2(a) illustrates the situation where there are no attractions (or connections) between the particles at high temperature. As the temperature decreases, the particles collide and agglomerate to form small clusters, leading to clouds, as shown in Figure 2(b).
An interesting phenomenon, in which some of the large water drops may eliminate other smaller water drops, occurs during the condensation process as some of the water drops, known as collectors, fall from a higher layer to a lower layer in the atmosphere (in the cloud).

Figure 2: Illustration of the effect of temperature on water drops: (a) hot water; (b) cold water.

Figure 3: A collector water drop in action.

Figure 3 depicts the action of a water drop falling and colliding with smaller water drops; consequently, the water drop grows as a result of coalescence with the other water drops [42].
Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground. Of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect water drops flowing through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of key concepts taken from the full hydrological process.
Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: flow (runoff), evaporation, condensation, and precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

$$\text{HCA} = \{\text{WD}_1, \text{WD}_2, \text{WD}_3, \ldots, \text{WD}_n\}, \quad n \ge 1. \quad (4)$$

The characteristics associated with each water drop are as follows:

(i) $V^{\text{WD}}$: the velocity of the water drop.
(ii) $\text{Soil}^{\text{WD}}$: the amount of soil the water drop carries.
(iii) $\psi^{\text{WD}}$: the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph $G = (N, E)$, where $N$ is the set of nodes and $E$ is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph $G$ via the edges $E$ according to various rules, searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.
The initial amount of soil is the same for all edges. The depth measures the distance from the water surface to the riverbed, and it can be calculated by dividing the length of the edge by the amount of soil. Therefore, the depth varies and depends on the amount of soil on the path and on its length. The depth of a path increases when more soil is removed over the same length. A deep path can be interpreted as either the path being lengthy or there being less soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.
The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution for the problem by continuously moving between the nodes. Each iteration consists of specific steps (explained below). A cycle, on the other hand, represents a varied number of iterations. A cycle is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
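The cycle/iteration control flow described above can be outlined as follows. This is only a toy sketch: the solution construction and temperature update are stand-ins, and all names and constants are our assumptions, not the authors' implementation.

```python
import random

def hca_skeleton(num_drops=5, max_cycles=3, initial_temp=50.0, max_temp=100.0):
    """Toy outline of HCA control flow: an iteration lets every drop build
    one solution; a cycle spans iterations until the temperature threshold
    triggers the evaporation/condensation/precipitation stages."""
    rng = random.Random(1)
    best = float("inf")
    for _ in range(max_cycles):
        temperature = initial_temp
        while temperature < max_temp:          # one cycle = several iterations
            # Stand-in for solution construction: each drop yields a quality.
            qualities = [rng.uniform(0.0, 10.0) for _ in range(num_drops)]
            best = min(best, min(qualities))   # track the global-best solution
            temperature += 10.0                # stand-in temperature update
        # Here the real algorithm would evaporate a portion of the drops,
        # condense them (information sharing), and precipitate them again.
    return best
```

The point of the sketch is only the nesting: iterations inside a temperature-bounded cycle, cycles inside the overall run.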
3.3. Flow Stage (Runoff). This stage represents the construction stage, in which the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions, constructing various solutions. Each water drop constructs a
Figure 4: Depth increases as length increases for the same amount of soil (with Soil = 500: Length = 10 gives Depth = 10/500 = 0.02, while Length = 20 gives Depth = 20/500 = 0.04).
Figure 5: Depth increases when the amount of soil is reduced over the same length (with Length = 10: Soil = 1000 gives Depth = 10/1000 = 0.01, while Soil = 500 gives Depth = 10/500 = 0.02).
solution incrementally by moving from one point to another. Whenever a water drop wants to move towards a new node, it has to choose between a number of candidate nodes (various branches). It calculates the probability of all the unvisited nodes and chooses the node with the highest probability, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit. In HCA, the node with the highest probability is selected. The probability of the node is calculated using

$$P_i^{\text{WD}}(j) = \frac{f(\text{Soil}(i,j))^2 \times g(\text{Depth}(i,j))}{\sum_{k \notin vc(\text{WD})} \left( f(\text{Soil}(i,k))^2 \times g(\text{Depth}(i,k)) \right)}, \quad (5)$$

where $P_i^{\text{WD}}(j)$ is the probability of choosing node $j$ from node $i$, $vc(\text{WD})$ is the set of nodes already visited by the water drop, and $f(\text{Soil}(i,j))$ is the inverse of the soil between $i$ and $j$, calculated using

$$f(\text{Soil}(i,j)) = \frac{1}{\varepsilon + \text{Soil}(i,j)}. \quad (6)$$

Here $\varepsilon = 0.01$ is a small value used to prevent division by zero. The second factor of the transition rule is the inverse of the depth, calculated as

$$g(\text{Depth}(i,j)) = \frac{1}{\text{Depth}(i,j)}. \quad (7)$$

$\text{Depth}(i,j)$ is the depth between nodes $i$ and $j$, calculated by dividing the length of the path by the amount of soil. This can be expressed as follows:

$$\text{Depth}(i,j) = \frac{\text{Length}(i,j)}{\text{Soil}(i,j)}. \quad (8)$$
The depth value can be normalized to lie in the range [1–100], as it might otherwise be very small. The depth of a path is constantly changing because it is influenced by the amount of existing soil: for a path of fixed length, the depth is inversely proportional to the amount of soil on it. Water drops will choose paths with less depth over other paths. This enables exploration of new paths and diversifies the generated solutions. Consider the following scenario (depicted in Figure 6), where water drops have to choose between node A or B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents the soil amount), so the depth values of the two edges are the same. After some water drops choose node A, edge (S-A) has less soil and is deeper. According to the new state transition rule, subsequent water drops will be forced to choose node B, because edge (S-B) will be shallower than (S-A). This technique provides a way to avoid stagnation and explore the search space efficiently.

Figure 6: The relationship between soil and depth over the same length.
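A minimal sketch of the state transition rule in (5)–(8) is given below. The dictionary-based edge representation, the function names, and the numeric values are our assumptions for illustration, not the paper's implementation.

```python
def depth(length: float, soil: float) -> float:
    """Depth(i, j) = Length(i, j) / Soil(i, j), as in eq. (8)."""
    return length / soil

def select_next(current, unvisited, soil, length, eps=0.01):
    """Score each candidate edge by f(Soil)^2 * g(Depth) as in eq. (5) and
    return the argmax node together with the normalized probabilities."""
    def weight(j):
        s = soil[(current, j)]
        f = 1.0 / (eps + s)                        # eq. (6): inverse of soil
        g = 1.0 / depth(length[(current, j)], s)   # eq. (7): inverse of depth
        return (f ** 2) * g
    weights = {j: weight(j) for j in unvisited}
    total = sum(weights.values())                  # denominator of eq. (5)
    probs = {j: w / total for j, w in weights.items()}
    return max(probs, key=probs.get), probs

# Toy two-edge instance (values are ours, for illustration only):
soil = {("S", "A"): 250.0, ("S", "B"): 500.0}
length = {("S", "A"): 10.0, ("S", "B"): 10.0}
node, probs = select_next("S", ["A", "B"], soil, length)
```

Note that the squared soil factor dominates the depth factor in this scoring, so edges with less soil still receive the higher score here.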
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity ($V$). The velocity of a water drop might increase or decrease while it is moving, based on the factors mentioned above (the amount of soil on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time $(t+1)$ can be calculated using

$$V_{t+1}^{\text{WD}} = \left[K \times V_t^{\text{WD}}\right] + \alpha\left(\frac{V^{\text{WD}}}{\text{Soil}(i,j)}\right) + \sqrt{\frac{V^{\text{WD}}}{\text{Soil}^{\text{WD}}}} + \left(\frac{100}{\psi^{\text{WD}}}\right) + \sqrt{\frac{V^{\text{WD}}}{\text{Depth}(i,j)}}. \quad (9)$$
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where $V_t^{\text{WD}}$ is the current water drop velocity and $K$ is a uniformly distributed random number in $[0, 1]$ that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop velocity to decrease; owing to difficulties in measuring the exact effect of these factors, some randomness is incorporated into the velocity.

The second term of the expression in (9) reflects the relationship between the amount of soil on a path and the velocity of a water drop: the velocity of the water drops increases when they traverse paths with less soil. Alpha ($\alpha$) is a relative influence coefficient that emphasizes this part of the velocity update equation. The third term represents the relationship between the amount of soil that a water drop is currently carrying and its velocity: velocity increases as the amount of soil being carried decreases. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term indicates that a water drop will be slower on a deep path than on a shallow one.
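The term-by-term behavior of (9) can be sketched as below. The value $\alpha = 0.5$ and drawing the roughness coefficient $K$ uniformly from [0, 1] via `random.random()` are our assumptions for this sketch.

```python
import math
import random

def update_velocity(v, soil_ij, depth_ij, soil_wd, psi, alpha=0.5, k=None):
    """Velocity update following eq. (9); k is the roughness coefficient K
    (drawn uniformly from [0, 1] when not supplied)."""
    if k is None:
        k = random.random()               # roughness coefficient K in [0, 1]
    return (k * v                         # randomly dampened current velocity
            + alpha * (v / soil_ij)       # less soil on the path -> faster
            + math.sqrt(v / soil_wd)      # lighter carried load -> faster
            + 100.0 / psi                 # fitness term of eq. (9): 100 / psi
            + math.sqrt(v / depth_ij))    # deeper path -> slower

# With k fixed at 1.0: 4 + 0.5*(4/2) + sqrt(4/4) + 100/100 + sqrt(4/4) = 8.0
v_new = update_velocity(4.0, soil_ij=2.0, depth_ij=4.0, soil_wd=4.0,
                        psi=100.0, k=1.0)
```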
3.3.2. Soil Update. Next, the amount of soil on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by

$$\text{Soil}(i,j) = \begin{cases} \left[PN \times \text{Soil}(i,j)\right] - \Delta\text{Soil}(i,j) - \sqrt{\dfrac{1}{\text{Depth}(i,j)}}, & \text{if } V^{\text{WD}} \ge \text{Avg}(\text{all } V^{\text{WD}}\text{s}) \text{ (erosion)}, \\ \left[PN \times \text{Soil}(i,j)\right] + \Delta\text{Soil}(i,j) + \sqrt{\dfrac{1}{\text{Depth}(i,j)}}, & \text{otherwise (deposition)}. \end{cases} \quad (10)$$
PN represents a coefficient (i.e., sediment transport rate or gradation coefficient) that may affect the reduction in the amount of soil. The soil can be removed only if the current water drop velocity is greater than the average of all other water drops; otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, and usually transfers it from fast to slow parts of the path. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in local optimal solutions. The amount of soil existing between node i and node j is calculated and changed using

ΔSoil(i, j) = 1 / time^WD(i, j)   (11)
such that
time^WD(i, j) = Distance(i, j) / V^WD(t+1)   (12)
Equation (11) shows that the amount of soil being removed is inversely proportional to time. Since time equals distance divided by velocity, the amount of soil removed is proportional to the water drop's velocity and inversely proportional to the path length. Furthermore, the amount of soil being removed or added is inversely proportional to the depth of the path. This is because shallow soils are easier to remove than deep soils; therefore, the amount of soil being removed decreases as the path depth increases. This factor facilitates exploration of new paths because less soil is removed from the deep paths, as they have been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path needs to be updated when the amount of soil existing on the path changes, using (8).
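As a concrete sketch, the erosion/deposition rule of (10)-(12) can be written as below. The function names, the flat argument layout, and recomputing the depth as distance over soil (the paper's Eq. (8)) are illustrative assumptions, not the authors' reference implementation:

```python
import math

def delta_soil(distance, velocity):
    # Eq. (12): time = distance / velocity; Eq. (11): ΔSoil = 1 / time
    return velocity / distance

def update_soil(soil, depth, distance, velocity, avg_velocity, PN=0.99):
    """Eq. (10): erode the path if the drop is faster than the average of
    all drops, otherwise deposit soil on it. PN = 0.99 follows Table 2.
    Assumes soil stays positive so the depth recomputation is valid."""
    d = delta_soil(distance, velocity)
    term = 2.0 * math.sqrt(1.0 / depth)
    if velocity >= avg_velocity:            # erosion: soil decreases
        soil = PN * soil - d - term
    else:                                   # deposition: soil increases
        soil = PN * soil + d + term
    depth = distance / soil                 # Eq. (8): depth tracks soil
    return soil, depth
```

A fast drop on a shallow path thus deepens it, while a slow drop fills it back in, which is the mechanism the text describes for spreading drops over different paths.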
3.3.3. Soil Transportation Update. In the HCA, we consider the fact that the amount of soil a water drop carries reflects its solution quality. This is done by associating the quality of the water drop's solution with its carrying soil value. Figure 7 illustrates the fact that a water drop with more carrying soil has a better solution.
To simulate this association, the amount of soil that the water drop is carrying is updated with a portion of the soil being removed from the path, divided by the last solution quality found by that water drop. Therefore, the water drop with a better solution will carry more soil. This can be expressed as follows:
Soil^WD = Soil^WD + ΔSoil(i, j) / ψ^WD   (13)
The idea behind this step is to let a water drop preserve its previous fitness value. Later, the carrying soil value is used in updating the velocity of the water drops.
3.3.4. Temperature Update. The final step that occurs at this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:
Temp(t + 1) = Temp(t) + ΔTemp   (14)
where
ΔTemp = β × (Temp(t) / ΔD),   if ΔD > 0
ΔTemp = Temp(t) / 10,   otherwise   (15)
10 Journal of Optimization
Figure 7: Relationship between the amount of carrying soil and its quality.
and where coefficient β is determined based on the problem. The difference (ΔD) is calculated based on

ΔD = MaxValue − MinValue   (16)
such that
MaxValue = max[Solutions Quality(WDs)]
MinValue = min[Solutions Quality(WDs)]   (17)
According to (16), the increase in temperature is affected by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and as a result the temperature will increase more. This means that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a few times until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
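The temperature update of (14)-(17) can be sketched as a single function. Treating the solution qualities as a plain list and the function signature are assumptions; β = 10 follows Table 2:

```python
def update_temperature(temp, qualities, beta=10.0):
    """Eqs. (14)-(17): when the spread of solution qualities (ΔD) is
    large, the temperature rises only slightly; when the population has
    stagnated (ΔD = 0), it rises by a fixed fraction, pushing the
    algorithm toward the evaporation stage."""
    delta_d = max(qualities) - min(qualities)   # Eq. (16)
    if delta_d > 0:
        delta_t = beta * (temp / delta_d)       # Eq. (15), first branch
    else:
        delta_t = temp / 10.0                   # Eq. (15), second branch
    return temp + delta_t                       # Eq. (14)
```

Note how a narrow quality spread produces a larger temperature jump, which is exactly the "no improvement, so evaporate sooner" behavior described above.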
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:
ER = Random(1, N)   (18)
where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
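A minimal sketch of this selection step follows. It assumes that larger quality values are better (for a minimization problem the qualities would first be inverted) and that qualities are strictly positive; the function name is illustrative:

```python
import random

def select_evaporating_drops(qualities, rng=random):
    """Eq. (18) picks a random evaporation rate ER in [1, N]; then ER
    distinct drops are chosen by roulette-wheel (fitness-proportionate)
    selection over solution quality."""
    n = len(qualities)
    er = rng.randint(1, n)                  # Eq. (18): ER = Random(1, N)
    total = sum(qualities)
    selected = set()
    while len(selected) < er:
        r = rng.uniform(0, total)           # spin the wheel
        acc = 0.0
        for i, q in enumerate(qualities):
            acc += q
            if r <= acc:
                selected.add(i)
                break
    return sorted(selected)
```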
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off. These operations are useful as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior between all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been proven to improve solution quality significantly when applied by other algorithms [37].
The condensation stage is considered to be a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that this will reduce the number of iterations needed and help to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (number of evaporated water drops):
EW = {WD1, WD2, ..., WDn}   (19)
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, determining which operation takes place is based on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process occurs: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:
OP(WD1, WD2) = Bounce(WD1, WD2),   if Similarity < 50%
OP(WD1, WD2) = Merge(WD1, WD2),   if Similarity ≥ 50%   (20)
We use the Hamming distance [43], which counts the number of positions at which two series of the same length differ, to obtain a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other one in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):
WD1_Vel = WD1_Vel + WD2_Vel   (21)
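The similarity test and the two collision outcomes of (20)-(21) can be sketched as follows; representing a water drop as a dictionary with "solution" and "velocity" fields is an assumption made for illustration:

```python
def hamming_similarity(sol_a, sol_b):
    """Fraction of positions at which two equal-length solutions agree,
    i.e. 1 minus the normalized Hamming distance [43]."""
    matches = sum(a == b for a, b in zip(sol_a, sol_b))
    return matches / len(sol_a)

def collide(wd1, wd2):
    """Eq. (20): merge when solutions are at least 50% similar; the
    collector absorbs the other drop's velocity (Eq. (21)). Otherwise
    the drops bounce off and only exchange node information."""
    if hamming_similarity(wd1["solution"], wd2["solution"]) >= 0.5:
        wd1["velocity"] += wd2["velocity"]      # Eq. (21): coalescence
        return "merge", wd1
    return "bounce", None
```

Using the worked example from the text, solutions 12468 and 13458 differ in two of five positions, giving similarity 0.6 and hence a merge.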
On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them to choose a node that has a better contribution from all the possible nodes at the flow stage. The information consists of the weight of each node; the weight measures a node's occurrence within a solution over the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point. Through this exchange, the water drops will favor these nodes in the next cycle. Mathematically, the weight can be calculated as follows:

Weight(node) = Node occurrence / WD solution quality   (22)
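A sketch of how the shared node weights of (22) might be accumulated across the evaporated drops follows. The (solution, quality) tuple representation is an assumption; note that dividing by quality means that, for a minimization problem, nodes from lower-cost (better) solutions receive larger weights:

```python
def node_weights(water_drops):
    """Eq. (22) accumulated over drops: every occurrence of a node in a
    solution contributes 1 / (that solution's quality) to its weight."""
    weights = {}
    for solution, quality in water_drops:
        for node in solution:
            weights[node] = weights.get(node, 0.0) + 1.0 / quality
    return weights
```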
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:
Soil(i, j) = 0.9 × Soil(i, j),   ∀(i, j) ∈ BestWD   (23)
The idea behind this is to favor these edges over the other edges in the next cycle.
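The reinforcement rule of (23) is a one-liner over the best drop's edges; storing soil in a dictionary keyed by (i, j) edge tuples is an assumption:

```python
def reinforce_best_path(soil, best_solution_edges):
    """Eq. (23): after reinitialization, cut the soil on every edge of
    the best water drop's solution by 10%, biasing the next cycle's
    flow stage toward those edges."""
    for edge in best_solution_edges:
        soil[edge] = 0.9 * soil[edge]
    return soil
```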
3.7. Summarization of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:
(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, to reduce the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.
(2) The flow of the water drops is controlled by path depth and soil amount heuristics (indirect communication). The path depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This idea supports the competition between water drops and prevents one drop from sweeping aside the rest of the drops, because a high-velocity water
HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
        Cycle = Cycle + 1
        // Flow stage
        Repeat
            (a) Iteration = Iteration + 1
            (b) For each water drop
                (1) Choose next node (Eqs. (5)–(8))
                (2) Update velocity (Eq. (9))
                (3) Update soil and depth (Eqs. (10)-(11))
                (4) Update carrying soil (Eq. (13))
            (c) End for
            (d) Calculate the solution fitness
            (e) Update local optimal solution
            (f) Update temperature (Eqs. (14)–(17))
        Until (Temperature = Evaporation temperature)
        Evaporation (WDs)
        Condensation (WDs)
        Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.
(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).
(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique that helps to perform exploitation of the good solutions. Further, this stage can be used to improve the quality of the obtained solutions and to reduce the number of iterations.
(6) The evaporation process is a selection technique that helps the algorithm to escape from local optima solutions. Evaporation happens when there are no improvements in the quality of the obtained solutions.
(7) The precipitation stage is used to reinitialize variables and redistribute WDs in the search space to generate new solutions.
Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help to strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify evaporation rate (Eq. (18))
    Select WDs to evaporate
End
Condensation procedure (WDs)
    Information exchange (WDs) (Eqs. (20)–(22))
    Identify the collector (WD)
    Update global solution
    Update the temperature
End
Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (belonging to best WDs) (Eq. (23))
    Generate a new WDs population
    Distribute the WDs randomly
End

Pseudocode 2: The evaporation, condensation, and precipitation pseudocode.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry/drop soil that changes the topography of the ground by making some paths deeper or fading others, which represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is considered a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value; the temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that there is no consideration of the condensation stage in WCA, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and solution construction differ in both algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, whereas choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm to improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA, the carrying soil is encoded with the solution quality; more carrying soil indicates a better solution. Finally, in HCA, three critical stages (i.e., evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer, there are ten nodes, labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done easily using
X_value = Σ (L = 1 to m) n × 10^(−L)   (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

| Criteria | IWD | WCA | HCA |
|---|---|---|---|
| Flow stage | Moving water drops | Changing streams' positions | Moving water drops |
| Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth |
| Velocity | Always increases | NA | Increases and decreases |
| Soil update | Removal | NA | Removal and deposition |
| Carrying soil | Equal to the amount of soil removed from a path | NA | Encoded with the solution quality |
| Evaporation | NA | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvement in the last set of iterations |
| Evaporation rate | NA | If the distance between a river and the sea is less than the threshold | Random number |
| Condensation | NA | NA | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector |
| Precipitation | NA | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil |
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in search space.
4.2. Variables Domain and Precision. As stated above, a function has at least one variable or up to n variables. The variables' values should be within the function domain, which defines the set of possible values for a variable. In some functions, all the variables may have the same domain range, whereas in others, variables may have different ranges. For simplicity, the values obtained by a water drop for each variable can be scaled to values based on the domain of each variable:

V_value = (UB − LB) × Value + LB   (25)
where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [−10, 10], then the real value will be equal to 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be identified. This precision identifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
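The decoding of (24) and the scaling of (25) can be sketched directly, reproducing the worked examples from the text; the function names are assumptions:

```python
def decode_variable(nodes):
    """Eq. (24): a water drop's visited nodes (one digit per layer)
    become a value in [0, 1): sum of n * 10^(-L) over layers L = 1..m."""
    return sum(n * 10 ** -(L + 1) for L, n in enumerate(nodes))

def scale_to_domain(value, lb, ub):
    """Eq. (25): map the [0, 1) value onto the variable's domain."""
    return (ub - lb) * value + lb

# Worked examples from the text:
x = decode_variable([2, 7, 5, 6, 2, 4])   # ≈ 0.275624
v = scale_to_domain(0.658752, -10, 10)    # ≈ 3.17504
```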
4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables.
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

| Parameter | Value |
|---|---|
| Number of water drops | 10 |
| Maximum number of iterations | 500/1000 |
| Initial soil on each edge | 10000 |
| Initial velocity | 100 |
| Initial depth | Edge length / soil on that edge |
| Initial carrying soil | 1 |
| Velocity updating | α = 2 |
| Soil updating | PN = 0.99 |
| Initial temperature | 50, β = 10 |
| Maximum temperature | 100 |
water drop solution. Each water drop generates a solution for all the variables of the function. Therefore, the water drop solution is composed of a set of visited nodes, based on the number of variables and the precision of each variable:
Solutions = [ WD1_s: 1 2 ⋯ m(N×P)
              WD2_s: 1 2 ⋯ m(N×P)
              WDi_s: 1 2 ⋯ m(N×P)
              ⋮
              WDk_s: 1 2 ⋯ m(N×P) ]   (26)
An initial solution is generated randomly for each water drop, based on the function's dimensionality. Then, these solutions are evaluated and the best combination is selected. The selected solution is favored by placing less soil on the edges belonging to this solution. Subsequently, the water drops start flowing and generating solutions.
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops, and only to randomly selected variables from a water drop's solution. For real numbers, a mutation operation can be implemented as follows:

V_new = 1 − (β × V_old)   (27)

where β is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
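Equation (27) translates almost literally; the function signature and the use of Python's random source are assumptions:

```python
import random

def mutate(v_old, rng=random):
    """Eq. (27): V_new = 1 - (beta * V_old) with beta uniform in [0, 1].
    For v_old in [0, 1], the result stays in [1 - v_old, 1], so large
    values tend to flip small and vice versa."""
    beta = rng.random()
    return 1.0 - beta * v_old
```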
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.
The distance between the nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between the nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) will deepen with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, considering the depth to equal the distance over the soil (see (8)) will not affect the outcome, as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and different types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times, with a maximum number of iterations of 500 for all functions except those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, together with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
Success rate = (Number of successful runs / Total number of runs) × 100   (28)
The solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved 100% for all of the functions except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface text in the "best" columns represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
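Equation (28), with the 0.001 acceptance threshold used in the experiments, can be sketched as:

```python
def success_rate(results, optimum, tol=0.001):
    """Eq. (28): a run succeeds when its result is within `tol` of the
    optimum; the rate is successes over total runs, as a percentage."""
    successes = sum(abs(r - optimum) < tol for r in results)
    return 100.0 * successes / len(results)
```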
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, best, and average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics.

| Number | Function name | Lower bound | Upper bound | D | f(x*) | Type |
|---|---|---|---|---|---|---|
| (1) | Ackley | −32.768 | 32.768 | 2 | 0 | Many local minima |
| (2) | Cross-in-Tray | −10 | 10 | 2 | −2.06261 | Many local minima |
| (3) | Drop-Wave | −5.12 | 5.12 | 2 | −1 | Many local minima |
| (4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | −0.86901 | Many local minima |
| (5) | Griewank | −600 | 600 | 2 | 0 | Many local minima |
| (6) | Holder Table | −10 | 10 | 2 | −19.2085 | Many local minima |
| (7) | Levy | −10 | 10 | 2 | 0 | Many local minima |
| (8) | Levy N.13 | −10 | 10 | 2 | 0 | Many local minima |
| (9) | Rastrigin | −5.12 | 5.12 | 2 | 0 | Many local minima |
| (10) | Schaffer N.2 | −100 | 100 | 2 | 0 | Many local minima |
| (11) | Schaffer N.4 | −100 | 100 | 2 | 0.292579 | Many local minima |
| (12) | Schwefel | −500 | 500 | 2 | 0 | Many local minima |
| (13) | Shubert | −5.12 | 5.12 | 2 | −186.7309 | Many local minima |
| (14) | Bohachevsky | −100 | 100 | 2 | 0 | Bowl-shaped |
| (15) | Rotated Hyper-Ellipsoid | −65.536 | 65.536 | 2 | 0 | Bowl-shaped |
| (16) | Sphere (Hyper) | −5.12 | 5.12 | 2 | 0 | Bowl-shaped |
| (17) | Sum Squares | −10 | 10 | 2 | 0 | Bowl-shaped |
| (18) | Booth | −10 | 10 | 2 | 0 | Plate-shaped |
| (19) | Matyas | −10 | 10 | 2 | 0 | Plate-shaped |
| (20) | McCormick | [−1.5, 4] | [−3, 4] | 2 | −1.9133 | Plate-shaped |
| (21) | Zakharov | −5 | 10 | 2 | 0 | Plate-shaped |
| (22) | Three-Hump Camel | −5 | 5 | 2 | 0 | Valley-shaped |
| (23) | Six-Hump Camel | [−3, 3] | [−2, 2] | 2 | −1.0316 | Valley-shaped |
| (24) | Dixon-Price | −10 | 10 | 2 | 0 | Valley-shaped |
| (25) | Rosenbrock | −5 | 10 | 2 | 0 | Valley-shaped |
| (26) | Shekel's Foxholes (De Jong N.5) | −65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops |
| (27) | Easom | −100 | 100 | 2 | −1 | Steep ridges/drops |
| (28) | Michalewicz | 0 | 3.14 | 2 | −1.8013 | Steep ridges/drops |
| (29) | Beale | −4.5 | 4.5 | 2 | 0 | Other |
| (30) | Branin | [−5, 10] | [0, 15] | 2 | 0.39788 | Other |
| (31) | Goldstein-Price I | −2 | 2 | 2 | 3 | Other |
| (32) | Goldstein-Price II | −5 | 5 | 2 | 1 | Other |
| (33) | Styblinski-Tang | −5 | 5 | 2 | −78.33198 | Other |
| (34) | Gramacy & Lee (2008) | −2 | 6 | 2 | −0.4288819 | Other |
| (35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other |
| (36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other |
| (37) | Rosenbrock D12 | −1.2 | 1.2 | 2 | 0 | Other |
| (38) | Sphere | −5.12 | 5.12 | 3 | 0 | Other |
| (39) | Hartmann 3-D | 0 | 1 | 3 | −3.86278 | Other |
| (40) | Rosenbrock | −1.2 | 1.2 | 4 | 0 | Other |
| (41) | Powell | −4 | 5 | 4 | 0 | Other |
| (42) | Wood | −5 | 5 | 4 | 0 | Other |
| (43) | Sphere | −5.12 | 5.12 | 6 | 0 | Other |
| (44) | De Jong | −2.048 | 2.048 | 2 | 3905.93 | Maximize function |
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA).

| Number | HCA Worst | HCA Average | HCA Best | HCA Avg. iterations | MHCA Worst | MHCA Average | MHCA Best | MHCA Avg. iterations |
|---|---|---|---|---|---|---|---|---|
| (1) | 8.88E−16 | 8.88E−16 | 8.88E−16 | 220.2 | 8.88E−16 | 8.88E−16 | 8.88E−16 | 279.35 |
| (2) | −2.06E+00 | −2.06E+00 | −2.06E+00 | 362 | −2.06E+00 | −2.06E+00 | −2.06E+00 | 196.1 |
| (3) | −1.00E+00 | −1.00E+00 | −1.00E+00 | 330.8 | −1.00E+00 | −1.00E+00 | −1.00E+00 | 220.4 |
| (4) | −8.69E−01 | −8.69E−01 | −8.69E−01 | 297.65 | −8.69E−01 | −8.69E−01 | −8.69E−01 | 345.95 |
| (5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8 |
| (6) | −1.92E+01 | −1.92E+01 | −1.92E+01 | 321.7 | −1.92E+01 | −1.92E+01 | −1.92E+01 | 302.75 |
| (7) | 4.23E−07 | 1.31E−07 | 1.50E−32 | 399.5 | 3.60E−07 | 8.65E−08 | 1.50E−32 | 394.8 |
| (8) | 1.63E−05 | 4.70E−06 | **1.35E−31** | 399.45 | 9.97E−06 | 3.06E−06 | 1.60E−09 | 366.3 |
| (9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9 |
| (10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85 |
| (11) | 2.93E−01 | 2.93E−01 | 2.93E−01 | 358.6 | 2.93E−01 | 2.93E−01 | 2.93E−01 | 336.25 |
| (12) | 6.82E−04 | 4.19E−04 | 2.08E−04 | 337.6 | 1.28E−03 | 6.42E−04 | **2.02E−04** | 323.75 |
| (13) | −1.87E+02 | −1.87E+02 | −1.87E+02 | 368 | −1.87E+02 | −1.87E+02 | −1.87E+02 | 391.45 |
| (14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25 |
| (15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291 |
| (16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6 |
| (17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6 |
| (18) | 2.03E−05 | 6.33E−06 | **0.00E+00** | 433.5 | 1.97E−05 | 5.78E−06 | 2.96E−08 | 363.5 |
| (19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55 |
| (20) | −1.91E+00 | −1.91E+00 | −1.91E+00 | 282.65 | −1.91E+00 | −1.91E+00 | −1.91E+00 | 225.8 |
| (21) | 1.12E−04 | 5.07E−05 | 4.73E−06 | 328.15 | 3.94E−04 | 1.30E−04 | **4.28E−07** | 242.05 |
| (22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9 |
| (23) | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.15 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 332.4 |
| (24) | 3.08E−04 | 9.69E−05 | 3.89E−06 | 394.25 | 5.46E−04 | 2.67E−04 | **4.10E−08** | 380.55 |
| (25) | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.55 | 2.25E−10 | 3.21E−11 | 0.00E+00 | 282.55 |
| (26) | 9.98E−01 | 9.98E−01 | 9.98E−01 | 354.6 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 370.35 |
| (27) | −9.97E−01 | −9.98E−01 | −1.00E+00 | 354.15 | −9.97E−01 | −9.99E−01 | −1.00E+00 | 344.6 |
| (28) | −1.80E+00 | −1.80E+00 | −1.80E+00 | 428 | −1.80E+00 | −1.80E+00 | −1.80E+00 | 398.1 |
| (29) | 9.25E−05 | 5.25E−05 | **3.41E−06** | 379.9 | 1.33E−04 | 7.37E−05 | 2.56E−05 | 393.75 |
| (30) | 3.98E−01 | 3.98E−01 | 3.98E−01 | 345.8 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 409.9 |
| (31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8 |
| (32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95 |
| (33) | −7.83E+01 | −7.83E+01 | −7.83E+01 | 351 | −7.83E+01 | −7.83E+01 | −7.83E+01 | 341 |
| (34) | −4.29E−01 | −4.29E−01 | −4.29E−01 | 262.05 | −4.29E−01 | −4.29E−01 | −4.29E−01 | 286.65 |
| (35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1 |
| (36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8 |
| (37) | 9.22E−06 | 3.67E−06 | **6.63E−08** | 382.2 | 4.66E−06 | 2.60E−06 | 2.79E−07 | 351.6 |
| (38) | 4.43E−07 | 8.27E−08 | 0.00E+00 | 371.3 | 2.13E−08 | 4.50E−09 | 0.00E+00 | 330.45 |
| (39) | −3.86E+00 | −3.86E+00 | −3.86E+00 | 392.05 | −3.86E+00 | −3.86E+00 | −3.86E+00 | 327.5 |
| (40) | 1.94E−01 | 7.02E−02 | **1.64E−03** | 816.5 | 8.75E−02 | 3.04E−02 | 2.79E−03 | 806 |
| (41) | 2.42E−01 | 9.13E−02 | 3.91E−03 | 863.95 | 1.12E−01 | 4.26E−02 | **1.96E−03** | 840.85 |
| (42) | 1.12E+00 | 2.91E−01 | 7.62E−08 | 753.95 | 2.67E−01 | 6.10E−02 | **1.00E−08** | 694.4 |
| (43) | 1.05E+00 | 3.88E−01 | **1.47E−05** | 879 | 8.96E−01 | 3.10E−01 | 6.51E−03 | 505.2 |
| (44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85 |
| Mean | | | | 378.45 | | | | 361.30 |
Journal of Optimization 17
Table 5: Average execution time per number of variables (Hypersphere function).
Number of variables | 2 (500 iterations) | 3 (500 iterations) | 4 (1000 iterations) | 6 (1000 iterations)
Average execution time (seconds) | 28.186 | 40.832 | 107.16 | 159.62
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 61%.
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
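To make the contrast concrete, the following is a minimal, hypothetical sketch (not part of HCA itself) of the boundary-reset step that position-based metaheuristics must perform whenever an entity leaves the feasible region:

```python
def reset_out_of_bounds(position, lower, upper):
    """Clamp a candidate solution back into the feasible box.

    Position-based algorithms (e.g., PSO-style methods) need a step like
    this whenever an entity wanders outside the function domain; HCA's
    graph-based representation avoids the issue by construction, since
    water drops only ever move between valid discretized values.
    """
    return [min(max(v, lower), upper) for v in position]

# The second coordinate lies outside the domain [-5.12, 5.12].
print(reset_out_of_bounds([0.3, -7.5], -5.12, 5.12))  # [0.3, -5.12]
```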
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred because it may lead to trapping in local optima.
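These criteria can be computed directly from a set of run records; a small illustrative helper, using hypothetical run data, might look like this:

```python
# Each record is (reached_global_optimum, iterations_used) for one run.
runs = [(True, 340), (True, 295), (False, 500), (True, 361)]

def success_rate(runs):
    """Percentage of runs that reached the global optimum."""
    return 100.0 * sum(ok for ok, _ in runs) / len(runs)

def avg_iterations(runs):
    """Average number of iterations over the successful runs only."""
    converged = [it for ok, it in runs if ok]
    return sum(converged) / len(converged)

print(success_rate(runs))    # 75.0
print(avg_iterations(runs))  # 332.0
```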
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, which also reports the average number of iterations for each function.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
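A paired comparison of this kind can be sketched with SciPy's Wilcoxon signed-rank test. The snippet below pairs per-function average iteration counts for WCA and HCA as decoded from Table 6 (scipy is assumed to be available; this illustrates the statistical procedure rather than reproducing the paper's exact P value, which was obtained with a T-test):

```python
from scipy.stats import wilcoxon

# Average iterations per benchmark function, paired by function (Table 6).
wca_iters = [684, 980, 377, 57, 174, 623, 266, 101, 8942,
             2400, 47500, 3105, 650, 15650, 23500]
hca_iters = [314.8, 345.8, 379.9, 262.1, 370.9, 350.6, 816.5, 879.0,
             284.1, 345.8, 255.5, 348.2, 385.6, 754.0, 864.0]

# Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(wca_iters, hca_iters)
print(stat, p)
```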
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows that there is a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations.
Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA Iter. | HCA Worst | HCA Avg | HCA Best | HCA Iter.
De Jong | 2 | ±2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 379.9
Martin & Gaddy | 2 | [0,10] | 0 | 9.29E−04 | 4.16E−04 | 5.01E−06 | 57 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E−02 | 1.34E−03 | 2.81E−05 | 174 | 9.22E−06 | 3.67E−06 | 6.63E−08 | 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E−04 | 4.32E−04 | 1.12E−06 | 623 | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E−04 | 2.12E−04 | 3.23E−07 | 266 | 1.94E−01 | 7.02E−02 | 1.64E−03 | 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E−03 | 6.01E−03 | 1.34E−07 | 101 | 1.05E+00 | 3.88E−01 | 1.47E−05 | 879
Shaffer | 2 | ±100 | 0 | 9.71E−03 | 1.16E−03 | 2.61E−05 | 8942 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | ±10 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | 3105 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.2
Easton and Fenton | 2 | [0,10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | ±5 | 0 | 3.81E−05 | 1.58E−06 | 1.30E−10 | 15650 | 1.12E+00 | 2.91E−01 | 7.62E−08 | 754
Powell quartic | 4 | ±5 | 0 | 2.87E−09 | 6.09E−10 | 1.12E−11 | 23500 | 2.42E−01 | 9.13E−02 | 3.91E−03 | 864
Mean (iterations) | | | | | | | 7000.6 | | | | 463.8
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.
Function name | D | Bound | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) Martin & Gaddy | 2 | [0,10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison of P/W-values for HCA and the other algorithms (based on Table 7).
Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.
Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison of P/W-values for CGA, ECTS, HCA, and MHCA.
Comparison | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
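For reference, the Ackley function being minimized here (as defined in Table 11) can be implemented and checked at its known global minimum of 0 at the origin:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Ackley function (Table 11); global minimum 0 at x = (0, ..., 0)."""
    d = len(x)
    s1 = sum(v * v for v in x) / d           # mean of squares
    s2 = sum(math.cos(c * v) for v in x) / d  # mean of cosines
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

print(ackley([0.0, 0.0]))  # 0.0 at the global minimum
```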
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
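The flatness is easy to verify numerically with the Easom formula from Table 11: away from (π, π) the exponential term drives the function value to essentially zero, so there is no gradient signal to follow.

```python
import math

def easom(x1, x2):
    """Easom function (Table 11): nearly flat everywhere, with a narrow
    global minimum of -1 at (pi, pi)."""
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(easom(math.pi, math.pi))  # -1.0
print(easom(10.0, -10.0))       # essentially 0: the flat region
```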
Figure 11 shows the convergence of the HCA when solving the Easom function.
As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of HCA on the Hypersphere function with six variables. The HCA minimized the function after 919 iterations.
As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
[Figure 9: The global and local convergence of the HCA versus iteration number on the Ackley function (convergence at iteration 383; axes: number of iterations versus function value; series: global best and local best).]
[Figure 10: Easom function graph (from [4]).]
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, and approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
[Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480; axes: number of iterations versus function value; series: global best and local best).]

[Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475; axes: number of iterations versus function value; series: global best and local best).]
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp(-b \sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}) - \exp(\frac{1}{d}\sum_{i=1}^{d} \cos(c x_i)) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$

Cross-in-Tray: $f(x) = -0.0001 (|\sin(x_1)\sin(x_2)\exp(|100 - \sqrt{x_1^2 + x_2^2}/\pi|)| + 1)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos(12\sqrt{x_1^2 + x_2^2})}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos(\frac{x_i}{\sqrt{i}}) + 1$

Holder Table: $f(x) = -|\sin(x_1)\cos(x_2)\exp(|1 - \sqrt{x_1^2 + x_2^2}/\pi|)|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2 [1 + 10\sin^2(\pi w_i + 1)] + (w_d - 1)^2 [1 + \sin^2(2\pi w_d)]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \ldots, d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2 [1 + \sin^2(3\pi x_2)] + (x_2 - 1)^2 [1 + \sin^2(2\pi x_2)]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d} [x_i^2 - 10\cos(2\pi x_i)]$

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{[1 + 0.001(x_1^2 + x_2^2)]^2}$

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2(\sin(|x_1^2 - x_2^2|)) - 0.5}{[1 + 0.001(x_1^2 + x_2^2)]^2}$

Schwefel: $f(x) = 418.9829 d - \sum_{i=1}^{d} x_i \sin(\sqrt{|x_i|})$

Shubert: $f(x) = (\sum_{i=1}^{5} i \cos((i + 1)x_1 + i))(\sum_{i=1}^{5} i \cos((i + 1)x_2 + i))$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d} \sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + (\sum_{i=1}^{d} 0.5 i x_i)^2 + (\sum_{i=1}^{d} 0.5 i x_i)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$

Six-Hump Camel: $f(x) = (4 - 2.1x_1^2 + \frac{x_1^4}{3}) x_1^2 + x_1 x_2 + (-4 + 4x_2^2) x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i (2x_i^2 - x_{i-1})^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$

Shekel's Foxholes (De Jong N. 5): $f(x) = (0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6})^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp(-(x_1 - \pi)^2 - (x_2 - \pi)^2)$

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i) \sin^{2m}(\frac{i x_i^2}{\pi})$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]$

Goldstein-Price II: $f(x) = \exp(\frac{1}{2}(x_1^2 + x_2^2 - 25)^2) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d} (x_i^4 - 16x_i^2 + 5x_i)$

Gramacy & Lee (2008): $f(x) = x_1 \exp(-x_1^2 - x_2^2)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + (\frac{x_1 + x_2 - 10}{3})^2$

Easton and Fenton: $f(x) = (12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4})(\frac{1}{10})$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp(-\sum_{j=1}^{3} A_{ij}(x_j - P_{ij})^2)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4} [(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1((x_2 - 1)^2 + (x_4 - 1)^2) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong max: $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
[Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution; convergence at iteration 919; axes: iteration number versus function value).]
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7-44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150-194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814-3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25-39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841-856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145-150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405-436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849-857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93-108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980-1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130-1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm - a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method - a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132-1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226-3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224-229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151-166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730-741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57-85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327-338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458-464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151-175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370-382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures, Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Journal of Optimization
Figure 2: Illustration of the effect of temperature on water drops: (a) hot water; (b) cold water.

Figure 3: A collector water drop in action.
water drops. Consequently, the water drop grows as a result of coalescence with the other water drops [42].

Based on this phenomenon, only water drops that are sufficiently large will survive the trip to the ground; of the numerous water drops in the cloud, only a portion will make it to the ground.
3.2. The Proposed Algorithm. Given the brief description of the hydrological cycle above, the IWD algorithm can be interpreted as covering only one stage of the overall water cycle: the flow stage. Although the IWD algorithm takes into account some of the characteristics and factors that affect water drops flowing through rivers, and the actions and reactions along the way, it neglects other factors that could be useful in solving optimization problems through the adoption of other key concepts from the full hydrological process.

Studying the full hydrological water cycle gave us the inspiration to design a new and potentially more powerful algorithm, within which the original IWD algorithm can be considered a subpart. We divide our algorithm into four main stages: Flow (Runoff), Evaporation, Condensation, and Precipitation.
The HCA can be described formally as consisting of a set of artificial water drops (WDs), such that

\mathrm{HCA} = \{\mathrm{WD}_1, \mathrm{WD}_2, \mathrm{WD}_3, \ldots, \mathrm{WD}_n\}, \quad n \ge 1. (4)

The characteristics associated with each water drop are as follows:

(i) V^{WD}, the velocity of the water drop;
(ii) Soil^{WD}, the amount of soil the water drop carries;
(iii) \psi^{WD}, the solution quality of the water drop.
The input to the HCA is a graph representation of a problem's solution space. The graph can be a fully connected undirected graph G = (N, E), where N is the set of nodes and E is the set of edges between the nodes. In order to find a solution, the water drops are distributed randomly over the nodes of the graph and then traverse the graph (G) according to various rules via the edges (E), searching for the best or optimal solution. The characteristics associated with each edge are the initial amount of soil on the edge and the edge depth.

The initial amount of soil is the same for all the edges. The depth measures the distance from the water surface to the riverbed, and it is calculated by dividing the length of the edge by the amount of soil. The depth therefore varies with the amount of soil on the path and the path's length: the depth of a path increases when more soil is removed over the same length. A deep path can thus be interpreted as either a lengthy path or one holding little soil. Figures 4 and 5 illustrate the relationship between path depth, soil, and path length.

The HCA goes through a number of cycles and iterations to find a solution to a problem. One iteration is considered complete when all water drops have generated solutions based on the problem constraints. A water drop iteratively constructs a solution by continuously moving between nodes; each iteration consists of specific steps (explained below). A cycle, by contrast, comprises a varied number of iterations and is considered complete when the temperature reaches a specific value, which makes the water drops evaporate, condense, and precipitate again. This procedure continues until the termination condition is met.
3.3. Flow Stage (Runoff). This stage represents the construction stage, in which the HCA explores the search space. This is carried out by allowing the water drops to flow (scattered or regrouped) in different directions, constructing various solutions. Each water drop constructs a
Figure 4: Depth increases as length increases for the same amount of soil (Soil = 500: Length = 10 gives Depth = 10/500 = 0.02; Length = 20 gives Depth = 20/500 = 0.04).

Figure 5: Depth increases when the amount of soil is reduced over the same length (Length = 10: Soil = 500 gives Depth = 10/500 = 0.02; Soil = 1000 gives Depth = 10/1000 = 0.01).
solution incrementally by moving from one point to another. Whenever a water drop is about to move to a new node, it has to choose between a number of candidate nodes (various branches): it calculates the probability of all the unvisited nodes and chooses the node with the highest probability, taking into consideration the problem constraints. This can be described as a state transition rule that determines the next node to visit. In the HCA, the node with the highest probability is selected. The probability of choosing node j from node i is calculated using

P_i^{WD}(j) = \frac{f(\mathrm{Soil}(i,j))^{2} \times g(\mathrm{Depth}(i,j))}{\sum_{k \notin vc(\mathrm{WD})} \left( f(\mathrm{Soil}(i,k))^{2} \times g(\mathrm{Depth}(i,k)) \right)}, (5)

where P_i^{WD}(j) is the probability of choosing node j from node i, and f(Soil(i,j)) is the inverse of the soil between i and j, calculated using

f(\mathrm{Soil}(i,j)) = \frac{1}{\varepsilon + \mathrm{Soil}(i,j)}, (6)

where \varepsilon = 0.01 is a small value used to prevent division by zero. The second factor of the transition rule is the inverse of depth, calculated as

g(\mathrm{Depth}(i,j)) = \frac{1}{\mathrm{Depth}(i,j)}. (7)

Depth(i, j) is the depth between nodes i and j, calculated by dividing the length of the path by the amount of soil:

\mathrm{Depth}(i,j) = \frac{\mathrm{Length}(i,j)}{\mathrm{Soil}(i,j)}. (8)

Since the depth value can be very small, it is normalized to the range [1-100]. The depth of a path is constantly changing because it is influenced by the amount of existing
Figure 6: The relationship between soil and depth over the same length; three snapshots of water drops at node S choosing between nodes A and B.
soil: depth is inversely proportional to the amount of soil on a path of fixed length. Water drops choose paths with less depth over other paths. This enables the exploration of new paths and diversifies the generated solutions. Consider the scenario depicted in Figure 6, where water drops have to choose between node A and node B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents the soil amount), so the depth values of the two edges are the same. After some water drops choose node A, edge (S-A) has less soil and is deeper. According to the state transition rule, subsequent water drops will be forced to choose node B, because edge (S-B) is now shallower than (S-A). This technique provides a way to avoid stagnation and to explore the search space efficiently.
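The state transition rule of (5)-(8) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: it assumes the outgoing edges of the current node are stored as a dictionary mapping each candidate node to a (soil, length) pair, and it omits the normalization of depth to [1-100] mentioned above.

```python
EPSILON = 0.01  # small constant preventing division by zero (Eq. (6))

def depth(length, soil):
    # Eq. (8): depth of a path is its length divided by the soil on it
    return length / soil

def node_probabilities(edges, visited):
    """Eq. (5): probability of each unvisited candidate node.
    edges: dict mapping candidate node -> (soil, length)."""
    scores = {}
    for node, (soil, length) in edges.items():
        if node in visited:
            continue
        f = 1.0 / (EPSILON + soil)        # Eq. (6): inverse of soil
        g = 1.0 / depth(length, soil)     # Eq. (7): inverse of depth
        scores[node] = (f ** 2) * g
    total = sum(scores.values())
    return {node: s / total for node, s in scores.items()}

def choose_next_node(edges, visited):
    # HCA selects the node with the highest probability
    probs = node_probabilities(edges, visited)
    return max(probs, key=probs.get)
```

For instance, with two candidate edges of equal length 10 but soils 500 and 1000, the edge with less soil wins, because the squared soil factor in (5) outweighs the depth factor in this configuration.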
3.3.1. Velocity Update. After selecting the next node, the water drop moves to the selected node and marks it as visited. Each water drop has a variable velocity (V). The velocity of a water drop may increase or decrease while it is moving, based on the factors mentioned above (the amount of soil on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time (t + 1) is calculated using

V_{t+1}^{WD} = \left[ K \times V_t^{WD} \right] + \alpha \left( \frac{V^{WD}}{\mathrm{Soil}(i,j)} \right) + \sqrt{\frac{V^{WD}}{\mathrm{Soil}^{WD}}} + \left( \frac{100}{\psi^{WD}} \right) + \sqrt{\frac{V^{WD}}{\mathrm{Depth}(i,j)}}. (9)
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation affects the new velocity of the water drop, where V_t^{WD} is the current water drop velocity and K is a uniformly distributed random number in [0, 1] that acts as a roughness coefficient. The roughness coefficient represents factors that may cause the water drop's velocity to decrease; owing to the difficulty of measuring the exact effect of these factors, some randomness is introduced into the velocity.

The second term of (9) reflects the relationship between the amount of soil on a path and the velocity of a water drop: velocity increases on paths with less soil. Alpha (\alpha) is a relative influence coefficient that emphasizes this term in the velocity update equation. The third term represents the relationship between the amount of soil a water drop is currently carrying and its velocity: velocity increases as the amount of carried soil decreases. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term is the inverse ratio of a water drop's fitness, so velocity increases with higher fitness. The final term indicates that a water drop is slower on a deep path than on a shallow one.
3.3.2. Soil Update. Next, the amount of soil on the path and the depth of that path are updated. A water drop can remove soil from (or add soil to) a path while moving, based on its velocity. This is expressed by

\mathrm{Soil}(i,j) = \begin{cases} [PN \times \mathrm{Soil}(i,j)] - \Delta\mathrm{Soil}(i,j) - \sqrt{\frac{1}{\mathrm{Depth}(i,j)}}, & \text{if } V^{WD} \ge \mathrm{Avg}(\text{all } V^{WD}\text{s}) \text{ (Erosion)} \\ [PN \times \mathrm{Soil}(i,j)] + \Delta\mathrm{Soil}(i,j) + \sqrt{\frac{1}{\mathrm{Depth}(i,j)}}, & \text{otherwise (Deposition).} \end{cases} (10)
PN represents a coefficient (i.e., a sediment transport rate, or gradation coefficient) that may affect the reduction in the amount of soil. Soil can be removed only if the current water drop's velocity is greater than the average velocity of all water drops; otherwise, an amount of soil is added (soil deposition). Consequently, a water drop is able to transfer an amount of soil from one place to another, usually from fast to slow parts of the path. The increasing amount of soil on some paths favors the exploration of other paths during the search process and avoids entrapment in locally optimal solutions. The amount of soil between node i and node j is changed using

\Delta\mathrm{Soil}(i,j) = \frac{1}{\mathrm{time}_{i,j}^{WD}}, (11)

such that

\mathrm{time}_{i,j}^{WD} = \frac{\mathrm{Distance}(i,j)}{V_{t+1}^{WD}}. (12)

Equation (11) shows that the amount of soil removed is inversely proportional to time. From the laws of motion, time equals distance divided by velocity; the amount of soil removed is therefore proportional to velocity and inversely proportional to path length. Furthermore, the amount of soil removed or added is inversely proportional to the depth of the path, because shallow soils are easier to remove than deep soils. The amount of soil removed thus decreases as path depth increases. This factor facilitates the exploration of new paths, because less soil is removed from deep paths that have already been used many times, although this may extend the time needed to search for and reach optimal solutions. The depth of the path is updated via (8) whenever the amount of soil on the path changes.
3.3.3. Soil Transportation Update. In the HCA, the amount of soil a water drop carries reflects its solution quality. This is achieved by associating the quality of the water drop's solution with the amount of soil it carries; Figure 7 illustrates that a water drop carrying more soil has a better solution.

To simulate this association, the amount of soil the water drop is carrying is updated with a portion of the soil removed from the path, divided by the last solution quality found by that water drop. The water drop with a better solution therefore carries more soil. This is expressed as follows:

\mathrm{Soil}^{WD} = \mathrm{Soil}^{WD} + \frac{\Delta\mathrm{Soil}(i,j)}{\psi^{WD}}. (13)

The idea behind this step is to let a water drop preserve its previous fitness value. The amount of carried soil is later used in updating the velocity of the water drops.
3.3.4. Temperature Update. The final step of this stage is the updating of the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

\mathrm{Temp}(t+1) = \mathrm{Temp}(t) + \Delta\mathrm{Temp}, (14)

where

\Delta\mathrm{Temp} = \begin{cases} \beta \times \left( \frac{\mathrm{Temp}(t)}{\Delta D} \right), & \Delta D > 0 \\ \frac{\mathrm{Temp}(t)}{10}, & \text{otherwise,} \end{cases} (15)
Figure 7: Relationship between the amount of carried soil and solution quality.
and where the coefficient \beta is determined based on the problem. The difference \Delta D is calculated as

\Delta D = \mathrm{MaxValue} - \mathrm{MinValue}, (16)

such that

\mathrm{MaxValue} = \max[\mathrm{Solution\ Quality}(\mathrm{WDs})], \quad \mathrm{MinValue} = \min[\mathrm{Solution\ Quality}(\mathrm{WDs})]. (17)

According to (15) and (16), the increase in temperature is governed by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means there was not much improvement in the last few iterations, and as a result the temperature increases more. The rationale is that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats until the temperature reaches a certain value; once the temperature is sufficient, the evaporation stage begins.
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) once the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

\mathrm{ER} = \mathrm{Random}(1, N), (18)

where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and either combine or bounce off. These operations are useful because they put the water drops in direct physical contact with each other; this stage represents the collective and cooperative behavior of all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In the HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the
drops. Direct communication (information exchange) has been shown to improve solution quality significantly when applied in other algorithms [37].

The condensation stage is problem-dependent and can be redesigned to fit the problem specification. For example, one way of implementing this stage for the Travelling Salesman Problem (TSP) is to use local improvement methods. Such methods enhance the solution quality and generate better results in the next cycle (i.e., minimize the total cost of the solution), with the hypothesis that this reduces the number of iterations needed and helps to reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (the number of evaporated water drops):

\mathrm{EW} = \{\mathrm{WD}_1, \mathrm{WD}_2, \ldots, \mathrm{WD}_n\}. (19)

Initially, all possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, they either merge (coalescence) or bounce off each other. In nature, which operation takes place depends on factors such as the temperature and the velocity of each water drop. In this research, we consider the similarity between the water drops' solutions to determine which process occurs: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

\mathrm{OP}(\mathrm{WD}_1, \mathrm{WD}_2) = \begin{cases} \mathrm{Bounce}(\mathrm{WD}_1, \mathrm{WD}_2), & \mathrm{Similarity} < 50\% \\ \mathrm{Merge}(\mathrm{WD}_1, \mathrm{WD}_2), & \mathrm{Similarity} \ge 50\%. \end{cases} (20)

We use the Hamming distance [43], which counts the number of positions at which two equal-length sequences differ, to measure the similarity between water drops; for example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (the collector) becomes more powerful by eliminating the other in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

\mathrm{WD}_1^{\mathrm{Vel}} = \mathrm{WD}_1^{\mathrm{Vel}} + \mathrm{WD}_2^{\mathrm{Vel}}. (21)

On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much each node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them, at the flow stage, to choose the node with the best contribution among all possible nodes. The information consists of the weight of each node, which measures a node's occurrence within a solution relative to the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point, so that the water drops favor those nodes in the next cycle. Mathematically, the weight is calculated as follows:

\mathrm{Weight}(\mathrm{node}) = \frac{\mathrm{Node}_{\mathrm{occurrence}}}{\mathrm{WD}_{\mathrm{sol\ quality}}}. (22)

Finally, this stage is used to update the global-best solution found up to that point. In addition, the temperature is reduced by 50°C at this stage. The lowering and raising of the temperature helps prevent the water drops from sticking with the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm checks here whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. Reinitializing these parameters helps the algorithm avoid being trapped in local optima, which could affect its performance in the next cycle. Moreover, this stage acts as a reinforcement stage, placing emphasis on the best water drop (the collector) by reducing the amount of soil on the edges that belong to the best water drop's solution:

\mathrm{Soil}(i,j) = 0.9 \times \mathrm{Soil}(i,j), \quad \forall (i,j) \in \mathrm{Best}_{\mathrm{WD}}. (23)

The idea behind this is to favor these edges over the other edges in the next cycle.
3.7. Summary of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:

(1) The HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, reducing the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solutions obtained in every iteration.

(2) The flow of the water drops is controlled by path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of the water drops and helps to prevent all water drops from choosing the same paths.

(3) Water drops are able to gain or lose a portion of their velocity. This supports competition between water drops and prevents one drop from sweeping aside the rest, because a high-velocity water
HCA procedure
  (i) Problem input
  (ii) Initialization of parameters
  (iii) While (termination condition is not met)
        Cycle = Cycle + 1
        // Flow stage
        Repeat
          (a) Iteration = Iteration + 1
          (b) For each water drop
              (1) Choose next node (Eqs. (5)-(8))
              (2) Update velocity (Eq. (9))
              (3) Update soil and depth (Eqs. (10)-(11))
              (4) Update carrying soil (Eq. (13))
          (c) End for
          (d) Calculate the solution fitness
          (e) Update local optimal solution
          (f) Update temperature (Eqs. (14)-(17))
        Until (Temperature = Evaporation temperature)
        Evaporation (WDs)
        Condensation (WDs)
        Precipitation (WDs)
  (iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.

(4) Soil can be deposited as well as removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).

(5) The condensation stage performs information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique that helps exploit good solutions. This stage can also be used to improve the quality of the obtained solutions and thereby reduce the number of iterations.

(6) The evaporation process is a selection technique that helps the algorithm escape from local optima. Evaporation happens when there is no improvement in the quality of the obtained solutions.

(7) The precipitation stage reinitializes variables and redistributes the WDs in the search space to generate new solutions.

Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish the HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
  Evaluate the solution quality
  Identify evaporation rate (Eq. (18))
  Select WDs to evaporate
End

Condensation procedure (WDs)
  Information exchange (WDs) (Eqs. (20)-(22))
  Identify the collector (WD)
  Update global solution
  Update the temperature
End

Precipitation procedure (WDs)
  Evaluate the solution quality
  Reinitialize dynamic parameters
  Global soil update (belonging to best WDs) (Eq. (23))
  Generate a new WD population
  Distribute the WDs randomly
End

Pseudocode 2: The Evaporation, Condensation, and Precipitation pseudocode.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms, such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes; moreover, these effects occur only when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry or drop soil that changes the topography of the ground by making some paths deeper and fading others; this represents the erosion and deposition processes. In WCA, no consideration is given to soil removal from the paths, which is a critical operation in the formation of streams and rivers. In the HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value; the temperature is updated according to the performance of the water drops. In WCA there is no temperature, and the evaporation rate is based on a ratio derived from solution quality. Another major difference is that WCA makes no provision for the condensation stage, which is one of the crucial stages in the water cycle; by contrast, the HCA utilizes this stage to implement information sharing between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and solution construction differ in the two algorithms.

Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of the HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in the HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths, in which case the paths' depths are used to differentiate between these edges. Moreover, this association helps in creating a balance between exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in the HCA, additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables both soil removal and deposition; the underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, whereas both direct and indirect communication are considered in the HCA. Furthermore, in the HCA the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, the HCA includes three critical stages (i.e., evaporation, condensation, and precipitation) to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup

In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach to continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer there are ten nodes, labelled from zero to nine; there are therefore (10 × N × P) nodes in the graph, as shown in Figure 8.

This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes of the same layer, which prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This is done using

X_{\mathrm{value}} = \sum_{L=1}^{m} n \times 10^{-L}, (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

| Criteria | IWD | WCA | HCA |
| Flow stage | Moving water drops | Changing streams' positions | Moving water drops |
| Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth |
| Velocity | Always increases | N/A | Increases and decreases |
| Soil update | Removal | N/A | Removal and deposition |
| Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality |
| Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvement in the last set of iterations |
| Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number |
| Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector |
| Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil |
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node selected in that layer. The water drops generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
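The decoding in (24) can be written directly; this sketch takes the drop's visited digits as a list, one per layer.

```python
def decode(nodes):
    """Eq. (24): turn a drop's visited digits (one per layer) into a
    value in [0, 1); layer L contributes node * 10**(-L)."""
    return sum(n * 10 ** -(L + 1) for L, n in enumerate(nodes))
```

For the worked example above, the node sequence 2, 7, 5, 6, 2, 4 decodes to 0.275624.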
4.2. Variables' Domain and Precision. As stated above, a function has from one up to n variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions, all the variables may have the same domain range, whereas in others the variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled onto the domain of that variable:

V_{\mathrm{value}} = (UB - LB) \times \mathrm{Value} + LB, (25)

where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value will be 3.17504. The values of continuous variables are expressed as real numbers, so the precision (P) of the variables' values has to be identified; this precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
Figure 8: Graph representation of continuous variables.

4.3. Solution Generator. To find a solution to a function, a number of water drops are placed on the peak of this continuous-variable mountain (graph). These drops flow down through the mountain layers. A water drop chooses only one number (node) from each layer and moves on to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the highest-probability nodes. The candidate solutions for a function can be represented as a matrix in which each row consists of a
14 Journal of Optimization
Table 2 HCA parameters and their values
Parameter ValueNumber of water drops 10Maximum number of iterations 5001000Initial soil on each edge 10000Initial velocity 100
Initial depth Edge lengthsoil on thatedge
Initial carrying soil 1Velocity updating 120572 = 2Soil updating 119875119873 = 099Initial temperature 50 120573 = 10Maximum temperature 100
water drop solution. Each water drop generates a solution covering all the variables of the function. Therefore, a water drop solution is composed of a set of visited nodes determined by the number of variables and the precision of each variable:
\[
\text{Solutions} = \begin{bmatrix} \mathrm{WD}_{1} \\ \mathrm{WD}_{2} \\ \vdots \\ \mathrm{WD}_{k} \end{bmatrix}, \qquad \mathrm{WD}_{i} = (s_{1}, s_{2}, \ldots, s_{m}), \quad m = N \times P. \tag{26}
\]
An initial solution is generated randomly for each water drop based on the function dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored by reducing the soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

\(V_{\mathrm{new}} = 1 - (\beta \times V_{\mathrm{old}}),\)  (27)

where \(\beta\) is a uniform random number in the range [0, 1], \(V_{\mathrm{new}}\) is the new value after the mutation, and \(V_{\mathrm{old}}\) is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
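The mutation of (27) can be sketched directly; this assumes it operates on a normalized variable value in [0, 1], and the function name is illustrative:

```python
import random

# Sketch of the local mutation step in Eq. (27): V_new = 1 - beta * V_old,
# with beta drawn uniformly from [0, 1]. Large values flip towards small
# ones and vice versa.

def mutate(v_old, rng):
    beta = rng.random()              # uniform in [0, 1)
    return 1 - beta * v_old

rng = random.Random(0)
# With beta = 1, 0.9 maps near 0.1 and 0.1 maps near 0.9 (the flip effect).
print(round(1 - 1.0 * 0.9, 6), 1 - 1.0 * 0.1)  # 0.1 0.9
```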
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation; hence, setting the depth equal to the distance over the soil (see (8)) would not affect the outcome as intended.
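The counter-based depth rule described above can be sketched with simple bookkeeping; the data structures and names here are illustrative, not the paper's implementation:

```python
from collections import defaultdict

# The depth of an edge (x, y) is kept as the inverse of how many times
# node y has been chosen after node x, as described in the text.

selection_count = defaultdict(int)

def record_selection(x, y):
    selection_count[(x, y)] += 1

def depth(x, y):
    n = selection_count[(x, y)]
    return 1.0 / n if n else 1.0     # unused edges keep the default depth

for _ in range(4):
    record_selection("a", "b")
print(depth("a", "b"), depth("a", "c"))  # 0.25 1.0
```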
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.

For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum number of iterations of 500 for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, together with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
\[
\text{Success rate} = \left(\frac{\text{Number of successful runs}}{\text{Total number of runs}}\right) \times 100\%. \tag{28}
\]
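The success-rate computation of (28), with the acceptance tolerance of 0.001 stated below, can be sketched as follows; the sample run values are made up for illustration:

```python
# Success rate from Eq. (28): a run succeeds when its result lies within
# 0.001 of the known optimum value.

def success_rate(results, optimum, tol=0.001):
    successes = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * successes / len(results)

runs = [0.0000, 0.0004, 0.0120, 0.0002, 0.0007]   # hypothetical run results
print(success_rate(runs, optimum=0.0))  # 80.0
```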
The solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved 100% for all of the functions except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "Best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, and best values and the average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics.

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | −32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | −10 | 10 | 2 | −2.06261 | Many local minima
(3) | Drop-Wave | −5.12 | 5.12 | 2 | −1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | −0.86901 | Many local minima
(5) | Griewank | −600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | −10 | 10 | 2 | −19.2085 | Many local minima
(7) | Levy | −10 | 10 | 2 | 0 | Many local minima
(8) | Levy N.13 | −10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | −5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N.2 | −100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N.4 | −100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | −500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | −5.12 | 5.12 | 2 | −186.7309 | Many local minima
(14) | Bohachevsky | −100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | −65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | −5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | −10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | −10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | −10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [−1.5, 4] | [−3, 4] | 2 | −1.9133 | Plate-shaped
(21) | Zakharov | −5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | −5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [−3, 3] | [−2, 2] | 2 | −1.0316 | Valley-shaped
(24) | Dixon-Price | −10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | −5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N.5) | −65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | −100 | 100 | 2 | −1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | −1.8013 | Steep ridges/drops
(29) | Beale | −4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [−5, 10] | [0, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | −2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | −5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | −5 | 5 | 2 | −78.33198 | Other
(34) | Gramacy & Lee (2008) | −2 | 6 | 2 | −0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock D1.2 | −1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | −5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | −3.86278 | Other
(40) | Rosenbrock | −1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | −4 | 5 | 4 | 0 | Other
(42) | Wood | −5 | 5 | 4 | 0 | Other
(43) | Sphere | −5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | −2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). "Iter." is the average number of iterations needed to reach the global solution; entries marked with an asterisk correspond to the boldface best values of the original table.

Number | HCA worst | HCA average | HCA best | HCA iter. | MHCA worst | MHCA average | MHCA best | MHCA iter.
(1) | 8.88E−16 | 8.88E−16 | 8.88E−16 | 220.2 | 8.88E−16 | 8.88E−16 | 8.88E−16 | 279.35
(2) | −2.06E+00 | −2.06E+00 | −2.06E+00 | 362 | −2.06E+00 | −2.06E+00 | −2.06E+00 | 196.1
(3) | −1.00E+00 | −1.00E+00 | −1.00E+00 | 330.8 | −1.00E+00 | −1.00E+00 | −1.00E+00 | 220.4
(4) | −8.69E−01 | −8.69E−01 | −8.69E−01 | 297.65 | −8.69E−01 | −8.69E−01 | −8.69E−01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | −1.92E+01 | −1.92E+01 | −1.92E+01 | 321.7 | −1.92E+01 | −1.92E+01 | −1.92E+01 | 302.75
(7) | 4.23E−07 | 1.31E−07 | 1.50E−32 | 399.5 | 3.60E−07 | 8.65E−08 | 1.50E−32 | 394.8
(8) | 1.63E−05 | 4.70E−06 | *1.35E−31 | 399.45 | 9.97E−06 | 3.06E−06 | 1.60E−09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E−01 | 2.93E−01 | 2.93E−01 | 358.6 | 2.93E−01 | 2.93E−01 | 2.93E−01 | 336.25
(12) | 6.82E−04 | 4.19E−04 | 2.08E−04 | 337.6 | 1.28E−03 | 6.42E−04 | *2.02E−04 | 323.75
(13) | −1.87E+02 | −1.87E+02 | −1.87E+02 | 368 | −1.87E+02 | −1.87E+02 | −1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E−05 | 6.33E−06 | *0.00E+00 | 433.5 | 1.97E−05 | 5.78E−06 | 2.96E−08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | −1.91E+00 | −1.91E+00 | −1.91E+00 | 282.65 | −1.91E+00 | −1.91E+00 | −1.91E+00 | 225.8
(21) | 1.12E−04 | 5.07E−05 | 4.73E−06 | 328.15 | 3.94E−04 | 1.30E−04 | *4.28E−07 | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.15 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 332.4
(24) | 3.08E−04 | 9.69E−05 | 3.89E−06 | 394.25 | 5.46E−04 | 2.67E−04 | *4.10E−08 | 380.55
(25) | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.55 | 2.25E−10 | 3.21E−11 | 0.00E+00 | 282.55
(26) | 9.98E−01 | 9.98E−01 | 9.98E−01 | 354.6 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 370.35
(27) | −9.97E−01 | −9.98E−01 | −1.00E+00 | 354.15 | −9.97E−01 | −9.99E−01 | −1.00E+00 | 344.6
(28) | −1.80E+00 | −1.80E+00 | −1.80E+00 | 428 | −1.80E+00 | −1.80E+00 | −1.80E+00 | 398.1
(29) | 9.25E−05 | 5.25E−05 | *3.41E−06 | 379.9 | 1.33E−04 | 7.37E−05 | 2.56E−05 | 393.75
(30) | 3.98E−01 | 3.98E−01 | 3.98E−01 | 345.8 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | −7.83E+01 | −7.83E+01 | −7.83E+01 | 351 | −7.83E+01 | −7.83E+01 | −7.83E+01 | 341
(34) | −4.29E−01 | −4.29E−01 | −4.29E−01 | 262.05 | −4.29E−01 | −4.29E−01 | −4.29E−01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E−06 | 3.67E−06 | *6.63E−08 | 382.2 | 4.66E−06 | 2.60E−06 | 2.79E−07 | 351.6
(38) | 4.43E−07 | 8.27E−08 | 0.00E+00 | 371.3 | 2.13E−08 | 4.50E−09 | 0.00E+00 | 330.45
(39) | −3.86E+00 | −3.86E+00 | −3.86E+00 | 392.05 | −3.86E+00 | −3.86E+00 | −3.86E+00 | 327.5
(40) | 1.94E−01 | 7.02E−02 | *1.64E−03 | 816.5 | 8.75E−02 | 3.04E−02 | 2.79E−03 | 806
(41) | 2.42E−01 | 9.13E−02 | 3.91E−03 | 863.95 | 1.12E−01 | 4.26E−02 | *1.96E−03 | 840.85
(42) | 1.12E+00 | 2.91E−01 | 7.62E−08 | 753.95 | 2.67E−01 | 6.10E−02 | *1.00E−08 | 694.4
(43) | 1.05E+00 | 3.88E−01 | *1.47E−05 | 879 | 8.96E−01 | 3.10E−01 | 6.51E−03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables (Hypersphere function).

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with four or more variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain because their working mechanism depends on the distribution of the entities in a multidimensional search space. If an entity wanders outside the function boundaries, its position must be reset by the algorithm.

The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.

Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, which also reports the average number of iterations for each algorithm.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.

In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
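The W-values used in these significance tests can be reproduced with a hand-rolled Wilcoxon signed-rank statistic. The sketch below is deliberately minimal (zero differences are dropped and ties in the absolute differences are ranked naively, without average ranks); it uses the CGA and HCA per-function average iteration counts from Table 9, where every difference has the same sign, so the statistic is 0, as reported in Table 10:

```python
# Minimal Wilcoxon signed-rank W statistic: rank the nonzero paired
# differences by absolute size and take the smaller of the positive and
# negative rank sums.

def wilcoxon_w(sample_a, sample_b):
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    ranked = sorted(diffs, key=abs)
    pos = sum(i + 1 for i, d in enumerate(ranked) if d > 0)
    neg = sum(i + 1 for i, d in enumerate(ranked) if d < 0)
    return min(pos, neg)

cga = [575, 370, 620, 960, 1504, 620, 410, 582, 750]
hca = [368, 264.9, 328.15, 350.55, 354.6, 379.9, 345.8, 392.05, 371.3]
# Every CGA count exceeds the HCA count, so the negative rank sum, and
# hence W, is 0.
print(wilcoxon_w(cga, hca))  # 0
```

For real analyses a library routine with proper tie correction (e.g. `scipy.stats.wilcoxon`) would be preferable; the sketch only illustrates where the W-values in Tables 8 and 10 come from.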
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44] and Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Despite there being no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. "Iter." denotes the average number of iterations; entries marked with an asterisk correspond to the boldface best values of the original table, and "-" marks a value not recoverable from the source.

Function name | D | Bound | Best result | WCA worst | WCA avg | WCA best | WCA iter. | HCA worst | HCA avg | HCA best | HCA iter.
De Jong | 2 | ±2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.000002 | 980 | 3.00E+00 | 3.00E+00 | *3.00E+00 | 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E−04 | 4.16E−04 | 5.01E−06 | 57 | 0.00E+00 | 0.00E+00 | *0.00E+00 | 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E−02 | 1.34E−03 | 2.81E−05 | 174 | 9.22E−06 | 3.67E−06 | *6.63E−08 | 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E−04 | 4.32E−04 | 1.12E−06 | 623 | 2.03E−09 | 2.14E−10 | *0.00E+00 | 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E−04 | 2.12E−04 | *3.23E−07 | 266 | 1.94E−01 | 7.02E−02 | 1.64E−03 | 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E−03 | 6.01E−03 | *1.34E−07 | 101 | 1.05E+00 | 3.88E−01 | 1.47E−05 | 879
Shaffer | 2 | ±100 | 0 | 9.71E−03 | 1.16E−03 | 2.61E−05 | 8942 | 0.00E+00 | 0.00E+00 | *0.00E+00 | 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 | 1.01181 | - | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | ±10 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | 3105 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | ±5 | 0 | 3.81E−05 | 1.58E−06 | *1.30E−10 | 15650 | 1.12E+00 | 2.91E−01 | 7.62E−08 | 754
Powell quartic | 4 | ±5 | 0 | 2.87E−09 | 6.09E−10 | *1.12E−11 | 23500 | 2.42E−01 | 9.13E−02 | 3.91E−03 | 864
Mean iterations | | | | | | | 7000.6 | | | | 463.8
Journal of Optimization 19
Table7Com
paris
onbetweenHCA
andothera
lgorith
msinterm
sofaverage
numbero
fiteratio
ns
Functio
nname
DB
CACO
-DE
ANTS
GA
BAGEM
WCA
ER-W
CAHCA
(1)
DeJon
g2
plusmn20481872
6000
1016
0868
746
6841220
3148
(2)
Goldstein
ampPriceI
2plusmn2
666
5330
5662
999
701
980480
3458
(3)
Branin
2plusmn2
-1936
7325
1657
689
377160
3799(4)
Martin
ampGaddy
2[010]
340
1688
2488
526
258
57100
26205(5a)
Rosenb
rock
2plusmn12
-6842
10212
631
572
17473
3709(5b)
Rosenb
rock
2plusmn10
1313
7505
-2306
2289
62394
35055(6)
Rosenb
rock
4plusmn12
624
8471
-2852
98218
8266
3008165
(7)
Hypersphere
6plusmn512
270
22050
15468
7113
423
10191
879(8)
Shaffer
2-
--
--
89421110
2841
Mean
8475
74778
85525
53286
109833
13560
4031
4448
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations; "-" indicates no reported result).

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price 1 | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Comparison | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (X1 = π, X2 = π). Solving this function is difficult because the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA solving the Easom function.
As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations.

As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work

In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function, convergence at iteration 383; curves show the global best and local best function values).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11 The mathematical formulations
Function name Mathematical formulations
Ackley 119891 (119909) = minus119886 exp(minus119887radic 1119889 119889sum119894=11199092119894) minus exp(1119889 119889sum119894=1cos (119888119909119894)) + 119886 + exp (1) 119886 = 20 119887 = 02 and 119888 = 2120587Cross-In-Tray 119891 (119909) = minus00001(10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816sin (1199091) sin (1199092) exp(1003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816100 minus radic11990921 + 11990922120587 1003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816)
10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816 + 1)01Drop-Wave 119891 (119909) = minus1 + cos(12radic11990921 + 11990922)05 (11990921 + 11990922) + 2Gramacy amp Lee (2012) 119891 (119909) = sin (10120587119909)2119909 + (119909 minus 1)4Griewank 119891 (119909) = 119889sum
119894=1
11990921198944000 minus 119889prod119894=1
cos( 119909119894radic119894) + 1Holder Table 119891 (119909) = minus 10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816sin(1199091) cos (1199092) exp(10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161 minus radic11990921 + 11990922120587 1003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816)
10038161003816100381610038161003816100381610038161003816100381610038161003816100381610038161003816Levy 119891 (119909) = sin2 (31205871199081) + 119889minus1sum
119894=1
(119908119894 minus 1)2 [1 + 10 sin2 (120587119908119894 + 1)] + (119908119889 minus 1)2 [1 + sin2 (2120587119908119889)]where 119908119894 = 1 + 119909119894 minus 14 forall119894 = 1 119889
Levy N 13 119891(119909) = sin2(31205871199091) + (1199091 minus 1)2[1 + sin2(31205871199092)] + (1199092 minus 1)2[1 + sin2(31205871199092)]Rastrigin 119891 (119909) = 10119889 + 119889sum
119894=1
[1199092119894 minus 10 cos (2120587119909119894)]Schaffer N 2 119891 (119909) = 05 + sin2 (11990921 + 11990922) minus 05[1 + 0001 (11990921 + 11990922)]2Schaffer N 4 119891 (119909) = 05 + cos (sin (100381610038161003816100381611990921 minus 119909221003816100381610038161003816)) minus 05[1 + 0001 (11990921 + 11990922)]2Schwefel 119891 (119909) = 4189829119889 minus 119889sum
119894=1
119909119894 sin(radic10038161003816100381610038161199091198941003816100381610038161003816)Shubert 119891 (119909) = ( 5sum
119894=1
119894 cos ((119894 + 1) 1199091 + 119894))( 5sum119894=1
119894 cos ((119894 + 1) 1199092 + 119894))Bohachevsky 119891 (119909) = 11990921 + 211990922 minus 03 cos (31205871199091) minus 04 cos (41205871199092) + 07RotatedHyper-Ellipsoid
119891 (119909) = 119889sum119894=1
119894sum119895=1
1199092119895Sphere (Hyper) 119891 (119909) = 119889sum
119894=1
1199092119894Sum Squares 119891 (119909) = 119889sum
119894=1
1198941199092119894Booth 119891 (119909) = (1199091 + 21199092 minus 7)2 minus (21199091 + 1199092 minus 5)2Matyas 119891 (119909) = 026 (11990921 + 11990922) minus 04811990911199092McCormick 119891 (119909) = sin (1199091 + 1199092) + (1199091 + 1199092)2 minus 151199091 + 251199092 + 1Zakharov 119891 (119909) = 119889sum
119894=1
1199092119894 + ( 119889sum119894=1
05119894119909119894)2 + ( 119889sum119894=1
05119894119909119894)4Three-Hump Camel 119891 (119909) = 211990921 minus 10511990941 + 119909616 + 11990911199092 + 11990922
Journal of Optimization 23
Table 11 Continued
Function name Mathematical formulations
Six-Hump Camel: $f(x) = (4 - 2.1x_1^2 + x_1^4/3)x_1^2 + x_1 x_2 + (-4 + 4x_2^2)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i(2x_i^2 - x_{i-1})^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$

Shekel's Foxholes (De Jong N.5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp(-(x_1 - \pi)^2 - (x_2 - \pi)^2)$

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\!\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = [1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]$

Goldstein-Price II: $f(x) = \exp\!\left(\frac{1}{2}(x_1^2 + x_2^2 - 25)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}(x_i^4 - 16x_i^2 + 5x_i)$

Gramacy & Lee (2008): $f(x) = x_1\exp(-x_1^2 - x_2^2)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\frac{1}{10}$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\!\left(-\sum_{j=1}^{3}A_{ij}(x_j - p_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1((x_2 - 1)^2 + (x_4 - 1)^2) + 19.8(x_2 - 1)(x_4 - 1)$

DeJong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
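Several of the benchmarks above translate directly into code. The following minimal sketch (function names are ours) implements three of them, together with their known global optima:

```python
import math

def rosenbrock(x):
    # f(x) = sum_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def beale(x1, x2):
    # f(x) = (1.5 - x1 + x1*x2)^2 + (2.25 - x1 + x1*x2^2)^2 + (2.625 - x1 + x1*x2^3)^2
    return ((1.5 - x1 + x1 * x2) ** 2
            + (2.25 - x1 + x1 * x2 ** 2) ** 2
            + (2.625 - x1 + x1 * x2 ** 3) ** 2)

def easom(x1, x2):
    # f(x) = -cos(x1)cos(x2)exp(-(x1 - pi)^2 - (x2 - pi)^2)
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

# Known global minima: rosenbrock at x = (1, ..., 1) with f = 0,
# beale at (3, 0.5) with f = 0, easom at (pi, pi) with f = -1.
```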
24 Journal of Optimization
[Figure 13: Convergence of HCA on the Hypersphere function with six variables. The plot shows the global-best solution value (function value, 0 to 35) against iteration number (0 to 1000); the Sphere (6-variable) function converges at iteration 919.]
Conflicts of Interest

The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7-44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150-194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814-3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25-39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841-856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145-150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405-436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849-857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93-108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980-1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130-1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems, 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132-1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226-3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224-229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151-166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730-741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57-85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327-338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458-464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151-175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370-382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
[Figure 4: Depth increases as length increases for the same amount of soil. Example: with Soil = 500, Length = 10 gives Depth = 10/500 = 0.02, whereas Length = 20 gives Depth = 20/500 = 0.04.]

[Figure 5: Depth increases when the amount of soil is reduced over the same length. Example: with Length = 10, Soil = 1000 gives Depth = 10/1000 = 0.01, whereas Soil = 500 gives Depth = 10/500 = 0.02.]
solution incrementally by moving from one point to another. Whenever a water drop moves towards a new node, it has to choose among a number of candidate nodes (various branches). It calculates the probability of all unvisited nodes and chooses the node with the highest probability, taking the problem constraints into consideration. This can be described as a state transition rule that determines the next node to visit. In HCA, the node with the highest probability is selected. The probability of choosing node $j$ from node $i$ is calculated using

$$P_i^{WD}(j) = \frac{f(\mathrm{Soil}(i,j))^2 \times g(\mathrm{Depth}(i,j))}{\sum_{k \notin vc(WD)} \left(f(\mathrm{Soil}(i,k))^2 \times g(\mathrm{Depth}(i,k))\right)} \quad (5)$$

where $f(\mathrm{Soil}(i,j))$ is the inverse of the soil between $i$ and $j$:

$$f(\mathrm{Soil}(i,j)) = \frac{1}{\varepsilon + \mathrm{Soil}(i,j)} \quad (6)$$

Here $\varepsilon = 0.01$ is a small value used to prevent division by zero. The second factor of the transition rule is the inverse of the depth:

$$g(\mathrm{Depth}(i,j)) = \frac{1}{\mathrm{Depth}(i,j)} \quad (7)$$

$\mathrm{Depth}(i,j)$ is the depth between nodes $i$ and $j$, calculated by dividing the length of the path by the amount of soil:

$$\mathrm{Depth}(i,j) = \frac{\mathrm{Length}(i,j)}{\mathrm{Soil}(i,j)} \quad (8)$$
The depth value can be normalized to the range [1-100], as it might otherwise be very small. The depth of a path changes constantly because it is influenced by the amount of soil present: for a path of fixed length, the depth is inversely proportional to the amount of soil on that path. Water drops choose paths with less depth over other paths. This enables the exploration of new paths and diversifies the generated solutions. Consider the following scenario (depicted in Figure 6), where water drops must choose between nodes A and B after node S. Initially, both edges have the same length and the same amount of soil (line thickness represents soil amount), so the depth values of the two edges are the same. After some water drops choose node A, edge (S-A) has less soil and is therefore deeper. According to the state transition rule, subsequent water drops will be forced to choose node B, because edge (S-B) will be shallower than (S-A). This technique helps avoid stagnation and explores the search space efficiently.

[Figure 6: The relationship between soil and depth over the same length, shown for edges (S-A) and (S-B) at three successive stages.]
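The state transition rule of Eqs. (5)-(8) can be sketched in a few lines; this is a minimal illustration, and the function and variable names (`edge_attractiveness`, `choose_next_node`, the `soil`/`length` dictionaries) are ours rather than the paper's:

```python
EPSILON = 0.01  # epsilon in Eq. (6), prevents division by zero

def depth(length, soil):
    # Eq. (8): depth of an edge is its length divided by its soil
    return length / soil

def edge_attractiveness(soil, length):
    # Numerator of Eq. (5): f(Soil)^2 * g(Depth), from Eqs. (6) and (7)
    f = 1.0 / (EPSILON + soil)        # Eq. (6): inverse of soil
    g = 1.0 / depth(length, soil)     # Eq. (7): inverse of depth
    return f ** 2 * g

def choose_next_node(current, unvisited, soil, length):
    # soil[(i, j)] and length[(i, j)] hold the edge state; HCA picks the
    # unvisited node with the highest probability under Eq. (5).
    scores = {j: edge_attractiveness(soil[(current, j)], length[(current, j)])
              for j in unvisited}
    total = sum(scores.values())
    probabilities = {j: s / total for j, s in scores.items()}  # Eq. (5)
    return max(probabilities, key=probabilities.get)
```

Since HCA always takes the maximum-probability node, normalizing by the sum does not change the winner, but it matches the form of Eq. (5).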
3.3.1. Velocity Update. After selecting the next node, the water drop moves to it and marks it as visited. Each water drop has a variable velocity ($V$). The velocity of a water drop may increase or decrease while it is moving, based on the factors mentioned above (the amount of soil on the path, the depth of the path, etc.). Mathematically, the velocity of a water drop at time $(t+1)$ is calculated using

$$V_{t+1}^{WD} = \left[K \times V_t^{WD}\right] + \alpha\left(\frac{V^{WD}}{\mathrm{Soil}(i,j)}\right) + \sqrt{\frac{V^{WD}}{\mathrm{Soil}^{WD}}} + \left(\frac{100}{\psi^{WD}}\right) + \sqrt{\frac{V^{WD}}{\mathrm{Depth}(i,j)}} \quad (9)$$
Equation (9) defines how the water drop updates its velocity after each movement. Each term of the equation affects the new velocity of the water drop, where $V_t^{WD}$ is the current water drop velocity and $K$ is a uniformly distributed random number in [0, 1] that represents the roughness coefficient. The roughness coefficient captures factors that may cause the water drop's velocity to decrease; owing to the difficulty of measuring their exact effect, some randomness is incorporated into the velocity.

The second term of (9) reflects the relationship between the amount of soil on a path and the velocity of a water drop: velocity increases when drops traverse paths with less soil. Alpha ($\alpha$) is a relative influence coefficient that weights this term in the velocity update equation. The third term represents the relationship between the amount of soil a water drop is currently carrying and its velocity: velocity increases as the amount of carried soil decreases. This gives weak water drops, which do not hold much soil, a chance to increase their velocity. The fourth term refers to the inverse ratio of a water drop's fitness, with velocity increasing with better fitness. The final term indicates that a water drop will be slower on a deep path than on a shallow one.
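The velocity update of Eq. (9) can be sketched as follows. This is an illustration only: the value of `ALPHA` is ours (the paper treats $\alpha$ as a tunable coefficient), and we read $\psi^{WD}$ as the drop's solution fitness, so a smaller fitness value (a better solution in a minimization setting) enlarges the fourth term:

```python
import math
import random

ALPHA = 2.0  # relative influence coefficient alpha; the value is illustrative

def update_velocity(v, soil_ij, depth_ij, carrying_soil, fitness):
    # Eq. (9): each term can speed the drop up or slow it down.
    k = random.uniform(0, 1)                 # roughness coefficient K in [0, 1]
    return (k * v                            # randomized fraction of current velocity
            + ALPHA * (v / soil_ij)          # less soil on the path -> faster
            + math.sqrt(v / carrying_soil)   # less carried soil -> faster
            + 100.0 / fitness                # better (smaller) fitness -> faster
            + math.sqrt(v / depth_ij))       # shallower path -> faster
```

All five terms are nonnegative for positive inputs, so the updated velocity stays positive even when the random roughness factor $K$ is near zero.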
3.3.2. Soil Update. Next, the amount of soil on the path and the depth of that path are updated. A water drop can remove soil from (or add soil to) a path while moving, based on its velocity. This is expressed by

$$\mathrm{Soil}(i,j) = \begin{cases} \left[PN \times \mathrm{Soil}(i,j)\right] - \Delta\mathrm{Soil}(i,j) - \sqrt{\dfrac{1}{\mathrm{Depth}(i,j)}}, & \text{if } V^{WD} \geq \mathrm{Avg}(\text{all } V^{WD}\text{s}) \text{ (erosion)} \\[2ex] \left[PN \times \mathrm{Soil}(i,j)\right] + \Delta\mathrm{Soil}(i,j) + \sqrt{\dfrac{1}{\mathrm{Depth}(i,j)}}, & \text{otherwise (deposition)} \end{cases} \quad (10)$$

$PN$ represents a coefficient (i.e., a sediment transport rate, or gradation coefficient) that may affect the reduction in the amount of soil. Soil can be removed only if the current water drop's velocity is greater than the average velocity of all water drops; otherwise, an amount of soil is added (soil deposition). Consequently, a water drop is able to transfer soil from one place to another, usually from fast to slow parts of the path. The increasing amount of soil on some paths favors the exploration of other paths during the search and avoids entrapment in locally optimal solutions. The amount of soil between node $i$ and node $j$ is changed using

$$\Delta\mathrm{Soil}(i,j) = \frac{1}{\mathrm{time}_{i,j}^{WD}} \quad (11)$$

such that

$$\mathrm{time}_{i,j}^{WD} = \frac{\mathrm{Distance}(i,j)}{V_{t+1}^{WD}} \quad (12)$$
Equation (11) shows that the amount of soil removed is inversely proportional to time. From the laws of motion, time equals distance divided by velocity; therefore, time is inversely proportional to velocity and proportional to path length. Furthermore, the amount of soil removed or added is inversely proportional to the depth of the path, because shallow soil is easier to remove than deep soil. Hence, the amount of soil removed decreases as the path depth increases. This facilitates the exploration of new paths, because less soil is removed from deep paths, which have already been used many times. This may extend the time needed to search for and reach optimal solutions. The depth of the path must be updated, using (8), whenever the amount of soil on the path changes.
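Eqs. (10)-(12) combine into a single update step. The sketch below is a minimal reading of those equations; the value of `PN` is illustrative (the paper leaves the coefficient unspecified), and the function name is ours:

```python
import math

PN = 0.99  # sediment transport / gradation coefficient; value is illustrative

def update_soil(soil_ij, depth_ij, distance_ij, v_next, v_avg):
    # Eq. (12): travel time over the edge at the updated velocity
    time_ij = distance_ij / v_next
    # Eq. (11): soil moved is the inverse of the travel time
    delta_soil = 1.0 / time_ij
    # Depth-dependent term of Eq. (10): deeper paths give up less soil
    adjustment = math.sqrt(1.0 / depth_ij)
    if v_next >= v_avg:
        # Erosion (Eq. (10), first case): a fast drop removes soil
        return PN * soil_ij - delta_soil - adjustment
    # Deposition (Eq. (10), second case): a slow drop adds soil
    return PN * soil_ij + delta_soil + adjustment
```

A drop faster than the population average erodes the edge, while a slower one deposits soil onto it, so the same edge can end up above or below $PN \times \mathrm{Soil}(i,j)$ depending on the drop.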
3.3.3. Soil Transportation Update. In the HCA, the amount of soil a water drop carries reflects the quality of its solution. This is achieved by associating the quality of a water drop's solution with its carrying-soil value; Figure 7 illustrates that a water drop carrying more soil has a better solution.

To simulate this association, the amount of soil the water drop carries is updated with a portion of the soil removed from the path, divided by the quality of the last solution found by that water drop. Therefore, a water drop with a better solution will carry more soil:

$$\mathrm{Soil}^{WD} = \mathrm{Soil}^{WD} + \frac{\Delta\mathrm{Soil}(i,j)}{\psi^{WD}} \quad (13)$$

The idea behind this step is to let a water drop preserve its previous fitness value. The carrying-soil value is later used in updating the velocity of the water drops.
3.3.4. Temperature Update. The final step of this stage is updating the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

$$\mathrm{Temp}(t+1) = \mathrm{Temp}(t) + \Delta\mathrm{Temp} \quad (14)$$

where

$$\Delta\mathrm{Temp} = \begin{cases} \beta \times \left(\dfrac{\mathrm{Temp}(t)}{\Delta D}\right), & \Delta D > 0 \\[2ex] \dfrac{\mathrm{Temp}(t)}{10}, & \text{otherwise} \end{cases} \quad (15)$$
[Figure 7: Relationship between the amount of carried soil and solution quality; a drop carrying more soil represents a better solution.]
The coefficient $\beta$ is determined based on the problem. The difference $\Delta D$ is calculated as

$$\Delta D = \mathrm{MaxValue} - \mathrm{MinValue} \quad (16)$$

such that

$$\mathrm{MaxValue} = \max\left[\mathrm{SolutionsQuality(WDs)}\right], \qquad \mathrm{MinValue} = \min\left[\mathrm{SolutionsQuality(WDs)}\right] \quad (17)$$

According to (16), the increase in temperature is governed by the difference between the best solution (MinValue) and the worst solution (MaxValue). A small difference means there was little improvement in the last few iterations, so the temperature increases more. In other words, there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats until the temperature reaches a certain value; when the temperature is sufficient, the evaporation stage begins.
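Eqs. (14)-(17) amount to a short update routine. The sketch below is ours; the value of `BETA` is illustrative, since the paper states only that $\beta$ is problem-dependent:

```python
BETA = 10.0  # problem-dependent coefficient beta; value is illustrative

def update_temperature(temp, qualities):
    # Eqs. (16)-(17): spread between the worst and best solution qualities
    delta_d = max(qualities) - min(qualities)
    if delta_d > 0:
        delta_temp = BETA * (temp / delta_d)  # Eq. (15), first case
    else:
        delta_temp = temp / 10.0              # Eq. (15), second case
    return temp + delta_temp                  # Eq. (14)
```

Note the behavior this produces: a small spread in solution qualities (little recent improvement) makes $\Delta\mathrm{Temp}$ large, so the temperature climbs quickly towards the evaporation threshold.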
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

$$\mathrm{ER} = \mathrm{Random}(1, N) \quad (18)$$

where ER is the evaporation rate and $N$ is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking the solution quality of each water drop into consideration. Only the selected water drops evaporate and go to the next stage.
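The selection step can be sketched as follows. This is our reading of Eq. (18) plus standard roulette-wheel selection: we assume larger quality values are better and that each drop can evaporate at most once (the paper does not state whether sampling is with or without replacement), and the function name is ours:

```python
import random

def evaporate(water_drops, qualities):
    # Eq. (18): the evaporation rate ER is a random count between 1 and N
    er = random.randint(1, len(water_drops))
    # Roulette-wheel selection without replacement, weighted by quality
    # (assumes larger quality is better; minimization would invert weights).
    pool = list(zip(water_drops, qualities))
    selected = []
    for _ in range(er):
        total = sum(q for _, q in pool)
        spin = random.uniform(0, total)
        cumulative = 0.0
        for idx, (drop, q) in enumerate(pool):
            cumulative += q
            # Fall through to the last slot to guard against float round-off.
            if spin <= cumulative or idx == len(pool) - 1:
                selected.append(drop)
                pool.pop(idx)
                break
    return selected
```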
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide, either combining or bouncing off. These operations are useful because they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior of all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between them. Direct communication (information exchange) has been shown to significantly improve solution quality in other algorithms [37].

The condensation stage is problem-dependent and can be redesigned to fit the problem specification. For example, one way of implementing this stage for the Travelling Salesman Problem (TSP) is to use local improvement methods, which enhance solution quality and generate better results in the next cycle (i.e., minimize the total cost of the solution), with the hypothesis that this will reduce the number of iterations needed and help reach the optimal or near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where $n$ represents the evaporation rate (the number of evaporated water drops):

$$\mathrm{EW} = \{WD_1, WD_2, \ldots, WD_n\} \quad (19)$$

Initially, all possible pairs are formed from the evaporated water drops so that they can share information with each other via collision. When two water drops collide, they either merge (coalescence) or bounce off each other. In nature, which operation takes place depends on factors such as the temperature and the velocity of each water drop. In this research, we use the similarity between the water drops' solutions to determine which process occurs: when the similarity is greater than or equal to 50%, the two drops merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as

$$\mathrm{OP}(WD_1, WD_2) = \begin{cases} \mathrm{Bounce}(WD_1, WD_2), & \mathrm{Similarity} < 50\% \\ \mathrm{Merge}(WD_1, WD_2), & \mathrm{Similarity} \geq 50\% \end{cases} \quad (20)$$

We use the Hamming distance [43], which counts the number of differing positions between two sequences of equal length, to measure the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (the collector) becomes more powerful by eliminating the other, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

$$WD_1^{\mathrm{Vel}} = WD_1^{\mathrm{Vel}} + WD_2^{\mathrm{Vel}} \quad (21)$$

On the other hand, when two water drops collide and bounce off, they share information about the goodness of each node and how much each node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them, at the flow stage, to choose the node with the best contribution among all possible nodes. The information consists of the weight of each node, which measures a node's occurrence within a solution relative to the water drop's solution quality. The idea behind sharing these details is to emphasize the best nodes found up to that point; through this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as

$$\mathrm{Weight(node)} = \frac{\mathrm{Node}_{\mathrm{occurrence}}}{WD_{\mathrm{sol}}} \quad (22)$$

Finally, this stage updates the global-best solution found up to that point. In addition, the temperature is reduced by 50°C at this stage. The lowering and raising of the temperature helps prevent the water drops from sticking with the same solution in every iteration.
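The collision logic of Eqs. (20)-(22) can be sketched directly. The helper names and the dictionary representation of a drop are ours; similarity is computed as the fraction of matching positions (the complement of the normalized Hamming distance):

```python
def hamming_similarity(sol1, sol2):
    # Fraction of positions where two equal-length solutions agree
    matches = sum(1 for a, b in zip(sol1, sol2) if a == b)
    return matches / len(sol1)

def collide(drop1, drop2):
    # Eq. (20): merge when solutions are >= 50% similar, otherwise bounce off
    if hamming_similarity(drop1["solution"], drop2["solution"]) >= 0.5:
        # Coalescence, Eq. (21): the collector absorbs the other drop's velocity
        drop1["velocity"] += drop2["velocity"]
        return "merge"
    return "bounce"

def node_weight(occurrences, solution_quality):
    # Eq. (22): a node's occurrence count scaled by the solution quality
    return occurrences / solution_quality
```

With the paper's example, 12468 and 13458 differ in two of five positions, giving a similarity of 3/5 = 0.6, so the two drops would merge.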
3.6. Precipitation Stage. This stage is the termination stage of the cycle, as the algorithm checks here whether the termination condition is met. If it has been met, the algorithm stops and returns the last global-best solution. Otherwise, this stage reinitializes all the dynamic variables, such as the amount of soil on each edge, the depth of each path, the velocity of each water drop, and the amount of soil it holds. Reinitializing the parameters helps the algorithm avoid being trapped in local optima, which could impair performance in the next cycle. Moreover, this stage acts as a reinforcement stage, placing emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

$$\mathrm{Soil}(i,j) = 0.9 \times \mathrm{Soil}(i,j), \quad \forall (i,j) \in \mathrm{BestWD} \quad (23)$$

The idea is to favor these edges over the others in the next cycle.
3.7. Summary of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:

(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, reducing the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.

(2) The flow of the water drops is controlled by path-depth and soil-amount heuristics (indirect communication). The path depth factor affects the movement of water drops and helps prevent all drops from choosing the same paths.
(3) Water drops are able to gain or lose a portion of their velocity. This supports competition between water drops and prevents a single drop from sweeping aside the rest, because a high-velocity water drop can carve more paths (i.e., remove more soil) than a slower one.

HCA procedure
  (i) Problem input
  (ii) Initialization of parameters
  (iii) While (termination condition is not met)
        Cycle = Cycle + 1
        {Flow stage}
        Repeat
          (a) Iteration = Iteration + 1
          (b) For each water drop
              (1) Choose next node (Eqs. (5)-(8))
              (2) Update velocity (Eq. (9))
              (3) Update soil and depth (Eqs. (10)-(11))
              (4) Update carrying soil (Eq. (13))
          (c) End for
          (d) Calculate the solution fitness
          (e) Update local optimal solution
          (f) Update temperature (Eqs. (14)-(17))
        Until (Temperature = Evaporation temperature)
        Evaporation (WDs)
        Condensation (WDs)
        Precipitation (WDs)
  (iv) End while

Pseudocode 1: HCA pseudocode.
(4) Soil can be deposited as well as removed, which helps avoid premature convergence and stimulates the spread of drops in different directions (more exploration).

(5) The condensation stage performs information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop is an elitism technique that supports exploitation of good solutions. This stage can also be used to improve the quality of the obtained solutions and thereby reduce the number of iterations.

(6) The evaporation process is a selection technique that helps the algorithm escape from local optima. Evaporation occurs when there is no improvement in the quality of the obtained solutions.

(7) The precipitation stage reinitializes variables and redistributes the water drops in the search space to generate new solutions.

Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important in striking a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and together improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
  Evaluate the solution quality
  Identify the evaporation rate (Eq. (18))
  Select WDs to evaporate
End

Condensation procedure (WDs)
  Information exchange among WDs (Eqs. (20)-(22))
  Identify the collector (WD)
  Update the global solution
  Update the temperature
End

Precipitation procedure (WDs)
  Evaluate the solution quality
  Reinitialize dynamic parameters
  Global soil update on edges of the best WD (Eq. (23))
  Generate a new WD population
  Distribute the WDs randomly
End

Pseudocode 2: The Evaporation, Condensation, and Precipitation procedures.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes, and these effects occur only when the positions of the streams or rivers are very close to that of the sea. In contrast, HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry or drop soil, changing the topography of the ground by making some paths deeper and fading others; this represents the erosion and deposition processes. WCA makes no provision for soil removal from the paths, which is a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value; the temperature is updated according to the performance of the water drops. In WCA there is no temperature, and the evaporation rate is based on a ratio derived from solution quality. Another major difference is that WCA does not consider the condensation stage, one of the crucial stages of the water cycle; by contrast, HCA utilizes this stage to implement information sharing between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and solution construction all differ between the two algorithms.

Although the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil alone. In HCA, by contrast, the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a greater variety of solutions. This association is useful when the same amount of soil exists on different paths, in which case the paths' depths differentiate between the edges. Moreover, this association helps create a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, whereas choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows water drops to gain or lose velocity. This gives other water drops a chance to compete, since the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables both soil removal and deposition; the underlying idea is to improve exploration, avoid premature convergence, and avoid becoming trapped in local optima. In IWD, only indirect communication is considered, whereas both direct and indirect communication are considered in HCA. Furthermore, in HCA the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, HCA adds three critical stages (evaporation, condensation, and precipitation) to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer there are ten nodes, labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done easily using
\[ X_{\text{value}} = \sum_{L=1}^{m} n \times 10^{-L} \tag{24} \]
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations
Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node chosen in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in search space.
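The digit-path decoding of (24) can be sketched as follows (a minimal illustration in Python; function and variable names are ours, not the paper's):

```python
# Sketch of Eq. (24): decoding the nodes a water drop visited (one digit
# per layer) into a variable value in [0, 1). Illustrative only.

def decode_path(digits):
    # X_value = sum over layers L = 1..m of n_L * 10^(-L).
    return sum(n * 10 ** -(layer + 1) for layer, n in enumerate(digits))

# Example from the text: visiting nodes 2, 7, 5, 6, 2, 4 gives 0.275624.
value = decode_path([2, 7, 5, 6, 2, 4])
```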
4.2. Variables Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions all the variables may have the same domain range, whereas in others the variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

\[ V_{\text{value}} = (UB - LB) \times \text{Value} + LB \tag{25} \]
where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value will be equal to 3.17504. The values of continuous variables are expressed as real numbers. Therefore, the precision (P) of the variables' values has to be identified; this precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than is possible by simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
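A short sketch of the scaling in (25), using the worked example above (names are ours, not from the paper):

```python
# Sketch of Eq. (25): mapping a decoded value in [0, 1) onto the
# variable's domain [LB, UB]. Illustrative only.

def scale_to_domain(value, lb, ub):
    return (ub - lb) * value + lb

# Worked example from the text: 0.658752 on the domain [-10, 10].
real_value = scale_to_domain(0.658752, -10, 10)  # 3.17504
```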
Figure 8: Graph representation of continuous variables (the per-layer node labels, 0-9, are omitted here).

4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop solution by selecting the node with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | α = 2
Soil updating | PN = 0.99
Initial temperature | 50, β = 10
Maximum temperature | 100
water drop solution. Each water drop will generate a solution for all the variables of the function. Therefore, the water drop solution is composed of a set of visited nodes based on the number of variables and the precision of each variable:
\[
\text{Solutions} =
\begin{bmatrix}
\mathrm{WD}_1^{s} & 1 & 2 & \cdots & m_{N\times P} \\
\mathrm{WD}_2^{s} & 1 & 2 & \cdots & m_{N\times P} \\
\mathrm{WD}_i^{s} & 1 & 2 & \cdots & m_{N\times P} \\
\vdots & & & & \\
\mathrm{WD}_k^{s} & 1 & 2 & \cdots & m_{N\times P}
\end{bmatrix}
\tag{26}
\]
An initial solution is generated randomly for each water drop based on the function dimensionality. Then these solutions are evaluated and the best combination is selected. The selected solution is favored with less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
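One descent of the solution generator (one node chosen per layer, then per-variable decoding) can be sketched as follows. This is an illustrative simplification: a uniform random choice stands in for the soil- and depth-biased selection rule of the real HCA, and all names are ours.

```python
import random

def generate_solution(num_vars, precision, rng=None):
    """One water drop's descent: pick one node (digit 0-9) per layer."""
    rng = rng or random.Random(0)
    path = [rng.randrange(10) for _ in range(num_vars * precision)]
    # Split the visited nodes into per-variable groups and decode each
    # group to a value in [0, 1), as in Eq. (24).
    values = []
    for v in range(num_vars):
        digits = path[v * precision:(v + 1) * precision]
        values.append(sum(n * 10 ** -(i + 1) for i, n in enumerate(digits)))
    return values

solution = generate_solution(num_vars=2, precision=6)
```

Each decoded value would then be mapped onto its variable's domain with (25) before evaluating the objective function.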
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation to change the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, moreover, only to randomly selected variables of a water drop solution. For real numbers, a mutation operation can be implemented as

\[ V_{\text{new}} = 1 - (\beta \times V_{\text{old}}) \tag{27} \]

where β is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
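The mutation of (27) can be sketched as below (illustrative only; the paper applies it only to selected variables of colliding drops):

```python
import random

def mutate(v_old, rng=None):
    """Eq. (27): V_new = 1 - (beta * V_old), beta uniform in [0, 1]."""
    rng = rng or random.Random()
    beta = rng.random()
    return 1 - beta * v_old

# A large value such as 0.9 is mapped into (0.1, 1.0], while a small
# value such as 0.1 is mapped into (0.9, 1.0], flipping its magnitude.
mutated = mutate(0.9, random.Random(42))
```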
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) will deepen with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, considering the depth to equal the distance over the soil (see (8)) will not affect the outcome as supposed.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and types; in this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum number of iterations of 500 for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The function numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed. The symbol "*" represents the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using

\[ \text{Success rate} = \left(\frac{\text{Number of successful runs}}{\text{Total number of runs}}\right) \times 100. \tag{28} \]
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface text in the "Best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
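The success-rate computation of (28), together with the 0.001 acceptance tolerance, can be sketched as follows (the run values below are illustrative, not the paper's data):

```python
def success_rate(run_results, optimum, tol=1e-3):
    """Eq. (28): percentage of runs whose result is within tol of the optimum."""
    successes = sum(1 for r in run_results if abs(r - optimum) < tol)
    return 100.0 * successes / len(run_results)

# Four hypothetical runs on a function with optimum 0: three succeed.
rate = success_rate([0.0002, 0.0150, 0.0004, 0.0009], optimum=0.0)  # 75.0
```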
The results of HCA and MHCA were compared using the Wilcoxon Signed-Rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst and average values, the best values, and the average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics.

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima
(3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima
(5) | Griewank | -600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima
(7) | Levy | -10 | 10 | 2 | 0 | Many local minima
(8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima
(14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [-1.5, 4] | [-3, 4] | 2 | -1.9133 | Plate-shaped
(21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [-3, 3] | [-2, 2] | 2 | -1.0316 | Valley-shaped
(24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops
(29) | Beale | -4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [-5, 10] | [0, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other
(34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock D12 | -1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other
(40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | -4 | 5 | 4 | 0 | Other
(42) | Wood | -5 | 5 | 4 | 0 | Other
(43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA).

Number | HCA Worst | HCA Average | HCA Best | HCA * | MHCA Worst | MHCA Average | MHCA Best | MHCA *
(1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35
(2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1
(3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4
(4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75
(7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8
(8) | 1.63E-05 | 4.70E-06 | **1.35E-31** | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25
(12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | **2.02E-04** | 323.75
(13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E-05 | 6.33E-06 | **0.00E+00** | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8
(21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | **4.28E-07** | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4
(24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | **4.10E-08** | 380.55
(25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55
(26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35
(27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6
(28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1
(29) | 9.25E-05 | 5.25E-05 | **3.41E-06** | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75
(30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341
(34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E-06 | 3.67E-06 | **6.63E-08** | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6
(38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45
(39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5
(40) | 1.94E-01 | 7.02E-02 | **1.64E-03** | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806
(41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | **1.96E-03** | 840.85
(42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | **1.00E-08** | 694.4
(43) | 1.05E+00 | 3.88E-01 | **1.47E-05** | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables (Hypersphere function).

 | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 28.186 | 40.832 | 107.16 | 159.62
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6. The symbol "*" represents the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon Signed-Rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values alongside the W-values. The resulting P/W-values are reported in Table 8.
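For illustration, the W-value of the Wilcoxon signed-rank test reported in Tables 8 and 10 can be computed as in the following pure-Python sketch (the paired sample data is illustrative, not the paper's):

```python
def wilcoxon_w(x, y):
    """W statistic of the Wilcoxon signed-rank test for paired samples."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):  # average ranks over ties in |difference|
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j):
            ranks[order[k]] = (i + j + 1) / 2  # ranks are 1-based
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Toy paired data: W = 2 (the smaller of the two signed-rank sums).
w = wilcoxon_w([125, 115, 130, 140, 140], [110, 122, 125, 120, 140])
```

The test is significant when W is at or below the critical value for the sample size, which is how the "W-value (at critical value of w)" column in Table 8 is read.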
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows that there is a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations.

Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA * | HCA Worst | HCA Avg | HCA Best | HCA *
De Jong | 2 | ±2.048 | 3905.93 | 3906.12 | 3905.94 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | **3.00E+00** | 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | **6.63E-08** | 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | **0.00E+00** | 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E-04 | 2.12E-04 | **3.23E-07** | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E-03 | 6.01E-03 | **1.34E-07** | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879
Shaffer | 2 | ±100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 8942 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3 | 3.0000 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | ±10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 3105 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | ±5 | 0 | 3.81E-05 | 1.58E-06 | **1.30E-10** | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754
Powell quartic | 4 | ±5 | 0 | 2.87E-09 | 6.09E-10 | **1.12E-11** | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864
Mean (*) | | | | | | | 7000.6 | | | | 463.8
Table 7: Comparison between HCA and other algorithms in terms of the average number of iterations.

Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | **368** | 391.45
(2) | Bohachevsky | 2 | 370 | - | **264.9** | 288.25
(3) | Zakharov | 2 | 620 | **195** | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | **282.55**
(5) | Easom | 2 | 1504 | 1284 | **354.6** | 370.35
(6) | Branin | 2 | 620 | **245** | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | **231** | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | **327.5**
(9) | Sphere | 3 | 750 | 338 | 371.3 | **330.45**
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithm pair | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function over the number of iterations. The red line, "Local", represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to -1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function.
As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables. The HCA minimized the function after 919 iterations.
As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function; convergence at 383; function value versus number of iterations; global best and local best curves).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point; approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results in some cases better than, and in others not significantly worse than, other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at 480; function value versus number of iterations; global best and local best curves).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at 475; function value versus number of iterations; global best and local best curves).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something a bit different from what is available already. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations for the test functions used, which have been taken from [4, 24].
Journal of Optimization
Table 11: The mathematical formulations.

- Ackley: $f(x) = -a\exp\bigl(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d}x_i^2}\bigr) - \exp\bigl(\tfrac{1}{d}\sum_{i=1}^{d}\cos(cx_i)\bigr) + a + \exp(1)$, where $a = 20$, $b = 0.2$, and $c = 2\pi$
- Cross-in-Tray: $f(x) = -0.0001\bigl(\bigl|\sin(x_1)\sin(x_2)\exp\bigl(\bigl|100 - \sqrt{x_1^2 + x_2^2}/\pi\bigr|\bigr)\bigr| + 1\bigr)^{0.1}$
- Drop-Wave: $f(x) = -\dfrac{1 + \cos\bigl(12\sqrt{x_1^2 + x_2^2}\bigr)}{0.5(x_1^2 + x_2^2) + 2}$
- Gramacy & Lee (2012): $f(x) = \dfrac{\sin(10\pi x)}{2x} + (x - 1)^4$
- Griewank: $f(x) = \sum_{i=1}^{d}\dfrac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr) + 1$
- Holder Table: $f(x) = -\bigl|\sin(x_1)\cos(x_2)\exp\bigl(\bigl|1 - \sqrt{x_1^2 + x_2^2}/\pi\bigr|\bigr)\bigr|$
- Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2[1 + 10\sin^2(\pi w_i + 1)] + (w_d - 1)^2[1 + \sin^2(2\pi w_d)]$, where $w_i = 1 + \tfrac{x_i - 1}{4}$, $\forall i = 1, \dots, d$
- Levy N.13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2[1 + \sin^2(3\pi x_2)] + (x_2 - 1)^2[1 + \sin^2(2\pi x_2)]$
- Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}[x_i^2 - 10\cos(2\pi x_i)]$
- Schaffer N.2: $f(x) = 0.5 + \dfrac{\sin^2(x_1^2 - x_2^2) - 0.5}{[1 + 0.001(x_1^2 + x_2^2)]^2}$
- Schaffer N.4: $f(x) = 0.5 + \dfrac{\cos^2(\sin(|x_1^2 - x_2^2|)) - 0.5}{[1 + 0.001(x_1^2 + x_2^2)]^2}$
- Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d} x_i\sin\bigl(\sqrt{|x_i|}\bigr)$
- Shubert: $f(x) = \bigl(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\bigr)\bigl(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\bigr)$
- Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$
- Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$
- Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$
- Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$
- Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$
- Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$
- McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$
- Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \bigl(\sum_{i=1}^{d} 0.5 i x_i\bigr)^2 + \bigl(\sum_{i=1}^{d} 0.5 i x_i\bigr)^4$
- Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \tfrac{x_1^6}{6} + x_1 x_2 + x_2^2$
- Six-Hump Camel: $f(x) = \bigl(4 - 2.1x_1^2 + \tfrac{x_1^4}{3}\bigr)x_1^2 + x_1 x_2 + (-4 + 4x_2^2)x_2^2$
- Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i(2x_i^2 - x_{i-1})^2$
- Rosenbrock: $f(x) = \sum_{i=1}^{d-1}[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$
- Shekel's Foxholes (De Jong N.5): $f(x) = \bigl(0.002 + \sum_{i=1}^{25}\dfrac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\bigr)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$
- Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\bigl(-(x_1 - \pi)^2 - (x_2 - \pi)^2\bigr)$
- Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\bigl(\tfrac{i x_i^2}{\pi}\bigr)$, where $m = 10$
- Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$
- Branin: $f(x) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1 - t)\cos(x_1) + s$, where $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$
- Goldstein-Price I: $f(x) = [1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)]$
- Goldstein-Price II: $f(x) = \exp\bigl(\tfrac{1}{2}(x_1^2 + x_2^2 - 25)^2\bigr) + \sin^4(4x_1 - 3x_2) + \tfrac{1}{2}(2x_1 + x_2 - 10)^2$
- Styblinski-Tang: $f(x) = \tfrac{1}{2}\sum_{i=1}^{d}(x_i^4 - 16x_i^2 + 5x_i)$
- Gramacy & Lee (2008): $f(x) = x_1\exp(-x_1^2 - x_2^2)$
- Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \bigl(\tfrac{x_1 + x_2 - 10}{3}\bigr)^2$
- Easton and Fenton: $f(x) = \bigl(12 + x_1^2 + \tfrac{1 + x_2^2}{x_1^2} + \tfrac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\bigr)\bigl(\tfrac{1}{10}\bigr)$
- Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\bigl(-\sum_{j=1}^{3}A_{ij}(x_j - P_{ij})^2\bigr)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$
- Powell: $f(x) = \sum_{i=1}^{d/4}[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4]$
- Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1((x_2 - 1)^2 + (x_4 - 1)^2) + 19.8(x_2 - 1)(x_4 - 1)$
- De Jong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
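Several of the formulations in Table 11 translate directly into code. As an illustration, the following sketch (Python; function names are our own, not from the paper) implements Ackley, Rastrigin, and Rosenbrock, each of which has its global minimum value of 0 at the point noted in the comments:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Ackley function; global minimum f = 0 at x = (0, ..., 0)."""
    d = len(x)
    s1 = sum(xi ** 2 for xi in x) / d
    s2 = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.exp(1)

def rastrigin(x):
    """Rastrigin function; global minimum f = 0 at x = (0, ..., 0)."""
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Rosenbrock function; global minimum f = 0 at x = (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

Such direct implementations make it straightforward to plug any of the benchmarks into an optimizer's fitness-evaluation step.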
[Plot: function value (0-35) versus iteration number (0-1000); the Sphere (6v) function converges at iteration 919; global-best solution curve shown.]
Figure 13: Convergence of HCA on the Hypersphere function with six variables.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References

[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm – a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems (2nd IPROMS Virtual International Conference), Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method – a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Equation (9) defines how the water drop updates its velocity after each movement. Each part of the equation has an effect on the new velocity of the water drop, where $V^{\text{WD}}_t$ is the current water drop velocity and $K$ is a uniformly distributed random number in $[0, 1]$ that refers to the roughness coefficient. The roughness coefficient represents factors that may cause the water drop velocity to decrease; owing to the difficulty of measuring the exact effect of these factors, some randomness is incorporated into the velocity.

The second term of the expression in (9) reflects the relationship between the amount of soil existing on a path and the velocity of a water drop: velocity increases when the drop traverses paths with less soil. Alpha ($\alpha$) is a relative influence coefficient that emphasizes this part of the velocity update equation. The third term represents the relationship between the amount of soil that a water drop is currently carrying and its velocity: velocity increases as the amount of carried soil decreases. This gives weak water drops a chance to increase their velocity, as they do not hold much soil. The fourth term refers to the inverse ratio of a water drop's fitness, with velocity increasing with higher fitness. The final term indicates that a water drop will be slower on a deep path than on a shallow path.
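The five influences described above can be combined into an update rule of the following shape. This is a minimal Python sketch for illustration only: the coefficient names (`alpha`, `beta`, `gamma`, `delta`) and the exact scaling of each inverse relationship are assumptions, not the paper's Equation (9).

```python
import random

def update_velocity(v_t, path_soil, carried_soil, fitness, depth,
                    alpha=2.0, beta=0.01, gamma=1.0, delta=1.0):
    """Illustrative five-term velocity update for a water drop.

    v_t          : current velocity of the water drop
    path_soil    : soil on the chosen edge (more soil -> slower)
    carried_soil : soil the drop is carrying (heavier load -> slower)
    fitness      : quality of the drop's current solution (better -> faster)
    depth        : depth of the chosen path (deeper -> slower)
    """
    k = random.uniform(0.0, 1.0)           # roughness coefficient K in [0, 1]
    v = k * v_t                            # term 1: damped current velocity
    v += alpha / (1.0 + path_soil)         # term 2: less soil on path -> faster
    v += beta / (1.0 + carried_soil)       # term 3: lighter load -> faster
    v += gamma * fitness                   # term 4: higher fitness -> faster
    v += delta / (1.0 + depth)             # term 5: shallower path -> faster
    return v
```

With the same random roughness draw, a drop on a soil-laden edge ends up slower than one on a clean edge, matching the qualitative behavior described above.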
3.3.2. Soil Update. Next, the amount of soil existing on the path and the depth of that path are updated. A water drop can remove (or add) soil from (or to) a path while moving, based on its velocity. This is expressed by

$$\text{Soil}(i,j) = \begin{cases} [PN \ast \text{Soil}(i,j)] - \Delta\text{Soil}(i,j) - \sqrt{\dfrac{1}{\text{Depth}(i,j)}}, & \text{if } V^{\text{WD}} \ge \text{Avg}(\text{all } V^{\text{WDs}}) \text{ (erosion)} \\ [PN \ast \text{Soil}(i,j)] + \Delta\text{Soil}(i,j) + \sqrt{\dfrac{1}{\text{Depth}(i,j)}}, & \text{otherwise (deposition).} \end{cases} \quad (10)$$

$PN$ represents a coefficient (i.e., a sediment transport rate, or gradation, coefficient) that may affect the reduction in the amount of soil. The soil can be removed only if the current water drop velocity is greater than the average over all water drops; otherwise, an amount of soil is added (soil deposition). Consequently, the water drop is able to transfer an amount of soil from one place to another, usually from fast to slow parts of the path. The increasing soil amount on some paths favors the exploration of other paths during the search process and avoids entrapment in local optimal solutions. The amount of soil existing between node $i$ and node $j$ is calculated and changed using

$$\Delta\text{Soil}(i,j) = \frac{1}{\text{time}^{\text{WD}}_{ij}}, \quad (11)$$

such that

$$\text{time}^{\text{WD}}_{ij} = \frac{\text{Distance}(i,j)}{V^{\text{WD}}_{t+1}}. \quad (12)$$

Equation (11) shows that the amount of soil being removed is inversely proportional to time. Since time equals distance divided by velocity, time is proportional to path length and inversely proportional to velocity. Furthermore, the amount of soil being removed or added is inversely proportional to the depth of the path, because shallow soils are easier to remove than deep soils. The amount of soil being removed therefore decreases as the path depth increases. This factor facilitates exploration of new paths, because less soil is removed from deep paths, as they have been used many times before. This may extend the time needed to search for and reach optimal solutions. The depth of the path needs to be updated when the amount of soil existing on the path changes, using (8).
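Equations (10)-(12) can be sketched as follows (Python; the value of `pn` and the way the population-average velocity is supplied are illustrative assumptions):

```python
import math

def soil_update(soil, depth, distance, v_new, avg_velocity, pn=0.99):
    """Erosion/deposition on edge (i, j) following Eqs. (10)-(12).

    A fast drop (v_new >= population-average velocity) erodes the edge;
    a slow drop deposits soil instead.
    """
    travel_time = distance / v_new        # Eq. (12): time = distance / velocity
    delta_soil = 1.0 / travel_time        # Eq. (11): faster drop -> more change
    depth_term = math.sqrt(1.0 / depth)   # shallower path -> larger change
    if v_new >= avg_velocity:             # erosion
        return pn * soil - delta_soil - depth_term
    return pn * soil + delta_soil + depth_term  # deposition
```

For example, a drop twice as fast as average erodes the edge, while the same drop at half the average velocity deposits soil onto it.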
3.3.3. Soil Transportation Update. In the HCA, the amount of soil a water drop carries reflects its solution quality. This is achieved by associating the quality of the water drop's solution with its carrying-soil value; Figure 7 illustrates that a water drop with more carrying soil has a better solution.

To simulate this association, the amount of soil that the water drop is carrying is updated with a portion of the soil removed from the path, divided by the quality of the last solution found by that water drop. Therefore, a water drop with a better solution will carry more soil. This can be expressed as follows:

$$\text{Soil}^{\text{WD}} = \text{Soil}^{\text{WD}} + \frac{\Delta\text{Soil}(i,j)}{\psi^{\text{WD}}}. \quad (13)$$

The idea behind this step is to let a water drop preserve its previous fitness value. Later, the carrying-soil value is used in updating the velocity of the water drops.
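Equation (13) is a one-line update (Python sketch; the parameter names are ours):

```python
def update_carrying_soil(carrying_soil, delta_soil, quality):
    """Eq. (13): add the soil eroded from the path, scaled by the drop's
    solution quality (psi). For a minimization problem, a smaller (better)
    quality value lets the drop accumulate more soil."""
    return carrying_soil + delta_soil / quality
```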
3.3.4. Temperature Update. The final step of this stage is updating the temperature. The new temperature value depends on the solution quality generated by the water drops in the previous iterations. The temperature is updated as follows:

$$\text{Temp}(t+1) = \text{Temp}(t) + \Delta\text{Temp}, \quad (14)$$

where

$$\Delta\text{Temp} = \begin{cases} \beta \ast \dfrac{\text{Temp}(t)}{\Delta D}, & \Delta D > 0 \\ \dfrac{\text{Temp}(t)}{10}, & \text{otherwise,} \end{cases} \quad (15)$$
[Diagram: water drops ordered by the amount of soil they carry; more carried soil indicates a better solution.]
Figure 7: Relationship between the amount of carrying soil and its quality.
and where the coefficient $\beta$ is determined based on the problem. The difference $\Delta D$ is calculated using

$$\Delta D = \text{MaxValue} - \text{MinValue}, \quad (16)$$

such that

$$\text{MaxValue} = \max[\text{Solution Quality}(\text{WDs})], \qquad \text{MinValue} = \min[\text{Solution Quality}(\text{WDs})]. \quad (17)$$

According to (16), the increase in temperature is governed by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, so the temperature will increase more. In other words, there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats until the temperature reaches a certain value; once the temperature is sufficient, the evaporation stage begins.
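The temperature schedule of Eqs. (14)-(17) can be written compactly as follows (Python sketch; the value of beta is problem-dependent and chosen here only for illustration):

```python
def update_temperature(temp, qualities, beta=10.0):
    """Raise the temperature according to Eqs. (14)-(17).

    qualities: the solution quality of every water drop this iteration
    (minimization: smaller is better). A small spread between the best
    and worst drop raises the temperature faster, so the evaporation
    stage is reached sooner when the population stagnates.
    """
    delta_d = max(qualities) - min(qualities)   # Eqs. (16)-(17)
    if delta_d > 0:
        delta_temp = beta * temp / delta_d      # Eq. (15), first case
    else:
        delta_temp = temp / 10.0                # Eq. (15), second case
    return temp + delta_temp                    # Eq. (14)
```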
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) once the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

$$\text{ER} = \text{Random}(1, N), \quad (18)$$

where ER is the evaporation rate and $N$ is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
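A roulette-wheel selection of the evaporating drops might look like the following sketch (Python; the fitness-to-wheel-slice conversion `1/quality`, appropriate for minimization, is our assumption, as the paper does not spell it out):

```python
import random

def select_evaporating_drops(qualities, rng=random):
    """Pick ER distinct drops by roulette-wheel selection.

    qualities: solution quality per drop (minimization: smaller is better),
    so each drop's wheel slice is proportional to 1/quality and better
    drops are more likely to be chosen to evaporate.
    """
    n = len(qualities)
    er = rng.randint(1, n)                      # evaporation rate, Eq. (18)
    weights = [1.0 / q for q in qualities]      # better drop -> bigger slice
    total = sum(weights)
    selected = set()
    while len(selected) < er:
        spin = rng.uniform(0.0, total)          # spin the wheel
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if spin <= acc:
                selected.add(i)
                break
    return sorted(selected)
```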
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off. These operations are useful, as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior of all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been shown to improve solution quality significantly when applied in other algorithms [37].
The condensation stage is considered to be a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that it will reduce the number of iterations needed and help to reach the optimal or near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where $n$ represents the evaporation rate (the number of evaporated water drops):

$$\text{EW} = \{\text{WD}_1, \text{WD}_2, \ldots, \text{WD}_n\}. \quad (19)$$
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, which operation takes place depends on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process will occur: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

$$\text{OP}(\text{WD}_1, \text{WD}_2) = \begin{cases} \text{Bounce}(\text{WD}_1, \text{WD}_2), & \text{Similarity} < 50\% \\ \text{Merge}(\text{WD}_1, \text{WD}_2), & \text{Similarity} \ge 50\%. \end{cases} \quad (20)$$
We use the Hamming distance [43] to count the number of differing positions in two sequences of equal length, which yields a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

$$\text{WD}_1^{\text{Vel}} = \text{WD}_1^{\text{Vel}} + \text{WD}_2^{\text{Vel}}. \quad (21)$$

On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them to choose, at the flow stage, a node with a better contribution from among all the possible nodes. The information consists of the weight of each node; the weight measures a node's occurrence within a solution over the water drop's solution quality. The idea behind sharing these details is
to emphasize the best nodes found up to that point. With this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

$$\text{Weight}(\text{node}) = \frac{\text{Node}_{\text{occurrence}}}{\text{WD}_{\text{sol}}}, \quad (22)$$

where $\text{WD}_{\text{sol}}$ is the water drop's solution quality. Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution in every iteration.
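The similarity test and the two collision outcomes (Eqs. (20)-(22)) can be sketched as follows. This is an illustrative Python fragment: solutions are assumed to be equal-length node sequences, and the dictionary structure of a drop is our own choice, not the paper's.

```python
def hamming_similarity(sol_a, sol_b):
    """Fraction of positions at which two equal-length solutions agree
    (1 - normalized Hamming distance)."""
    matches = sum(1 for a, b in zip(sol_a, sol_b) if a == b)
    return matches / len(sol_a)

def collide(drop_a, drop_b, node_weights):
    """Merge (coalescence) or bounce off, following Eq. (20).

    Each drop is a dict with 'solution', 'velocity', and 'quality'.
    On a merge, the collector absorbs the other drop's velocity (Eq. (21));
    on a bounce, both drops publish node weights (Eq. (22))."""
    if hamming_similarity(drop_a['solution'], drop_b['solution']) >= 0.5:
        drop_a['velocity'] += drop_b['velocity']   # Eq. (21): collector wins
        return 'merge'
    for drop in (drop_a, drop_b):                  # bounce: share node weights
        for node in drop['solution']:
            node_weights[node] = node_weights.get(node, 0.0) + 1.0 / drop['quality']
    return 'bounce'
```

Using the Hamming example from the text, the solutions 12468 and 13458 agree in three of five positions (similarity 0.6), so those two drops would merge.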
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of the paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

$$\text{Soil}(i,j) = 0.9 \ast \text{Soil}(i,j), \quad \forall (i,j) \in \text{Best}_{\text{WD}}. \quad (23)$$

The idea behind this is to favor these edges over the other edges in the next cycle.
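The reinforcement of Eq. (23) amounts to one pass over the best drop's edges (Python sketch; the edge-soil map structure is an illustrative assumption):

```python
def reinforce_best_path(soil, best_edges, factor=0.9):
    """Eq. (23): scale down the soil on every edge of the best solution,
    making those edges more attractive in the next cycle. Edges not on
    the best path are left unchanged."""
    for edge in best_edges:
        soil[edge] = factor * soil[edge]
    return soil
```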
3.7. Summarization of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:

(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, reducing the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.

(2) The flow of the water drops is controlled by path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.

(3) Water drops are able to gain or lose a portion of their velocity. This idea supports the competition between water drops and prevents one drop from sweeping aside the rest, because a high-velocity water
HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
        Cycle = Cycle + 1
        { Flow stage }
        Repeat
            (a) Iteration = Iteration + 1
            (b) For each water drop:
                (1) Choose next node (Eqs. (5)-(8))
                (2) Update velocity (Eq. (9))
                (3) Update soil and depth (Eqs. (10)-(11))
                (4) Update carrying soil (Eq. (13))
            (c) End for
            (d) Calculate the solution fitness
            (e) Update local optimal solution
            (f) Update temperature (Eqs. (14)-(17))
        Until (Temperature = Evaporation temperature)
        Evaporation (WDs)
        Condensation (WDs)
        Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.

(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).

(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique that helps to perform exploitation of the good solutions. Further, this stage can be used to improve the quality of the obtained solutions and reduce the number of iterations.

(6) The evaporation process is a selection technique that helps the algorithm escape from local optima solutions. Evaporation happens when there are no improvements in the quality of the obtained solutions.

(7) The precipitation stage is used to reinitialize variables and redistribute the WDs in the search space to generate new solutions.
Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify the evaporation rate (Eq. (18))
    Select WDs to evaporate
End

Condensation procedure (WDs)
    Information exchange (WDs) (Eqs. (20)-(22))
    Identify the collector (WD)
    Update global solution
    Update the temperature
End

Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (belongs to best WDs) (Eq. (23))
    Generate a new WDs population
    Distribute the WDs randomly
End

Pseudocode 2: The Evaporation, Condensation, and Precipitation pseudocode.
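Pseudocodes 1 and 2 can be summarized as a high-level driver loop. The sketch below is illustrative only: the stage callbacks, parameter names, and the simple temperature-update rule are our assumptions for exposition, not the authors' implementation.

```python
import random

# Illustrative skeleton of the HCA driver loop (cf. Pseudocodes 1 and 2).
# The stage functions and the temperature trigger are assumptions.
def hca(flow, evaporate, condense, precipitate, n_drops=10,
        max_iter=500, init_temp=50, max_temp=100):
    drops = [{"solution": None, "velocity": 100, "carried_soil": 1}
             for _ in range(n_drops)]
    temperature = init_temp
    best = None
    for _ in range(max_iter):
        # Flow stage: every water drop constructs a candidate solution.
        for drop in drops:
            drop["solution"] = flow(drop)
        iteration_best = min(d["solution"] for d in drops)
        best = iteration_best if best is None else min(best, iteration_best)
        # Heat up; the paper ties this to the lack of solution improvements.
        temperature = min(max_temp, temperature + random.uniform(0, 5))
        if temperature >= max_temp:          # evaporation trigger
            vapor = evaporate(drops)         # select drops to evaporate
            condense(vapor)                  # direct information sharing
            drops = precipitate(vapor)       # redistribute new drops
            temperature = init_temp
    return best
```

The `flow`, `evaporate`, `condense`, and `precipitate` callbacks would carry the soil, depth, and velocity logic described in the preceding sections.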
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of the water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea, according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry/drop soil that changes the topography of the ground by making some paths deeper or fading others. This represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is considered a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value. The temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that there is no consideration for the condensation stage in WCA, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information-sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and the solution construction differ in the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, in HCA three critical stages (evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup

In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N x P) layers. In each layer, there are ten nodes labelled from zero to nine. Therefore, there are (10 x N x P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done easily using

X_value = sum_{L=1}^{m} n_L x 10^(-L),  (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

| Criteria | IWD | WCA | HCA |
|---|---|---|---|
| Flow stage | Moving water drops | Changing streams' positions | Moving water drops |
| Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth |
| Velocity | Always increases | N/A | Increases and decreases |
| Soil update | Removal | N/A | Removal and deposition |
| Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality |
| Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations |
| Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number |
| Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector |
| Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil |
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n_L is the node selected in layer L. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
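Equation (24) can be checked with a few lines of code. The helper below is a direct transcription of the decoding rule (the function name is ours):

```python
def decode(nodes):
    """Decode the node (0-9) chosen in each layer into a value in [0, 1),
    following Eq. (24): X = sum over layers L of n_L * 10^(-L)."""
    return sum(n * 10 ** -(layer + 1) for layer, n in enumerate(nodes))

# The worked example from the text: visiting nodes 2, 7, 5, 6, 2, 4
# yields the variable value 0.275624.
value = decode([2, 7, 5, 6, 2, 4])
```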
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions, all the variables may have the same domain range, whereas in others the variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

V_value = (UB - LB) x Value + LB,  (25)
where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value will be 3.17504. The values of continuous variables are expressed as real numbers. Therefore, the precision (P) of the variables' values has to be identified; this precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than would be possible by simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
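The scaling of Eq. (25) is a one-line affine map; the sketch below reproduces the worked example from the text (the function name is ours):

```python
def scale(value, lb, ub):
    """Map a decoded value in [0, 1) onto the variable's domain, Eq. (25):
    V = (UB - LB) * value + LB."""
    return (ub - lb) * value + lb

# Worked example from the text: 0.658752 on the domain [-10, 10]
x = scale(0.658752, -10, 10)   # 3.17504
```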
[Figure 8: Graph representation of continuous variables.]

4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
water drop solution. Each water drop generates a solution to all the variables of the function. Therefore, the water drop solution is composed of a set of visited nodes based on the number of variables and the precision of each variable.

Table 2: HCA parameters and their values.

| Parameter | Value |
|---|---|
| Number of water drops | 10 |
| Maximum number of iterations | 500/1000 |
| Initial soil on each edge | 10000 |
| Initial velocity | 100 |
| Initial depth | Edge length / soil on that edge |
| Initial carrying soil | 1 |
| Velocity updating | alpha = 2 |
| Soil updating | PN = 0.99 |
| Initial temperature | 50; beta = 10 |
| Maximum temperature | 100 |
Solutions =
[ WD_1: s_1 s_2 ... s_(N x P) ]
[ WD_2: s_1 s_2 ... s_(N x P) ]
[ WD_i: s_1 s_2 ... s_(N x P) ]
[  ...                        ]
[ WD_k: s_1 s_2 ... s_(N x P) ]   (26)
An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated and the best combination is selected. The selected solution is favored with less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
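The random initialization described above can be sketched as follows. Both helper names are ours, and the soil-favoring step is only indicated by the returned index of the best initial solution:

```python
import random

def init_population(k, n_vars, precision):
    """Generate k random water-drop solutions, one row per drop (cf. (26)).
    Each row holds N*P digits: one node (0-9) chosen per layer."""
    return [[random.randint(0, 9) for _ in range(n_vars * precision)]
            for _ in range(k)]

def best_index(solutions, objective):
    """Index of the best initial solution; in the HCA its edges would then
    receive less soil before the flow stage starts."""
    return min(range(len(solutions)), key=lambda i: objective(solutions[i]))
```

For instance, `init_population(10, 2, 6)` produces ten rows of twelve digits each, matching two variables at six-digit precision.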
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops, and only to randomly selected variables of a water drop's solution. For real numbers, a mutation operation can be implemented as

V_new = 1 - (beta x V_old),  (27)

where beta is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of a variable: if the value is large, it becomes small, and vice versa.
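Equation (27) can be written directly in code; the ranges in the comment follow from the uniform draw of beta:

```python
import random

def mutate(v_old):
    """Local mutation of Eq. (27): V_new = 1 - (beta * V_old),
    with beta drawn uniformly from [0, 1]."""
    beta = random.uniform(0, 1)
    return 1 - beta * v_old

# A large value tends to be flipped to a small one and vice versa:
# mutate(0.9) lies in [0.1, 1.0], while mutate(0.1) lies in [0.9, 1.0].
```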
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, setting the depth equal to the distance over the soil (see (8)) will not affect the outcome as supposed.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and different types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum of 500 iterations for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, along with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested and was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using

Success rate = (Number of successful runs / Total number of runs) x 100.  (28)
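Equation (28), combined with the 0.001 acceptance threshold stated below, can be expressed in a few lines (the function name and argument layout are ours):

```python
def success_rate(results, optimum, tol=1e-3):
    """Eq. (28): percentage of runs whose obtained result lies within
    `tol` of the known optimum (the acceptance criterion used here)."""
    hits = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * hits / len(results)

# Two of these four hypothetical runs fall within 0.001 of the optimum 0:
rate = success_rate([0.0004, 0.002, 0.0001, 0.5], 0.0)   # 50.0
```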
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved 100% for all of the functions except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "Best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, best, and average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics.

| Number | Function name | Lower bound | Upper bound | D | f(x*) | Type |
|---|---|---|---|---|---|---|
| (1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima |
| (2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima |
| (3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima |
| (4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima |
| (5) | Griewank | -600 | 600 | 2 | 0 | Many local minima |
| (6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima |
| (7) | Levy | -10 | 10 | 2 | 0 | Many local minima |
| (8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima |
| (9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima |
| (10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima |
| (11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima |
| (12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima |
| (13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima |
| (14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped |
| (15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped |
| (16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped |
| (17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped |
| (18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped |
| (19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped |
| (20) | McCormick | [-1.5, -3] | [4, 4] | 2 | -1.9133 | Plate-shaped |
| (21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped |
| (22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped |
| (23) | Six-Hump Camel | [-3, -2] | [3, 2] | 2 | -1.0316 | Valley-shaped |
| (24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped |
| (25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped |
| (26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops |
| (27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops |
| (28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops |
| (29) | Beale | -4.5 | 4.5 | 2 | 0 | Other |
| (30) | Branin | [-5, 0] | [10, 15] | 2 | 0.39788 | Other |
| (31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other |
| (32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other |
| (33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other |
| (34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other |
| (35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other |
| (36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other |
| (37) | Rosenbrock | -1.2 | 1.2 | 2 | 0 | Other |
| (38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other |
| (39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other |
| (40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other |
| (41) | Powell | -4 | 5 | 4 | 0 | Other |
| (42) | Wood | -5 | 5 | 4 | 0 | Other |
| (43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other |
| (44) | De Jong | -2048 | 2048 | 2 | 3905.93 | Maximize function |
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). "Iter." denotes the average number of iterations needed to reach the global solution; boldface marks the better "Best" value where HCA and MHCA differ.

| Number | HCA Worst | HCA Average | HCA Best | HCA Iter. | MHCA Worst | MHCA Average | MHCA Best | MHCA Iter. |
|---|---|---|---|---|---|---|---|---|
| (1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35 |
| (2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1 |
| (3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4 |
| (4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95 |
| (5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8 |
| (6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75 |
| (7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8 |
| (8) | 1.63E-05 | 4.70E-06 | **1.35E-31** | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3 |
| (9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9 |
| (10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85 |
| (11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25 |
| (12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | **2.02E-04** | 323.75 |
| (13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45 |
| (14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25 |
| (15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291 |
| (16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6 |
| (17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6 |
| (18) | 2.03E-05 | 6.33E-06 | **0.00E+00** | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5 |
| (19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55 |
| (20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8 |
| (21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | **4.28E-07** | 242.05 |
| (22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9 |
| (23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4 |
| (24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | **4.10E-08** | 380.55 |
| (25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55 |
| (26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35 |
| (27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6 |
| (28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1 |
| (29) | 9.25E-05 | 5.25E-05 | **3.41E-06** | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75 |
| (30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9 |
| (31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8 |
| (32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95 |
| (33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341 |
| (34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65 |
| (35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1 |
| (36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8 |
| (37) | 9.22E-06 | 3.67E-06 | **6.63E-08** | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6 |
| (38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45 |
| (39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5 |
| (40) | 1.94E-01 | 7.02E-02 | **1.64E-03** | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806 |
| (41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | **1.96E-03** | 840.85 |
| (42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | **1.00E-08** | 694.4 |
| (43) | 1.05E+00 | 3.88E-01 | **1.47E-05** | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2 |
| (44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85 |
| Mean | | | | 378.45 | | | | 361.30 |
Table 5: Average execution time per number of variables (Hypersphere function).

| | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations) |
|---|---|---|---|---|
| Average execution time (s) | 2.8186 | 4.0832 | 10.716 | 15.962 |
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, where the number of layers in the representation increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain because their working mechanism depends on the distribution of the entities in a multidimensional search space. If an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution. Another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, together with the average number of iterations for each algorithm.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of the differences between the results, and we also applied the T-test to obtain P values along with the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where boldface numbers indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Despite there being no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. "Iter." denotes the average number of iterations; boldface marks the better "Best" value.

| Function name | D | Bound | Best result | WCA Worst | WCA Avg. | WCA Best | WCA Iter. | HCA Worst | HCA Avg. | HCA Best | HCA Iter. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| De Jong | 2 | +/-2048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8 |
| Goldstein & Price I | 2 | +/-2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | **3.00E+00** | 345.8 |
| Branin | 2 | +/-2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9 |
| Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 262.1 |
| Rosenbrock | 2 | +/-1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | **6.63E-08** | 370.9 |
| Rosenbrock | 2 | +/-10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | **0.00E+00** | 350.6 |
| Rosenbrock | 4 | +/-1.2 | 0 | 7.98E-04 | 2.12E-04 | **3.23E-07** | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5 |
| Hypersphere | 6 | +/-5.12 | 0 | 9.22E-03 | 6.01E-03 | **1.34E-07** | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879 |
| Shaffer | 2 | +/-100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 8942 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 284.1 |
| Goldstein & Price I | 2 | +/-5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8 |
| Goldstein & Price II | 2 | +/-5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5 |
| Six-Hump Camelback | 2 | +/-10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 3105 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2 |
| Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6 |
| Wood | 4 | +/-5 | 0 | 3.81E-05 | 1.58E-06 | **1.30E-10** | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754 |
| Powell quartic | 4 | +/-5 | 0 | 2.87E-09 | 6.09E-10 | **1.12E-11** | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864 |
| Mean (Iter.) | | | | | | | 7000.6 | | | | 463.8 |
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

| Number | Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| (1) | De Jong | 2 | +/-2048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8 |
| (2) | Goldstein & Price I | 2 | +/-2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8 |
| (3) | Branin | 2 | +/-2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9 |
| (4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05 |
| (5a) | Rosenbrock | 2 | +/-1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9 |
| (5b) | Rosenbrock | 2 | +/-10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55 |
| (6) | Rosenbrock | 4 | +/-1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5 |
| (7) | Hypersphere | 6 | +/-5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879 |
| (8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1 |
| Mean | | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8 |
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

| Algorithm versus HCA | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of w) | Significant at P <= 0.05? |
|---|---|---|---|---|
| CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No |
| ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes |
| GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes |
| BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes |
| GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No |
| WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No |
| ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No |
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations; boldface indicates the lowest value).

| Number | Function name | D | CGA | ECTS | HCA | MHCA |
|---|---|---|---|---|---|---|
| (1) | Shubert | 2 | 575 | - | **368** | 391.45 |
| (2) | Bohachevsky | 2 | 370 | - | **264.9** | 288.25 |
| (3) | Zakharov | 2 | 620 | **195** | 328.15 | 242.05 |
| (4) | Rosenbrock | 2 | 960 | 480 | 350.55 | **282.55** |
| (5) | Easom | 2 | 1504 | 1284 | **354.6** | 370.35 |
| (6) | Branin | 2 | 620 | **245** | 379.9 | 393.75 |
| (7) | Goldstein-Price I | 2 | 410 | **231** | 345.8 | 409.9 |
| (8) | Hartmann | 3 | 582 | 548 | 392.05 | **327.5** |
| (9) | Sphere | 3 | 750 | 338 | 371.3 | **330.45** |
| Mean | | | 710.1 | 474.4 | 350.6 | 337.4 |
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

| Comparison | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of w) | Significant at P <= 0.05? |
|---|---|---|---|---|
| CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes |
| CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes |
| ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No |
| ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No |
| HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No |
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to -1 at (X1 = pi, X2 = pi). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed for HCAon the Hypersphere function with six variables The HCAminimized the function after 919 iterations
As shown in Figure 13 the HCA required more iterationsto minimize functions with more than two variables becauseof the increase in the solution search space
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) has been presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Journal of Optimization 21
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function convergence at iteration 383; global-best and local-best function values plotted over 500 iterations).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point. Approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information sharing mechanisms for effective exploration and exploitation, as well as demonstrate performance that is at least on par with related algorithms in the field. In particular, it should be shown to offer something a bit different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Function name | Mathematical formulation

Ackley: f(x) = −a exp(−b √((1/d) Σ_{i=1}^{d} x_i²)) − exp((1/d) Σ_{i=1}^{d} cos(c x_i)) + a + exp(1), with a = 20, b = 0.2, and c = 2π
Cross-in-Tray: f(x) = −0.0001 (|sin(x1) sin(x2) exp(|100 − √(x1² + x2²)/π|)| + 1)^0.1
Drop-Wave: f(x) = −(1 + cos(12 √(x1² + x2²))) / (0.5 (x1² + x2²) + 2)
Gramacy & Lee (2012): f(x) = sin(10πx)/(2x) + (x − 1)⁴
Griewank: f(x) = Σ_{i=1}^{d} x_i²/4000 − Π_{i=1}^{d} cos(x_i/√i) + 1
Holder Table: f(x) = −|sin(x1) cos(x2) exp(|1 − √(x1² + x2²)/π|)|
Levy: f(x) = sin²(πw1) + Σ_{i=1}^{d−1} (w_i − 1)² [1 + 10 sin²(πw_i + 1)] + (w_d − 1)² [1 + sin²(2πw_d)], where w_i = 1 + (x_i − 1)/4 for all i = 1, …, d
Levy N.13: f(x) = sin²(3πx1) + (x1 − 1)² [1 + sin²(3πx2)] + (x2 − 1)² [1 + sin²(2πx2)]
Rastrigin: f(x) = 10d + Σ_{i=1}^{d} [x_i² − 10 cos(2πx_i)]
Schaffer N.2: f(x) = 0.5 + (sin²(x1² − x2²) − 0.5) / [1 + 0.001 (x1² + x2²)]²
Schaffer N.4: f(x) = 0.5 + (cos²(sin(|x1² − x2²|)) − 0.5) / [1 + 0.001 (x1² + x2²)]²
Schwefel: f(x) = 418.9829 d − Σ_{i=1}^{d} x_i sin(√|x_i|)
Shubert: f(x) = (Σ_{i=1}^{5} i cos((i + 1) x1 + i)) (Σ_{i=1}^{5} i cos((i + 1) x2 + i))
Bohachevsky: f(x) = x1² + 2x2² − 0.3 cos(3πx1) − 0.4 cos(4πx2) + 0.7
Rotated Hyper-Ellipsoid: f(x) = Σ_{i=1}^{d} Σ_{j=1}^{i} x_j²
Sphere (Hyper): f(x) = Σ_{i=1}^{d} x_i²
Sum Squares: f(x) = Σ_{i=1}^{d} i x_i²
Booth: f(x) = (x1 + 2x2 − 7)² + (2x1 + x2 − 5)²
Matyas: f(x) = 0.26 (x1² + x2²) − 0.48 x1 x2
McCormick: f(x) = sin(x1 + x2) + (x1 − x2)² − 1.5x1 + 2.5x2 + 1
Zakharov: f(x) = Σ_{i=1}^{d} x_i² + (Σ_{i=1}^{d} 0.5 i x_i)² + (Σ_{i=1}^{d} 0.5 i x_i)⁴
Three-Hump Camel: f(x) = 2x1² − 1.05x1⁴ + x1⁶/6 + x1 x2 + x2²
Six-Hump Camel: f(x) = (4 − 2.1x1² + x1⁴/3) x1² + x1 x2 + (−4 + 4x2²) x2²
Dixon-Price: f(x) = (x1 − 1)² + Σ_{i=2}^{d} i (2x_i² − x_{i−1})²
Rosenbrock: f(x) = Σ_{i=1}^{d−1} [100 (x_{i+1} − x_i²)² + (x_i − 1)²]
Shekel's Foxholes (De Jong N.5): f(x) = (0.002 + Σ_{i=1}^{25} 1/(i + (x1 − a_{1i})⁶ + (x2 − a_{2i})⁶))^{−1}, where a = (−32 −16 0 16 32 −32 ⋯ 0 16 32; −32 −32 −32 −32 −32 −16 ⋯ 32 32 32)
Easom: f(x) = −cos(x1) cos(x2) exp(−(x1 − π)² − (x2 − π)²)
Michalewicz: f(x) = −Σ_{i=1}^{d} sin(x_i) sin^{2m}(i x_i²/π), where m = 10
Beale: f(x) = (1.5 − x1 + x1 x2)² + (2.25 − x1 + x1 x2²)² + (2.625 − x1 + x1 x2³)²
Branin: f(x) = a (x2 − b x1² + c x1 − r)² + s (1 − t) cos(x1) + s, with a = 1, b = 5.1/(4π²), c = 5/π, r = 6, s = 10, and t = 1/(8π)
Goldstein-Price I: f(x) = [1 + (x1 + x2 + 1)² (19 − 14x1 + 3x1² − 14x2 + 6x1 x2 + 3x2²)] × [30 + (2x1 − 3x2)² (18 − 32x1 + 12x1² + 48x2 − 36x1 x2 + 27x2²)]
Goldstein-Price II: f(x) = exp((1/2)(x1² + x2² − 25)²) + sin⁴(4x1 − 3x2) + (1/2)(2x1 + x2 − 10)²
Styblinski-Tang: f(x) = (1/2) Σ_{i=1}^{d} (x_i⁴ − 16x_i² + 5x_i)
Gramacy & Lee (2008): f(x) = x1 exp(−x1² − x2²)
Martin & Gaddy: f(x) = (x1 − x2)² + ((x1 + x2 − 10)/3)²
Easton and Fenton: f(x) = (1/10) (12 + x1² + (1 + x2²)/x1² + (x1² x2² + 100)/(x1 x2)⁴)
Hartmann 3-D: f(x) = −Σ_{i=1}^{4} α_i exp(−Σ_{j=1}^{3} A_{ij} (x_j − P_{ij})²), where α = (1.0, 1.2, 3.0, 3.2)^T, A has rows (3.0, 10, 30), (0.1, 10, 35), (3.0, 10, 30), (0.1, 10, 35), and P = 10⁻⁴ × rows (3689, 1170, 2673), (4699, 4387, 7470), (1091, 8732, 5547), (381, 5743, 8828)
Powell: f(x) = Σ_{i=1}^{d/4} [(x_{4i−3} + 10x_{4i−2})² + 5 (x_{4i−1} − x_{4i})² + (x_{4i−2} − 2x_{4i−1})⁴ + 10 (x_{4i−3} − x_{4i})⁴]
Wood: f(x1, x2, x3, x4) = 100 (x1² − x2)² + (x1 − 1)² + (x3 − 1)² + 90 (x3² − x4)² + 10.1 ((x2 − 1)² + (x4 − 1)²) + 19.8 (x2 − 1)(x4 − 1)
De Jong max: f(x) = 3905.93 − 100 (x1² − x2)² − (1 − x1)²
Figure 13: Convergence of the HCA on the Hypersphere function with six variables (global-best solution; convergence at iteration 919 of 1000).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7-44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150-194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814-3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25-39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841-856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145-150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405-436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849-857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93-108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980-1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130-1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm - a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd I*PROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method - a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132-1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226-3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224-229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151-166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730-741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57-85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327-338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458-464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151-175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370-382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Figure 7: Relationship between the amount of carrying soil and its quality.
and where coefficient β is determined based on the problem. The difference (ΔD) is calculated as

ΔD = MaxValue − MinValue (16)

such that

MaxValue = max[SolutionsQuality(WDs)],
MinValue = min[SolutionsQuality(WDs)]. (17)
According to (16), the increase in temperature is affected by the difference between the best solution (MinValue) and the worst solution (MaxValue). A smaller difference means that there was not much improvement in the last few iterations, and as a result the temperature will increase more. This means that there is no need to evaporate the water drops as long as they can find different solutions in each iteration. The flow stage repeats a number of times until the temperature reaches a certain value. When the temperature is sufficient, the evaporation stage begins.
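To make the relationship concrete, the following is a minimal sketch of a ΔD-driven temperature update. The exact update rule (Eqs. (14)-(15)) is given earlier in the paper and is not reproduced here, so the simple inverse relation below, where a smaller spread of solution qualities raises the temperature faster, is only an assumption consistent with the description above; the `beta` value is likewise hypothetical.

```python
# Hedged sketch of a temperature update driven by Eqs. (16)-(17).
# Assumption: temperature rises in inverse proportion to the quality
# spread, so stagnating populations trigger evaporation sooner.

def update_temperature(temperature, qualities, beta=10.0):
    max_value = max(qualities)   # worst solution quality (Eq. (17))
    min_value = min(qualities)   # best solution quality (Eq. (17))
    delta_d = max_value - min_value  # Eq. (16)
    if delta_d == 0:
        # no diversity at all: apply the full (assumed) increment
        return temperature + beta
    # smaller spread -> bigger temperature increase
    return temperature + beta / delta_d

print(update_temperature(50.0, [3.2, 3.5, 9.8]))
```

Once the temperature crosses the evaporation threshold, the flow stage ends and the evaporation stage described next takes over.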
3.4. Evaporation Stage. In this stage, the number of water drops that will evaporate (the evaporation rate) when the temperature reaches a specific value is first determined. The evaporation rate is chosen randomly between one and the total number of water drops:

ER = Random(1, N), (18)
where ER is the evaporation rate and N is the number of water drops. According to the evaporation rate, a specific number of water drops is selected to evaporate. The selection is done using the roulette-wheel selection method, taking into consideration the solution quality of each water drop. Only the selected water drops evaporate and go to the next stage.
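A minimal sketch of this selection step follows. The paper only states that solution quality is taken into consideration, so weighting the roulette wheel directly by quality (and sampling without replacement) is an assumption.

```python
# Sketch of the evaporation-stage selection: a random evaporation rate
# ER in [1, N] (Eq. (18)), then roulette-wheel selection of ER distinct
# drops. Quality-proportional weights are an assumption.
import random

def roulette_select(qualities, er, rng=random):
    """Pick `er` distinct drop indices, probability proportional to quality."""
    pool = list(range(len(qualities)))
    chosen = []
    for _ in range(er):
        weights = [qualities[i] for i in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        chosen.append(pick)
        pool.remove(pick)  # without replacement: a drop evaporates once
    return chosen

rng = random.Random(42)
qualities = [0.9, 0.1, 0.5, 0.3, 0.8, 0.2, 0.6, 0.4]
er = rng.randint(1, len(qualities))  # evaporation rate, Eq. (18)
print(sorted(roulette_select(qualities, er, rng)))
```

Only the indices returned here would proceed to the condensation stage; the rest of the population continues unchanged.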
3.5. Condensation Stage. As the temperature decreases, water drops draw closer to each other and then collide and combine, or bounce off. These operations are useful as they allow the water drops to be in direct physical contact with each other. This stage represents the collective and cooperative behavior among all the water drops. A water drop can generate a solution without direct communication (by itself); however, communication between water drops may lead to the generation of better solutions in the next cycle. In HCA, physical contact is used for direct communication between water drops, whereas the amount of soil on the edges can be considered indirect communication between the water drops. Direct communication (information exchange) has been proven to improve solution quality significantly when applied by other algorithms [37].
The condensation stage is considered to be a problem-dependent stage that can be redesigned to fit the problem specification. For example, one way of implementing this stage to fit the Travelling Salesman Problem (TSP) is to use local improvement methods. Using these methods enhances the solution quality and generates better results in the next cycle (i.e., minimizes the total cost of the solution), with the hypothesis that it will reduce the number of iterations needed and help reach the optimal/near-optimal solution faster. Equation (19) shows the evaporated water (EW) selected from the overall population, where n represents the evaporation rate (the number of evaporated water drops):

EW = {WD_1, WD_2, …, WD_n}. (19)
Initially, all the possible combinations (as pairs) are selected from the evaporated water drops to share their information with each other via collision. When two water drops collide, there is a chance either of the two water drops merging (coalescence) or of them bouncing off each other. In nature, determining which operation will take place depends on factors such as the temperature and the velocity of each water drop. In this research, we considered the similarity between the water drops' solutions to determine which process will occur: when the similarity between two water drops' solutions is greater than or equal to 50%, they merge; otherwise, they collide and bounce off each other. Mathematically, this can be represented as follows:

OP(WD_1, WD_2) = Bounce(WD_1, WD_2) if Similarity < 50%,
OP(WD_1, WD_2) = Merge(WD_1, WD_2) if Similarity ≥ 50%. (20)
We use the Hamming distance [43], which counts the number of differing positions in two series of the same length, to obtain a measure of the similarity between water drops. For example, the Hamming distance between 12468 and 13458 is two. When two water drops collide and merge, one water drop (i.e., the collector) becomes more powerful by eliminating the other one in the process, also acquiring the characteristics of the eliminated water drop (i.e., its velocity):

WD_1^Vel = WD_1^Vel + WD_2^Vel. (21)

On the other hand, when two water drops collide and bounce off, they share information with each other about the goodness of each node and how much a node contributes to their solutions. The bounce-off operation generates information that is used later to refine the quality of the water drops' solutions in the next cycle. The information is available to all water drops and helps them choose, at the flow stage, the node with the best contribution among all the possible nodes. The information consists of the weight of each node. The weight measures a node's occurrence within a solution relative to the water drop's solution quality. The idea behind sharing these details is
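The collision rule of (20)-(21) can be sketched as follows; the dictionary layout used for a water drop is an illustrative assumption, not the paper's data structure.

```python
# Sketch of the collision rule: Hamming similarity between two solution
# vectors decides merge vs. bounce (Eq. (20)); on merge, the collector
# absorbs the other drop's velocity (Eq. (21)).

def hamming_similarity(sol_a, sol_b):
    """Fraction of positions where the two equal-length solutions agree."""
    same = sum(1 for a, b in zip(sol_a, sol_b) if a == b)
    return same / len(sol_a)

def collide(drop_a, drop_b):
    """drop = {'solution': [...], 'velocity': float}; returns the operation."""
    if hamming_similarity(drop_a['solution'], drop_b['solution']) >= 0.5:
        drop_a['velocity'] += drop_b['velocity']  # merge: Eq. (21)
        return 'merge'
    return 'bounce'  # bounce-off: node weights are shared instead

# The paper's example: 12468 vs. 13458 has Hamming distance 2,
# i.e. similarity 3/5 = 0.6, so the drops merge.
wd1 = {'solution': [1, 2, 4, 6, 8], 'velocity': 3.0}
wd2 = {'solution': [1, 3, 4, 5, 8], 'velocity': 2.0}
print(collide(wd1, wd2), wd1['velocity'])  # -> merge 5.0
```
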
Journal of Optimization 11
to emphasize the best nodes found up to that point. With this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

Weight(node) = Node_occurrence / WD_sol. (22)
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. The lowering and raising of the temperature helps prevent the water drops from sticking with the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:

Soil(i, j) = 0.9 ∗ Soil(i, j), ∀(i, j) ∈ BestWD. (23)

The idea behind this is to favor these edges over the other edges in the next cycle.
3.7. Summarization of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways:

(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, to reduce the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.

(2) The flow of the water drops is controlled by the paths' depth and soil amount heuristics (indirect communication). The path depth factor affects the movement of water drops and helps avoid all water drops choosing the same paths.

(3) Water drops are able to gain or lose a portion of their velocity. This idea supports the competition between water drops and prevents one drop from sweeping away the rest of the drops, because a high-velocity water
HCA procedure
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met)
    Cycle = Cycle + 1
    {Flow stage}
    Repeat
        (a) Iteration = Iteration + 1
        (b) For each water drop
            (1) Choose next node (Eqs. (5)-(8))
            (2) Update velocity (Eq. (9))
            (3) Update soil and depth (Eqs. (10)-(11))
            (4) Update carrying soil (Eq. (13))
        (c) End for
        (d) Calculate the solution fitness
        (e) Update local optimal solution
        (f) Update temperature (Eqs. (14)-(17))
    Until (Temperature = Evaporation temperature)
    Evaporation (WDs)
    Condensation (WDs)
    Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.
(4) Soil can be deposited and removed, which helps avoid premature convergence and stimulates the spread of drops in different directions (more exploration).

(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique, which helps perform exploitation of the good solutions. Further, this stage can be used to improve the quality of the obtained solutions and so reduce the number of iterations.

(6) The evaporation process is a selection technique that helps the algorithm escape from local optima solutions. Evaporation happens when there are no improvements in the quality of the obtained solutions.

(7) The precipitation stage is used to reinitialize variables and redistribute WDs in the search space to generate new solutions.

Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify the evaporation rate (Eq. (18))
    Select WDs to evaporate
End

Condensation procedure (WDs)
    Information exchange (WDs) (Eqs. (20)-(22))
    Identify the collector (WD)
    Update global solution
    Update the temperature
End

Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (edges belonging to best WD) (Eq. (23))
    Generate a new WDs population
    Distribute the WDs randomly
End

Pseudocode 2: The evaporation, condensation, and precipitation pseudocode.
39 Similarities and Differences in Comparison to OtherWater-Inspired Algorithms In this section we clarify themajor similarities and differences between the HCA andother algorithms such as WCA and IWD These algorithmsshare the same source of inspirationmdashwater movement innature However they are inspired by different aspects of thewater processes and their corresponding events In WCAentities are represented by a set of streams These streamskeep moving from one point to another which simulates theflow process of the water cycle When a better point is foundthe position of the stream is swapped with that of a river orthe sea according to their fitness The swap process simulatesthe effects of evaporation and rainfall rather than modellingthese processes Moreover these effects only occur when thepositions of the streamsrivers are very close to that of the seaIn contrast theHCAhas a population of artificial water dropsthat keepmoving fromone point to another which representsthe flow stage While moving water drops can carrydropsoil that change the topography of the ground by makingsome paths deeper or fading others This represents theerosion and deposition processes In WCA no considerationis made for soil removal from the paths which is considereda critical operation in the formation of streams and riversIn HCA evaporation occurs after certain iterations whenthe temperature reaches a specific value The temperature isupdated according to the performance of the water dropsIn WCA there is no temperature and the evaporation rateis based on a ratio based on quality of solution Anothermajor difference is that there is no consideration for thecondensation stage inWCA which is one of the crucial stagesin the water cycle By contrast in HCA this stage is utilizedto implement the information sharing concept between thewater drops Finally the parameters operations explorationtechniques the formalization of each stage and solutionconstruction differ in both algorithms
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in comparison to the depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of the water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables both soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA, the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, in HCA, three critical stages (evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup
In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with N variables and precision P for each variable, the graph is composed of (N x P) layers. In each layer, there are ten nodes labelled from zero to nine. Therefore, there are (10 x N x P) nodes in the graph, as shown in Figure 8.
This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done easily using
Xvalue = sum_{L=1}^{m} n x 10^(-L),   (24)
Journal of Optimization 13
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations
Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node selected in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
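As an illustration of (24), the digit-per-layer decoding can be sketched as follows (a minimal sketch; the function name and structure are ours, not part of the HCA specification):

```python
def decode_path(digits):
    """Decode a water drop's visited nodes (one digit per layer)
    into a value in [0, 1), following (24): sum of n * 10^(-L)."""
    return sum(n * 10.0 ** -(layer + 1) for layer, n in enumerate(digits))

# The worked example from the text: nodes 2, 7, 5, 6, 2, 4 -> 0.275624
value = decode_path([2, 7, 5, 6, 2, 4])
```

With six layers per variable, this reproduces the six-digit precision assumed later in the experiments.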
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions, all the variables may have the same domain range, whereas in others the variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

Vvalue = (UB - LB) x Value + LB,   (25)
where Vvalue is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value will be 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be identified. This precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
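Equation (25) is a plain linear rescaling; a sketch (the naming is ours):

```python
def scale_to_domain(value, lb, ub):
    """Map a raw value in [0, 1) onto the variable's domain [lb, ub], as in (25)."""
    return (ub - lb) * value + lb

# The example from the text: 0.658752 on [-10, 10] -> 3.17504
real_value = scale_to_domain(0.658752, -10, 10)
```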
4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables.
continuous-variable mountain (graph). These drops flow down through the mountain's layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | alpha = 2
Soil updating | PN = 0.99
Initial temperature | 50, beta = 10
Maximum temperature | 100
water drop's solution. Each water drop generates a solution for all the variables of the function; therefore, a water drop's solution is composed of a set of visited nodes determined by the number of variables and the precision of each variable.
Solutions =
  [ WD1: s1, s2, ..., s(N x P)
    WD2: s1, s2, ..., s(N x P)
    ...
    WDi: s1, s2, ..., s(N x P)
    ...
    WDk: s1, s2, ..., s(N x P) ]   (26)
An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored by reducing the soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
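For illustration, the construction of the initial population of solutions can be sketched as below. This is a deliberately simplified sketch: a uniform random choice of node stands in for the soil- and depth-based transition probabilities of the full algorithm, and all names are ours.

```python
import random

def generate_solution(n_vars, precision, rng):
    """One water drop descends the graph, choosing one node (a digit 0-9)
    per layer; there are n_vars * precision layers in total."""
    return [rng.randrange(10) for _ in range(n_vars * precision)]

# A population of 10 water drops, 2 variables, 6 digits of precision,
# giving the k-by-(N x P) solutions matrix of (26).
rng = random.Random(42)
solutions = [generate_solution(2, 6, rng) for _ in range(10)]
```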
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

Vnew = 1 - (beta x Vold),   (27)
where beta is a uniform random number in the range [0, 1], Vnew is the new value after the mutation, and Vold is the original value. This mutation helps to flip the value of a variable: if the value is large, it becomes small, and vice versa.
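A sketch of (27), with beta passed in explicitly (in the algorithm, beta is drawn uniformly from [0, 1] for each mutation; the naming is ours):

```python
def mutate(v_old, beta):
    """Flip-style mutation of (27): with beta near 1, a large value
    becomes small and a small value becomes large."""
    return 1 - beta * v_old
```

For example, with beta = 1, the value 0.9 maps to 0.1 and 0.2 maps to 0.8, illustrating the flipping behaviour described above.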
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation; hence, taking the depth to equal the distance divided by the soil (see (8)) would not affect the outcome as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and types; in this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum of 500 iterations for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, together with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
Success rate = (Number of successful runs / Total number of runs) x 100.   (28)
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.
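The success-rate computation of (28), together with the 0.001 acceptance tolerance used above, can be sketched as follows (the naming is ours):

```python
def success_rate(run_results, optimum, tol=0.001):
    """Percentage of runs whose final objective value lies within
    `tol` of the known optimum, as in (28)."""
    hits = sum(1 for r in run_results if abs(r - optimum) < tol)
    return 100.0 * hits / len(run_results)

# Hypothetical run results for a function whose optimum is 0:
rate = success_rate([0.0, 0.0004, 0.2, 0.0007], 0.0)  # 75.0
```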
The results of HCA and MHCA were compared using the Wilcoxon Signed-Rank test. The P values obtained were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, and best values and the average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics.

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima
(3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima
(5) | Griewank | -600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima
(7) | Levy | -10 | 10 | 2 | 0 | Many local minima
(8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima
(14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [-1.5, 4] | [-3, 4] | 2 | -1.9133 | Plate-shaped
(21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [-3, 3] | [-2, 2] | 2 | -1.0316 | Valley-shaped
(24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops
(29) | Beale | -4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [-5, 10] | [0, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other
(34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock | -1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other
(40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | -4 | 5 | 4 | 0 | Other
(42) | Wood | -5 | 5 | 4 | 0 | Other
(43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). The last column under each algorithm gives the average number of iterations; (b) marks the boldfaced best values.

Number | HCA Worst | HCA Average | HCA Best | HCA Avg. iters | MHCA Worst | MHCA Average | MHCA Best | MHCA Avg. iters
(1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35
(2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1
(3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4
(4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75
(7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8
(8) | 1.63E-05 | 4.70E-06 | 1.35E-31 (b) | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25
(12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | 2.02E-04 (b) | 323.75
(13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E-05 | 6.33E-06 | 0.00E+00 (b) | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8
(21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | 4.28E-07 (b) | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4
(24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | 4.10E-08 (b) | 380.55
(25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55
(26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35
(27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6
(28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1
(29) | 9.25E-05 | 5.25E-05 | 3.41E-06 (b) | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75
(30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341
(34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E-06 | 3.67E-06 | 6.63E-08 (b) | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6
(38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45
(39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5
(40) | 1.94E-01 | 7.02E-02 | 1.64E-03 (b) | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806
(41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | 1.96E-03 (b) | 840.85
(42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | 1.00E-08 (b) | 694.4
(43) | 1.05E+00 | 3.88E-01 | 1.47E-05 (b) | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean Avg. iters | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables (Hypersphere function).

 | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution (a mean of 361.30 iterations for MHCA versus 378.45 for HCA).
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed at a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, with the WCA results taken from [29]. The obtained results for both algorithms are displayed in Table 6, where the final column for each algorithm gives the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon Signed-Rank test, indicating no significant difference in performance between WCA and HCA.
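For readers who want to reproduce such comparisons, the W statistic of the paired Wilcoxon signed-rank test can be sketched in pure Python as follows. This is a minimal sketch with average ranks for ties; in practice a statistics package such as SciPy's scipy.stats.wilcoxon would be used, and the sample data below are made up, not the paper's.

```python
def wilcoxon_w(a, b):
    """W statistic of the paired Wilcoxon signed-rank test: rank the
    absolute differences (zeros dropped, ties get average ranks) and
    return the smaller of the positive- and negative-rank sums."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Made-up paired samples for demonstration:
w = wilcoxon_w([1, 2, 3, 4, 5], [2, 1, 5, 3, 3])  # 6.5
```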
In addition, the average number of iterations was compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44] and Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, with HCA reaching the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate, which was 100% for all the algorithms. The results are listed in Table 9, where boldface numbers indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed by applying the Wilcoxon test and T-test to the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. The final column for each algorithm gives the average number of iterations; (b) marks the boldfaced best values.

Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA Avg. iters | HCA Worst | HCA Avg | HCA Best | HCA Avg. iters
De Jong | 2 | +/-2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | +/-2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | 3.00E+00 (b) | 345.8
Branin | 2 | +/-2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | 0.00E+00 (b) | 262.1
Rosenbrock | 2 | +/-1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | 6.63E-08 (b) | 370.9
Rosenbrock | 2 | +/-10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | 0.00E+00 (b) | 350.6
Rosenbrock | 4 | +/-1.2 | 0 | 7.98E-04 | 2.12E-04 | 3.23E-07 (b) | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5
Hypersphere | 6 | +/-5.12 | 0 | 9.22E-03 | 6.01E-03 | 1.34E-07 (b) | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879
Shaffer | 2 | +/-100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 894.2 | 0.00E+00 | 0.00E+00 | 0.00E+00 (b) | 284.1
Goldstein & Price I | 2 | +/-5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | +/-5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | +/-10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 310.5 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | +/-5 | 0 | 3.81E-05 | 1.58E-06 | 1.30E-10 (b) | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754
Powell quartic | 4 | +/-5 | 0 | 2.87E-09 | 6.09E-10 | 1.12E-11 (b) | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864
Mean Avg. iters | | | | | | | 7000.6 | | | | 463.8
Table 7: Comparison between HCA and the other algorithms in terms of the average number of iterations.

Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) De Jong | 2 | +/-2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) Goldstein & Price I | 2 | +/-2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) Branin | 2 | +/-2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) Rosenbrock | 2 | +/-1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) Rosenbrock | 2 | +/-10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) Rosenbrock | 4 | +/-1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) Hypersphere | 6 | +/-5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) Shaffer | 2 | - | - | - | - | - | - | 894.2 | 1110 | 284.1
Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison of P/W-values for HCA and the other algorithms (based on Table 7).

Algorithm versus HCA | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of W) | Significant at P <= 0.05?
CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations).

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison of P/W-values for CGA, ECTS, HCA, and MHCA.

Comparison | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of W) | Significant at P <= 0.05?
CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function over the number of iterations. The red line, "Local", represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to -1 at (X1 = pi, X2 = pi). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function.
As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables, which the HCA minimized after 919 iterations.
As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function; convergence at iteration 383).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches that place weights on graph vertices can regard it as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (converging at iteration 480; global-best and local-best function values plotted over 500 iterations).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (converging at iteration 475; global-best and local-best function values plotted over 500 iterations).
Studying the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, as well as demonstrate performance that is at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\bigl(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\bigr) - \exp\bigl(\tfrac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\bigr) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$.

Cross-in-Tray: $f(x) = -0.0001\bigl(\bigl|\sin(x_1)\sin(x_2)\exp\bigl(\bigl|100 - \sqrt{x_1^2 + x_2^2}/\pi\bigr|\bigr)\bigr| + 1\bigr)^{0.1}$.

Drop-Wave: $f(x) = -\dfrac{1 + \cos\bigl(12\sqrt{x_1^2 + x_2^2}\bigr)}{0.5(x_1^2 + x_2^2) + 2}$.

Gramacy & Lee (2012): $f(x) = \dfrac{\sin(10\pi x)}{2x} + (x - 1)^4$.

Griewank: $f(x) = \sum_{i=1}^{d} \dfrac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\bigl(\dfrac{x_i}{\sqrt{i}}\bigr) + 1$.

Holder Table: $f(x) = -\bigl|\sin(x_1)\cos(x_2)\exp\bigl(\bigl|1 - \sqrt{x_1^2 + x_2^2}/\pi\bigr|\bigr)\bigr|$.

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2\bigl[1 + 10\sin^2(\pi w_i + 1)\bigr] + (w_d - 1)^2\bigl[1 + \sin^2(2\pi w_d)\bigr]$, where $w_i = 1 + \dfrac{x_i - 1}{4}$, $\forall i = 1, \ldots, d$.

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\bigl[1 + \sin^2(3\pi x_2)\bigr] + (x_2 - 1)^2\bigl[1 + \sin^2(2\pi x_2)\bigr]$.

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d} \bigl[x_i^2 - 10\cos(2\pi x_i)\bigr]$.

Schaffer N. 2: $f(x) = 0.5 + \dfrac{\sin^2(x_1^2 - x_2^2) - 0.5}{\bigl[1 + 0.001(x_1^2 + x_2^2)\bigr]^2}$.

Schaffer N. 4: $f(x) = 0.5 + \dfrac{\cos^2\bigl(\sin\bigl(|x_1^2 - x_2^2|\bigr)\bigr) - 0.5}{\bigl[1 + 0.001(x_1^2 + x_2^2)\bigr]^2}$.

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\bigl(\sqrt{|x_i|}\bigr)$.

Shubert: $f(x) = \Bigl(\sum_{i=1}^{5} i\cos\bigl((i+1)x_1 + i\bigr)\Bigr)\Bigl(\sum_{i=1}^{5} i\cos\bigl((i+1)x_2 + i\bigr)\Bigr)$.

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$.

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$.

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$.

Sum Squares: $f(x) = \sum_{i=1}^{d} i\,x_i^2$.

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$.

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48\,x_1 x_2$.

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$.

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \Bigl(\sum_{i=1}^{d} 0.5\,i\,x_i\Bigr)^2 + \Bigl(\sum_{i=1}^{d} 0.5\,i\,x_i\Bigr)^4$.

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \dfrac{x_1^6}{6} + x_1 x_2 + x_2^2$.
Table 11: Continued.

Six-Hump Camel: $f(x) = \bigl(4 - 2.1x_1^2 + \dfrac{x_1^4}{3}\bigr)x_1^2 + x_1 x_2 + \bigl(-4 + 4x_2^2\bigr)x_2^2$.

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\bigl(2x_i^2 - x_{i-1}\bigr)^2$.

Rosenbrock: $f(x) = \sum_{i=1}^{d-1} \bigl[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\bigr]$.

Shekel's Foxholes (De Jong N. 5): $f(x) = \Bigl(0.002 + \sum_{i=1}^{25} \dfrac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\Bigr)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$.

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\bigl(-(x_1 - \pi)^2 - (x_2 - \pi)^2\bigr)$.

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\bigl(\dfrac{i\,x_i^2}{\pi}\bigr)$, where $m = 10$.

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$.

Branin: $f(x) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$.

Goldstein-Price I: $f(x) = \bigl[1 + (x_1 + x_2 + 1)^2\bigl(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\bigr)\bigr] \times \bigl[30 + (2x_1 - 3x_2)^2\bigl(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\bigr)\bigr]$.

Goldstein-Price II: $f(x) = \exp\bigl(\tfrac{1}{2}(x_1^2 + x_2^2 - 25)^2\bigr) + \sin^4(4x_1 - 3x_2) + \tfrac{1}{2}(2x_1 + x_2 - 10)^2$.

Styblinski-Tang: $f(x) = \tfrac{1}{2}\sum_{i=1}^{d} \bigl(x_i^4 - 16x_i^2 + 5x_i\bigr)$.

Gramacy & Lee (2008): $f(x) = x_1 \exp\bigl(-x_1^2 - x_2^2\bigr)$.

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \Bigl(\dfrac{x_1 + x_2 - 10}{3}\Bigr)^2$.

Easton and Fenton: $f(x) = \Bigl(12 + x_1^2 + \dfrac{1 + x_2^2}{x_1^2} + \dfrac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\Bigr)\Bigl(\dfrac{1}{10}\Bigr)$.

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\Bigl(-\sum_{j=1}^{3} A_{ij}(x_j - P_{ij})^2\Bigr)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, and $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$.

Powell: $f(x) = \sum_{i=1}^{d/4} \bigl[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\bigr]$.

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\bigl((x_2 - 1)^2 + (x_4 - 1)^2\bigr) + 19.8(x_2 - 1)(x_4 - 1)$.

De Jong max: $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$.
Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution plotted over 1000 iterations, converging at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
to emphasize the best nodes found up to that point. Within this exchange, the water drops will favor those nodes in the next cycle. Mathematically, the weight can be calculated as follows:

$\text{Weight}(\text{node}) = \dfrac{\text{Node}_{\text{occurrence}}}{\text{WD}_{\text{sol}}}.$ (22)
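As an illustration, the weighting in Eq. (22) can be computed directly from the water drops' solutions. The helper below is our own sketch (the function name and the list-of-node-sequences layout are illustrative, not from the paper), assuming WD_sol is the number of water-drop solutions:

```python
from collections import Counter

def node_weights(solutions):
    """Weight each node by how often it occurs across the water-drop
    solutions (Eq. (22)): occurrence count divided by the number of
    water drops, so frequently chosen nodes are favored next cycle."""
    counts = Counter(node for sol in solutions for node in set(sol))
    return {node: c / len(solutions) for node, c in counts.items()}

# Three water-drop solutions; node 1 appears in all of them
weights = node_weights([[1, 2, 3], [1, 2, 4], [1, 5, 6]])
```

A node shared by every drop gets weight 1, so the exchange biases the next cycle toward consensus nodes.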
Finally, this stage is used to update the global-best solution found up to that point. In addition, at this stage the temperature is reduced by 50°C. The lowering and raising of the temperature helps to prevent the water drops from sticking with the same solution in every iteration.
3.6. Precipitation Stage. This stage is considered the termination stage of the cycle, as the algorithm has to check whether the termination condition is met. If the condition has been met, the algorithm stops with the last global-best solution. Otherwise, this stage is responsible for reinitializing all the dynamic variables, such as the amount of soil on each edge, the depth of paths, the velocity of each water drop, and the amount of soil it holds. The reinitialization of the parameters helps the algorithm to avoid being trapped in local optima, which may affect the algorithm's performance in the next cycle. Moreover, this stage is considered a reinforcement stage, which is used to place emphasis on the best water drop (the collector). This is achieved by reducing the amount of soil on the edges that belong to the best water drop's solution:
$\text{soil}(i, j) = 0.9 \times \text{soil}(i, j), \quad \forall (i, j) \in \text{BestWD}.$ (23)
The idea behind this is to favor these edges over the other edges in the next cycle.
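The reinforcement in Eq. (23) amounts to one multiplication per edge of the best path. The following sketch is ours (the edge-to-soil dictionary is an illustrative data structure, not the paper's):

```python
def reinforce_best_path(soil, best_path):
    """Precipitation-stage reinforcement (Eq. (23)): reduce the soil on
    every edge along the best water drop's path by 10%, so those edges
    become more attractive in the next cycle."""
    for edge in zip(best_path, best_path[1:]):
        soil[edge] *= 0.9
    return soil

soil = {(1, 2): 100.0, (2, 3): 100.0, (3, 4): 100.0}
reinforce_best_path(soil, [1, 2, 3])  # only edges (1,2) and (2,3) are reduced
```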
3.7. Summarization of HCA Characteristics. The HCA can be distinguished from other swarm algorithms in the following ways.

(1) HCA involves a number of cycles and a number of iterations (multiple iterations per cycle). This gives the advantage of performing certain tasks (e.g., solution improvement methods) every cycle instead of every iteration, reducing the computational effort. In addition, the cyclic nature of the algorithm contributes to self-organization, as the system converges to the solution through feedback from the environment and internal self-evaluation. The temperature variable controls the cycle, and its value is updated according to the quality of the solution obtained in every iteration.

(2) The flow of the water drops is controlled by path-depth and soil-amount heuristics (indirect communication). The path-depth factor affects the movement of water drops and helps to avoid all water drops choosing the same paths.

(3) Water drops are able to gain or lose a portion of their velocity. This idea supports the competition between water drops and prevents one drop from sweeping away the rest, because a high-velocity water
HCA procedure:
(i) Problem input
(ii) Initialization of parameters
(iii) While (termination condition is not met):
    Cycle = Cycle + 1
    /* Flow stage */
    Repeat:
        (a) Iteration = Iteration + 1
        (b) For each water drop:
            (1) Choose next node (Eqs. (5)–(8))
            (2) Update velocity (Eq. (9))
            (3) Update soil and depth (Eqs. (10)-(11))
            (4) Update carrying soil (Eq. (13))
        (c) End for
        (d) Calculate the solution fitness
        (e) Update local optimal solution
        (f) Update temperature (Eqs. (14)–(17))
    Until (Temperature = Evaporation temperature)
    Evaporation (WDs)
    Condensation (WDs)
    Precipitation (WDs)
(iv) End while

Pseudocode 1: HCA pseudocode.
drop can carve more paths (i.e., remove more soil) than a slower one.

(4) Soil can be deposited and removed, which helps to avoid premature convergence and stimulates the spread of drops in different directions (more exploration).

(5) The condensation stage is utilized to perform information sharing (direct communication) among the water drops. Moreover, selecting a collector water drop represents an elitism technique that helps to exploit good solutions. Further, this stage can be used to improve the quality of the obtained solutions and to reduce the number of iterations.

(6) The evaporation process is a selection technique that helps the algorithm to escape from local optima. Evaporation happens when there are no improvements in the quality of the obtained solutions.

(7) The precipitation stage is used to reinitialize variables and redistribute the water drops in the search space to generate new solutions.

Condensation, evaporation, and precipitation ((5), (6), and (7) above) distinguish HCA from other water-based algorithms, including IWD and WCA. These characteristics are important and help to strike a proper balance between exploration and exploitation, which aids convergence towards the global solution. In addition, the stages of the water cycle are complementary to each other and help improve the overall performance of the HCA.
3.8. The HCA Procedure. Pseudocodes 1 and 2 show the HCA pseudocode.
Evaporation procedure (WDs):
    Evaluate the solution quality
    Identify evaporation rate (Eq. (18))
    Select WDs to evaporate
End

Condensation procedure (WDs):
    Information exchange (WDs) (Eqs. (20)–(22))
    Identify the collector (WD)
    Update global solution
    Update the temperature
End

Precipitation procedure (WDs):
    Evaluate the solution quality
    Reinitialize dynamic parameters
    Global soil update (edges belonging to best WDs) (Eq. (23))
    Generate a new WD population
    Distribute the WDs randomly
End

Pseudocode 2: The evaporation, condensation, and precipitation pseudocode.
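To make the overall control flow concrete, the following toy sketch mirrors the cycle structure of Pseudocodes 1 and 2 on a one-dimensional function. It is not the full HCA: the soil/depth-guided node selection is replaced here by simple random perturbations, and each stage body is deliberately minimal, so only the flow-evaporation-condensation-precipitation skeleton is faithful to the pseudocode.

```python
import random

def hca_cycle_sketch(f, lower, upper, drops=10, cycles=5,
                     iterations=50, seed=1):
    """Toy skeleton of the HCA cycle structure; minimizes f on [lower, upper]."""
    rng = random.Random(seed)
    positions = [rng.uniform(lower, upper) for _ in range(drops)]
    global_best = min(positions, key=f)
    step = 0.1 * (upper - lower)
    for _ in range(cycles):
        # Flow stage: each drop takes random downhill steps (a stand-in
        # for the soil/depth-based node selection of the full algorithm)
        for _ in range(iterations):
            for i, x in enumerate(positions):
                candidate = min(max(x + rng.gauss(0, step), lower), upper)
                if f(candidate) < f(x):
                    positions[i] = candidate
            local_best = min(positions, key=f)
            if f(local_best) < f(global_best):
                global_best = local_best
        # Evaporation: keep only the better half of the drops
        positions.sort(key=f)
        survivors = positions[: max(2, drops // 2)]
        # Condensation: share information by pulling survivors toward
        # the collector (the best surviving drop)
        collector = survivors[0]
        survivors = [x + 0.5 * (collector - x) for x in survivors]
        # Precipitation: reinitialize the rest of the population randomly
        positions = survivors + [rng.uniform(lower, upper)
                                 for _ in range(drops - len(survivors))]
    return global_best

# Toy run: the minimum of (x - 2)^2 on [-10, 10] is at x = 2
best = hca_cycle_sketch(lambda x: (x - 2) ** 2, -10, 10)
```

The point of the sketch is the per-cycle rhythm: many flow iterations, then one pass each of evaporation, condensation, and precipitation before the next cycle begins.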
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of water processes and their corresponding events. In WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea, according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes. Moreover, these effects only occur when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry/drop soil that changes the topography of the ground by making some paths deeper or fading others. This represents the erosion and deposition processes. In WCA, no consideration is made for soil removal from the paths, which is a critical operation in the formation of streams and rivers. In HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value. The temperature is updated according to the performance of the water drops. In WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the quality of the solution. Another major difference is that WCA makes no provision for the condensation stage, which is one of the crucial stages in the water cycle. By contrast, in HCA this stage is utilized to implement the information-sharing concept between the water drops. Finally, the parameters, operations, exploration techniques, the formalization of each stage, and solution construction differ in the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps in creating a balance between the exploitation and exploration of the search process: choosing the edge with less depth favors exploration, while choosing the edge with less soil favors exploitation. Further, in HCA additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of water drops plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA the carrying soil is encoded with the solution quality: more carrying soil indicates a better solution. Finally, in HCA three critical stages (i.e., evaporation, condensation, and precipitation) are considered to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
4. Experimental Setup

In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to COPs, we developed a directed graph representation that exploits a topographical approach for continuous number representation. For a function with $N$ variables and precision $P$ for each variable, the graph is composed of $(N \times P)$ layers. In each layer, there are ten nodes labelled from zero to nine. Therefore, there are $(10 \times N \times P)$ nodes in the graph, as shown in Figure 8.

This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between the nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer $i$ is higher than that of a node in layer $i + 1$. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done easily using
$X_{\text{value}} = \sum_{L=1}^{m} n \times 10^{-L},$ (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvement in the last set of iterations
Evaporation rate | N/A | Applied if the distance between a river and the sea is less than the threshold | Random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where $L$ is the layer number, $m$ is the maximum layer number for each variable (the number of precision digits), and $n$ is the node in that layer. The water drops will generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
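The mapping in Eq. (24) can be written as a one-line sum; `decode` below is our illustrative name for it, not the paper's:

```python
def decode(digits):
    """Map a water drop's chosen node labels (one digit per layer) to a
    value in [0, 1), per Eq. (24): the digit n chosen at layer L
    contributes n * 10**(-L)."""
    return sum(n * 10.0 ** -(layer + 1) for layer, n in enumerate(digits))

# The worked example from the text: nodes 2, 7, 5, 6, 2, 4 give 0.275624
value = decode([2, 7, 5, 6, 2, 4])
```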
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to $n$ variables. The variables' values should be within the function domain, which defines the set of possible values for each variable. In some functions, all the variables may have the same domain range, whereas in others the variables may have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

$V_{\text{value}} = (UB - LB) \times \text{Value} + LB,$ (25)
where $V_{\text{value}}$ is the real value of the variable, $UB$ is the upper bound, and $LB$ is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is $[-10, 10]$, then the real value will be equal to 3.17504. The values of continuous variables are expressed as real numbers. Therefore, the precision ($P$) of the variables' values has to be identified; this precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than would be possible by simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary values.
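Eq. (25) is a plain affine rescaling; the helper name below is ours:

```python
def scale_to_domain(value, lower, upper):
    """Scale a decoded value in [0, 1) onto the variable's domain
    [lower, upper], per Eq. (25): V = (UB - LB) * value + LB."""
    return (upper - lower) * value + lower

# The worked example from the text: 0.658752 on [-10, 10] gives 3.17504
x = scale_to_domain(0.658752, -10, 10)
```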
4.3. Solution Generator. To find a solution to a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables (a start node s connected through successive layers, each containing nodes labelled 0 to 9).
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
14 Journal of Optimization
Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | α = 2
Soil updating | PN = 0.99
Initial temperature | 50, β = 10
Maximum temperature | 100
water drop solution. Each water drop generates a solution for all the variables of the function. Therefore, a water drop solution is composed of a set of visited nodes, whose number depends on the number of variables N and the precision P of each variable:

$$\text{Solutions} = \begin{bmatrix} \mathrm{WD}^{1}: & s_{1} & s_{2} & \cdots & s_{m} \\ \mathrm{WD}^{2}: & s_{1} & s_{2} & \cdots & s_{m} \\ \mathrm{WD}^{i}: & s_{1} & s_{2} & \cdots & s_{m} \\ \vdots \\ \mathrm{WD}^{k}: & s_{1} & s_{2} & \cdots & s_{m} \end{bmatrix}, \quad m = N \times P \tag{26}$$
An initial solution is generated randomly for each water drop based on the function dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored with less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
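To make the mapping from a drop's path to a candidate solution concrete, the following sketch (our illustration, assuming one decimal-digit node per layer and P layers per variable) decodes a path into real variable values via (25):

```python
def decode_path(path, num_vars, precision, bounds):
    """Turn a water drop's chosen digits (one node per layer) into real values.

    Assumes P = `precision` layers per variable, each contributing one decimal
    digit of a raw value in [0, 1), which Eq. (25) then scales to the domain."""
    values = []
    for v in range(num_vars):
        digits = path[v * precision:(v + 1) * precision]
        raw = sum(d * 10.0 ** -(i + 1) for i, d in enumerate(digits))
        lb, ub = bounds[v]
        values.append((ub - lb) * raw + lb)
    return values

# One variable with domain [-10, 10] and P = 6: digits 6,5,8,7,5,2 -> 0.658752
print(decode_path([6, 5, 8, 7, 5, 2], 1, 6, [(-10, 10)]))  # approx. [3.17504]
```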
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collisions at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, the mutation can be implemented as

$$V_{\text{new}} = 1 - (\beta \times V_{\text{old}}) \tag{27}$$

where $\beta$ is a uniform random number in the range [0, 1], $V_{\text{new}}$ is the new value after the mutation, and $V_{\text{old}}$ is the original value. This mutation tends to flip the magnitude of the variable: if the value is large, it becomes small, and vice versa.
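A minimal sketch of the mutation in (27) (our Python illustration; we assume it operates on the normalized value obtained before the scaling of (25), so that flipping around 1 keeps the result in range):

```python
import random

def mutate(v_old):
    """Eq. (27): v_new = 1 - beta * v_old, with beta ~ U[0, 1].

    Large normalized values tend to become small and vice versa."""
    beta = random.uniform(0.0, 1.0)
    return 1.0 - beta * v_old

print(mutate(0.9))  # somewhere in [0.1, 1.0]
```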
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters control the flow of the algorithm and help to generate solutions. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.

The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is constant and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node has been selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, setting the depth equal to the distance divided by the soil (see (8)) would not affect the outcome as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties and types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.

For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times, with a maximum of 500 iterations for all functions except those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) on all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed, along with the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and for some functions it was able to find the global minimum within a small number of iterations. In order to evaluate the algorithm's performance, we calculated the success rate for each function using

$$\text{Success rate} = \left(\frac{\text{Number of successful runs}}{\text{Total number of runs}}\right) \times 100. \tag{28}$$

A solution is accepted if the difference between it and the optimum value is less than 0.001. The algorithm achieved 100% for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA gave the same "best" performance.
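Equation (28) and the 0.001 acceptance threshold translate directly into code (an illustrative sketch; the names and the sample values are ours):

```python
def success_rate(results, optimum, tol=1e-3):
    """Eq. (28): percentage of runs whose best value is within tol of the optimum."""
    successes = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * successes / len(results)

# Hypothetical best-of-run values over five executions:
print(success_rate([0.0, 0.0004, 0.002, 0.0009, 0.0], optimum=0.0))  # 80.0
```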
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, and best values and the average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics.

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | −32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | −10 | 10 | 2 | −2.06261 | Many local minima
(3) | Drop-Wave | −5.12 | 5.12 | 2 | −1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | −0.86901 | Many local minima
(5) | Griewank | −600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | −10 | 10 | 2 | −19.2085 | Many local minima
(7) | Levy | −10 | 10 | 2 | 0 | Many local minima
(8) | Levy N. 13 | −10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | −5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N. 2 | −100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N. 4 | −100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | −500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | −5.12 | 5.12 | 2 | −186.7309 | Many local minima
(14) | Bohachevsky | −100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | −65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | −5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | −10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | −10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | −10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [−1.5, −3] | [4, 4] | 2 | −1.9133 | Plate-shaped
(21) | Zakharov | −5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | −5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [−3, −2] | [3, 2] | 2 | −1.0316 | Valley-shaped
(24) | Dixon-Price | −10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | −5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N. 5) | −65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | −100 | 100 | 2 | −1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | −1.8013 | Steep ridges/drops
(29) | Beale | −4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [−5, 0] | [10, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | −2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | −5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | −5 | 5 | 2 | −78.33198 | Other
(34) | Gramacy & Lee (2008) | −2 | 6 | 2 | −0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock | −1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | −5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | −3.86278 | Other
(40) | Rosenbrock | −1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | −4 | 5 | 4 | 0 | Other
(42) | Wood | −5 | 5 | 4 | 0 | Other
(43) | Sphere | −5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | −2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). For each algorithm the columns give the worst, average, and best values and the average number of iterations (*) needed to reach the global solution; values in ** ** were set in boldface in the original, marking the better of the HCA/MHCA "best" values.

Number | HCA Worst | HCA Average | HCA Best | HCA * | MHCA Worst | MHCA Average | MHCA Best | MHCA *
(1) | 8.88E−16 | 8.88E−16 | 8.88E−16 | 220.2 | 8.88E−16 | 8.88E−16 | 8.88E−16 | 279.35
(2) | −2.06E+00 | −2.06E+00 | −2.06E+00 | 362 | −2.06E+00 | −2.06E+00 | −2.06E+00 | 196.1
(3) | −1.00E+00 | −1.00E+00 | −1.00E+00 | 330.8 | −1.00E+00 | −1.00E+00 | −1.00E+00 | 220.4
(4) | −8.69E−01 | −8.69E−01 | −8.69E−01 | 297.65 | −8.69E−01 | −8.69E−01 | −8.69E−01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | −1.92E+01 | −1.92E+01 | −1.92E+01 | 321.7 | −1.92E+01 | −1.92E+01 | −1.92E+01 | 302.75
(7) | 4.23E−07 | 1.31E−07 | 1.50E−32 | 399.5 | 3.60E−07 | 8.65E−08 | 1.50E−32 | 394.8
(8) | 1.63E−05 | 4.70E−06 | **1.35E−31** | 399.45 | 9.97E−06 | 3.06E−06 | 1.60E−09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E−01 | 2.93E−01 | 2.93E−01 | 358.6 | 2.93E−01 | 2.93E−01 | 2.93E−01 | 336.25
(12) | 6.82E−04 | 4.19E−04 | 2.08E−04 | 337.6 | 1.28E−03 | 6.42E−04 | **2.02E−04** | 323.75
(13) | −1.87E+02 | −1.87E+02 | −1.87E+02 | 368 | −1.87E+02 | −1.87E+02 | −1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E−05 | 6.33E−06 | **0.00E+00** | 433.5 | 1.97E−05 | 5.78E−06 | 2.96E−08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | −1.91E+00 | −1.91E+00 | −1.91E+00 | 282.65 | −1.91E+00 | −1.91E+00 | −1.91E+00 | 225.8
(21) | 1.12E−04 | 5.07E−05 | 4.73E−06 | 328.15 | 3.94E−04 | 1.30E−04 | **4.28E−07** | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.15 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 332.4
(24) | 3.08E−04 | 9.69E−05 | 3.89E−06 | 394.25 | 5.46E−04 | 2.67E−04 | **4.10E−08** | 380.55
(25) | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.55 | 2.25E−10 | 3.21E−11 | 0.00E+00 | 282.55
(26) | 9.98E−01 | 9.98E−01 | 9.98E−01 | 354.6 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 370.35
(27) | −9.97E−01 | −9.98E−01 | −1.00E+00 | 354.15 | −9.97E−01 | −9.99E−01 | −1.00E+00 | 344.6
(28) | −1.80E+00 | −1.80E+00 | −1.80E+00 | 428 | −1.80E+00 | −1.80E+00 | −1.80E+00 | 398.1
(29) | 9.25E−05 | 5.25E−05 | **3.41E−06** | 379.9 | 1.33E−04 | 7.37E−05 | 2.56E−05 | 393.75
(30) | 3.98E−01 | 3.98E−01 | 3.98E−01 | 345.8 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | −7.83E+01 | −7.83E+01 | −7.83E+01 | 351 | −7.83E+01 | −7.83E+01 | −7.83E+01 | 341
(34) | −4.29E−01 | −4.29E−01 | −4.29E−01 | 262.05 | −4.29E−01 | −4.29E−01 | −4.29E−01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E−06 | 3.67E−06 | **6.63E−08** | 382.2 | 4.66E−06 | 2.60E−06 | 2.79E−07 | 351.6
(38) | 4.43E−07 | 8.27E−08 | 0.00E+00 | 371.3 | 2.13E−08 | 4.50E−09 | 0.00E+00 | 330.45
(39) | −3.86E+00 | −3.86E+00 | −3.86E+00 | 392.05 | −3.86E+00 | −3.86E+00 | −3.86E+00 | 327.5
(40) | 1.94E−01 | 7.02E−02 | **1.64E−03** | 816.5 | 8.75E−02 | 3.04E−02 | 2.79E−03 | 806
(41) | 2.42E−01 | 9.13E−02 | 3.91E−03 | 863.95 | 1.12E−01 | 4.26E−02 | **1.96E−03** | 840.85
(42) | 1.12E+00 | 2.91E−01 | 7.62E−08 | 753.95 | 2.67E−01 | 6.10E−02 | **1.00E−08** | 694.4
(43) | 1.05E+00 | 3.88E−01 | **1.47E−05** | 879 | 8.96E−01 | 3.10E−01 | 6.51E−03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables.

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, with the WCA results taken from [29]. The obtained results for both algorithms are displayed in Table 6, which also lists the average number of iterations for each algorithm.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
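The signed-rank statistic behind these comparisons can be sketched as follows (a simplified pure-Python illustration, not the authors' code; zero differences are dropped and tied |d| receive average ranks, as in the standard test):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank W statistic for paired samples x, y."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):                            # average ranks over ties
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)                       # W = smaller signed-rank sum
```

In practice a statistics package (e.g., MATLAB's signrank) is used to obtain the corresponding P value from W.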
In addition, the average numbers of iterations were compared, and the P value (0.03078) indicates a significant difference, confirming that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values alongside the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, and BA versus HCA, with HCA reaching the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations (* = average number of iterations; values in ** ** were set in boldface in the original).

Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA * | HCA Worst | HCA Avg | HCA Best | HCA *
De Jong | 2 | ±2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.000002 | 980 | 3.00E+00 | 3.00E+00 | **3.00E+00** | 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E−04 | 4.16E−04 | 5.01E−06 | 57 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E−02 | 1.34E−03 | 2.81E−05 | 174 | 9.22E−06 | 3.67E−06 | **6.63E−08** | 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E−04 | 4.32E−04 | 1.12E−06 | 623 | 2.03E−09 | 2.14E−10 | **0.00E+00** | 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E−04 | 2.12E−04 | **3.23E−07** | 266 | 1.94E−01 | 7.02E−02 | 1.64E−03 | 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E−03 | 6.01E−03 | **1.34E−07** | 101 | 1.05E+00 | 3.88E−01 | 1.47E−05 | 879
Shaffer | 2 | ±100 | 0 | 9.71E−03 | 1.16E−03 | 2.61E−05 | 8942 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 | 1.0118 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | ±10 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | 3105 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | ±5 | 0 | 3.81E−05 | 1.58E−06 | **1.30E−10** | 15650 | 1.12E+00 | 2.91E−01 | 7.62E−08 | 754
Powell quartic | 4 | ±5 | 0 | 2.87E−09 | 6.09E−10 | **1.12E−11** | 23500 | 2.42E−01 | 9.13E−02 | 3.91E−03 | 864
Mean (*) | | | | | | | 7000.6 | | | | 463.8
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

Number | Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) | De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) | Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) | Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) | Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) | Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) | Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) | Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Comparison | T-test P value | Wilcoxon Z-value | W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.

Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.

Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.

Figure 13 illustrates the convergence speed of HCA on the Hypersphere function with six variables; the HCA minimized this function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work

In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function, convergence at iteration 383; function value against number of iterations, showing global best and local best).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, and approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms; however, there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something a little different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix

Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$

Cross-In-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \dots, d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i\,x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48\,x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i\,x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$

Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}(x_1^2 + x_2^2 - 25)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$

Gramacy & Lee (2008): $f(x) = x_1 \exp\left(-x_1^2 - x_2^2\right)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}(x_j - p_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, and $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
24 Journal of Optimization
Figure 13: Convergence of HCA on the Hypersphere function with six variables (function value versus iteration number over 1000 iterations; the global-best solution is reached at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snášel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, Mass, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrøm and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the 2nd International Virtual Conference on Intelligent Production Machines and Systems (IPROMS 2006), Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the 12th International Conference on Intelligent Computing Theories and Application (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Evaporation procedure (WDs)
    Evaluate the solution quality
    Identify the evaporation rate (Eq. (18))
    Select WDs to evaporate
End
Condensation procedure (WDs)
    Information exchange among WDs (Eqs. (20)-(22))
    Identify the collector (WD)
    Update the global solution
    Update the temperature
End
Precipitation procedure (WDs)
    Evaluate the solution quality
    Reinitialize the dynamic parameters
    Global soil update (edges belonging to the best WDs) (Eq. (23))
    Generate a new WD population
    Distribute the WDs randomly
End

Pseudocode 2: The evaporation, condensation, and precipitation procedures.
3.9. Similarities and Differences in Comparison to Other Water-Inspired Algorithms. In this section, we clarify the major similarities and differences between the HCA and other algorithms such as the WCA and IWD. These algorithms share the same source of inspiration: water movement in nature. However, they are inspired by different aspects of the water processes and their corresponding events. In the WCA, entities are represented by a set of streams. These streams keep moving from one point to another, which simulates the flow process of the water cycle. When a better point is found, the position of the stream is swapped with that of a river or the sea according to their fitness. The swap process simulates the effects of evaporation and rainfall rather than modelling these processes, and these effects occur only when the positions of the streams/rivers are very close to that of the sea. In contrast, the HCA has a population of artificial water drops that keep moving from one point to another, which represents the flow stage. While moving, water drops can carry or drop soil, changing the topography of the ground by making some paths deeper and fading others; this represents the erosion and deposition processes. In the WCA, no consideration is made for soil removal from the paths, which is a critical operation in the formation of streams and rivers. In the HCA, evaporation occurs after a certain number of iterations, when the temperature reaches a specific value; the temperature is updated according to the performance of the water drops. In the WCA, there is no temperature, and the evaporation rate is based on a ratio derived from the solution quality. Another major difference is that the WCA makes no provision for the condensation stage, one of the crucial stages of the water cycle. By contrast, the HCA uses this stage to implement information sharing among the water drops. Finally, the parameters, operations, exploration techniques, formalization of each stage, and solution construction differ between the two algorithms.
Despite the fact that the IWD algorithm is used, with major modifications, as a subpart of the HCA, there are major differences between IWD and HCA. In IWD, the probability of selecting the next node is based on the amount of soil. In contrast, in HCA the probability of selecting the next node is an association between the soil and the path depth, which enables the construction of a variety of solutions. This association is useful when the same amount of soil exists on different paths; the paths' depths are then used to differentiate between these edges. Moreover, this association helps to balance exploitation and exploration in the search process: choosing the edge with less depth favors exploration, whereas choosing the edge with less soil favors exploitation. Further, in HCA, additional factors, such as the amount of soil in comparison to depth, are considered in velocity updating, which allows the water drops to gain or lose velocity. This gives other water drops a chance to compete, as the velocity of a water drop plays an important role in guiding the algorithm to discover new paths. Moreover, the soil update equation enables both soil removal and deposition. The underlying idea is to help the algorithm improve its exploration, avoid premature convergence, and avoid being trapped in local optima. In IWD, only indirect communication is considered, while both direct and indirect communication are considered in HCA. Furthermore, in HCA the carried soil is encoded with the solution quality: more carried soil indicates a better solution. Finally, HCA adds three critical stages (evaporation, condensation, and precipitation) to improve the performance of the algorithm. Table 1 summarizes the major differences between IWD, WCA, and HCA.
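The soil/depth association used when choosing the next node can be pictured as a roulette-wheel selection over the outgoing edges. The weighting below (inverse soil scaled by depth) and the function name are illustrative assumptions of ours, not the paper's exact probability formula:

```python
import random

def choose_next_node(edges, eps=1e-9):
    """Roulette-wheel choice over outgoing edges.

    `edges` maps candidate node -> (soil, depth). The attractiveness
    (inverse soil scaled by depth) only illustrates the soil/depth
    association described in the text; it is not the exact formula.
    """
    weights = {n: (1.0 / (soil + eps)) * depth
               for n, (soil, depth) in edges.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for node, w in weights.items():
        acc += w
        if acc >= r:
            return node
    return node  # fallback for floating-point edge cases
```

Under this weighting, an edge with little soil or great depth is proportionally more likely to be chosen, which is the balance between exploitation and exploration described above.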
4. Experimental Setup

In this section, we explain how the HCA is applied to continuous problems.
4.1. Problem Representation. Typically, the input of the HCA is represented as a graph in which water drops can travel between nodes using the edges. In order to apply the HCA to continuous optimization problems, we developed a directed-graph representation that exploits a topographical approach for representing continuous numbers. For a function with N variables and precision P for each variable, the graph is composed of (N × P) layers. In each layer there are ten nodes, labelled from zero to nine. Therefore, there are (10 × N × P) nodes in the graph, as shown in Figure 8.

This graph can be seen as a topographical mountain with different layers or contour lines. There is no connection between nodes in the same layer; this prevents a water drop from selecting two nodes from the same layer. A connection exists only from one layer to the next lower layer, in which a node in a layer is connected to all the nodes in the next layer. The value of a node in layer i is higher than that of a node in layer i + 1. This can be interpreted as the selected number from the first layer being multiplied by 0.1, the selected number from the second layer by 0.01, and so on. This can be done using
X_value = \sum_{L=1}^{m} n_L \times 10^{-L}   (24)
Table 1: A summary of the main differences between IWD, WCA, and HCA.

Criteria | IWD | WCA | HCA
Flow stage | Moving water drops | Changing streams' positions | Moving water drops
Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth
Velocity | Always increases | N/A | Increases and decreases
Soil update | Removal | N/A | Removal and deposition
Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality
Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations
Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number
Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector
Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n_L is the node selected in layer L. The water drops thus generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
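The layer-by-layer digit decoding of (24) can be sketched in a few lines of Python (the function name is ours):

```python
def decode_variable(digits):
    """Map the nodes visited across one variable's layers to a real
    value in [0, 1), per X_value = sum_L n_L * 10**(-L) (Eq. (24))."""
    return sum(n * 10 ** -(L + 1) for L, n in enumerate(digits))
```

For the path 2, 7, 5, 6, 2, 4 above this yields 0.275624, matching the worked example.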
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should lie within the function domain, which defines the set of possible values for each variable. In some functions, all the variables have the same domain range, whereas in others the variables have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

V_value = (UB - LB) \times Value + LB   (25)

where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value is 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be identified. This precision specifies the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary.
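Equation (25) is a plain linear rescaling; a minimal sketch (the function name is ours):

```python
def scale_to_domain(value, lb, ub):
    """Scale a decoded value in [0, 1) to the variable's domain,
    per V_value = (UB - LB) * value + LB (Eq. (25))."""
    return (ub - lb) * value + lb
```

With the example above, scaling 0.658752 into [-10, 10] gives 3.17504.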
4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables.
continuous-variable mountain (graph). These drops flow down along the mountain layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the nodes with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | alpha = 2
Soil updating | P_N = 0.99
Initial temperature | 50, beta = 10
Maximum temperature | 100
water drop's solution. Each water drop generates a solution to all the variables of the function; a water drop's solution is therefore composed of a set of visited nodes determined by the number of variables and the precision of each variable:
Solutions = [ WD_1: s, 1, 2, ..., m_{N×P} ;
              WD_2: s, 1, 2, ..., m_{N×P} ;
              WD_i: s, 1, 2, ..., m_{N×P} ;
              ... ;
              WD_k: s, 1, 2, ..., m_{N×P} ]   (26)
An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored by leaving less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
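Generating the initial random population of solutions, one node path per water drop as in (26), can be sketched as follows; the helper name and the use of one digit per layer are our reading of the representation:

```python
import random

def random_solutions(num_drops, num_vars, precision):
    """Generate one random node path (a digit per layer) per water
    drop; each path has num_vars * precision layers, per Eq. (26)."""
    layers = num_vars * precision
    return [[random.randint(0, 9) for _ in range(layers)]
            for _ in range(num_drops)]
```

The best of these random paths would then be evaluated and favored with reduced soil before the flow stage begins.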
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

V_new = 1 - (\beta \times V_old)   (27)

where \beta is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
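A direct sketch of the mutation in (27), with beta exposed as an optional argument for testing (by default it is drawn uniformly from [0, 1] as in the text):

```python
import random

def mutate(v_old, beta=None):
    """Flip-style mutation of Eq. (27): V_new = 1 - beta * V_old."""
    if beta is None:
        beta = random.random()  # uniform in [0, 1)
    return 1 - beta * v_old
```

For v_old in [0, 1] the result stays in [0, 1], and a large v_old tends to map to a small V_new, which is the flipping behavior described above.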
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, chosen after initial experiments.

The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is constant and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation; hence, taking the depth to equal the distance divided by the soil (see (8)) would not produce the intended effect.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions cover a variety of properties and types; in this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.

For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times, with a maximum of 500 iterations for all functions except those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed; the symbol "*" represents the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested and was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using

Success rate = (Number of successful runs / Total number of runs) \times 100.   (28)
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "best" columns represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA gave the same "best" performance.
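The success-rate computation of (28), with the 0.001 acceptance threshold stated above, can be sketched as:

```python
def success_rate(results, optimum, tol=1e-3):
    """Eq. (28): percentage of runs whose final objective value lies
    within `tol` of the known optimum (acceptance threshold 0.001)."""
    wins = sum(1 for r in results if abs(r - optimum) < tol)
    return 100.0 * wins / len(results)
```

For instance, four runs ending at 0.0004, 0.02, 0.0001, and 0.0009 against an optimum of 0 give a success rate of 75%.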
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst values, average values, best values, and average number of iterations, respectively. According to these P values, there is no significant difference between the
Table 3: Functions' characteristics.

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima
(3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.86901 | Many local minima
(5) | Griewank | -600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima
(7) | Levy | -10 | 10 | 2 | 0 | Many local minima
(8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima
(14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [-1.5, -3] | [4, 4] | 2 | -1.9133 | Plate-shaped
(21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [-3, -2] | [3, 2] | 2 | -1.0316 | Valley-shaped
(24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops
(29) | Beale | -4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [-5, 0] | [10, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other
(34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock | -1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other
(40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | -4 | 5 | 4 | 0 | Other
(42) | Wood | -5 | 5 | 4 | 0 | Other
(43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). "*" denotes the average number of iterations needed to reach the global solution; **boldface** marks the better "best" value where HCA and MHCA differ.

Number | HCA worst | HCA average | HCA best | * | MHCA worst | MHCA average | MHCA best | *
(1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35
(2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1
(3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4
(4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75
(7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8
(8) | 1.63E-05 | 4.70E-06 | **1.35E-31** | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25
(12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | **2.02E-04** | 323.75
(13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E-05 | 6.33E-06 | **0.00E+00** | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8
(21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | **4.28E-07** | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4
(24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | **4.10E-08** | 380.55
(25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55
(26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35
(27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6
(28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1
(29) | 9.25E-05 | 5.25E-05 | **3.41E-06** | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75
(30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341
(34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E-06 | 3.67E-06 | **6.63E-08** | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6
(38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45
(39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5
(40) | 1.94E-01 | 7.02E-02 | **1.64E-03** | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806
(41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | **1.96E-03** | 840.85
(42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | **1.00E-08** | 694.4
(43) | 1.05E+00 | 3.88E-01 | **1.47E-05** | 879.8 | 9.6E-01 | 3.10E-01 | 6.51E-03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean | | | | 378.45 | | | | 361.30
Journal of Optimization 17
Table 5: Average execution time per number of variables.

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, where the number of layers in the representation increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain because their working mechanism depends on the distribution of the entities in a multidimensional search space. If an entity wanders outside the function boundaries, its position must be reset by the algorithm.
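For contrast, the boundary-reset step that such position-based algorithms require can be as simple as clamping each coordinate back to its nearest bound. This is a hypothetical sketch of that repair step, not code from the paper or from any specific algorithm:

```python
def clamp(position, lower, upper):
    # Reset each out-of-bounds coordinate to the nearest boundary.
    return [min(max(x, lo), hi) for x, lo, hi in zip(position, lower, upper)]

print(clamp([3.2, -7.1, 0.5], [-5, -5, -5], [5, 5, 5]))  # [3.2, -5, 0.5]
```

The layered HCA representation sidesteps this step entirely, since every path through the layers decodes to an in-domain value.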
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred because it may lead to trapping in local optima.
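These criteria can be tallied mechanically from repeated runs. The following is a hypothetical sketch (the run records, tolerance, and the choice to average iterations over successful runs only are our assumptions, not from the paper):

```python
def summarize(runs, optimum, tol=1e-4):
    """Success rate (%) and average iterations over repeated runs.

    Each run is a (best_value, iterations) pair; a run counts as a
    success when its best value is within tol of the known optimum.
    """
    hits = [iters for best, iters in runs if abs(best - optimum) <= tol]
    rate = 100.0 * len(hits) / len(runs)
    avg_iters = sum(hits) / len(hits) if hits else float("nan")
    return rate, avg_iters

runs = [(1e-6, 310), (3e-5, 352), (2e-3, 500)]  # made-up run records
print(summarize(runs, optimum=0.0))  # (66.66666666666667, 331.0)
```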
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, along with the average number of iterations for each algorithm.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
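The W statistic behind these comparisons can be computed directly from the paired per-function results. This is a minimal pure-Python sketch (not the authors' code) of the Wilcoxon signed-rank W statistic, taken as the smaller of the positive- and negative-rank sums:

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank W statistic for paired samples.

    Zero differences are discarded, and tied absolute differences
    share the average of the ranks they occupy.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ordered = sorted(diffs, key=abs)
    rank_of = {}  # maps |difference| -> (average) rank
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        rank_of[abs(ordered[i])] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)
    w_minus = sum(rank_of[abs(d)] for d in diffs if d < 0)
    return min(w_plus, w_minus)

print(wilcoxon_w([1, 2, 3, 4, 5], [2, 1, 5, 2, 9]))  # 5.0
```

The resulting W is then compared against the critical value for the sample size, as in Tables 8 and 10.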
In addition, the average number of iterations is also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA is also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows there is a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Despite there being no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. For each function, the table lists the dimension, bound, best known result, and the worst, average, and best results together with the average number of iterations for WCA and for HCA. The functions compared are DeJong, Goldstein & Price I (at both ±2 and ±5), Goldstein & Price II, Branin, Martin & Gaddy, Rosenbrock (two variables at ±1.2 and at ±10, and four variables), Hypersphere (six variables), Shaffer, Six-Hump Camelback, Easton and Fenton, Wood, and Powell quartic. The mean number of iterations over these functions was 7000.6 for WCA and 463.8 for HCA.
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

Function name | D | Bound | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) DeJong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithm pair | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of global and local solutions for the Ackley function according to the number of iterations. The red line, "Local," represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (x1 = π, x2 = π). Solving this function is difficult, as the flatness of the landscape does not give any indication of the direction towards the global minimum.
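The Easom function is straightforward to implement from its formulation in Table 11, and a quick numerical check illustrates how flat the landscape is away from the optimum (a sketch, not the authors' code):

```python
import math

def easom(x1, x2):
    # f(x) = -cos(x1)*cos(x2)*exp(-(x1 - pi)^2 - (x2 - pi)^2)
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(easom(math.pi, math.pi))  # -1.0 (the global minimum)
print(easom(0.0, 0.0))          # about -2.7e-09: nearly flat away from the optimum
```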
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5 Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function, convergence at 383; function value versus number of iterations; legend: Global best, Local best).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point; approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at 480; function value versus number of iterations; legend: Global best, Local best).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at 475; function value versus number of iterations; legend: Global best, Local best).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it needs to demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something a bit different from what is available already. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations for the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a\exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d}x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(cx_i)\right) + a + \exp(1)$, with $a=20$, $b=0.2$, and $c=2\pi$
Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$
Drop-Wave: $f(x) = -\dfrac{1+\cos\left(12\sqrt{x_1^2+x_2^2}\right)}{0.5(x_1^2+x_2^2)+2}$
Gramacy & Lee (2012): $f(x) = \dfrac{\sin(10\pi x)}{2x} + (x-1)^4$
Griewank: $f(x) = \sum_{i=1}^{d}\frac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$
Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1-\frac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right|$
Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i-1)^2\left[1+10\sin^2(\pi w_i+1)\right] + (w_d-1)^2\left[1+\sin^2(2\pi w_d)\right]$, where $w_i = 1+\frac{x_i-1}{4}$, $\forall i = 1,\dots,d$
Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1-1)^2\left[1+\sin^2(3\pi x_2)\right] + (x_2-1)^2\left[1+\sin^2(2\pi x_2)\right]$
Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$
Schaffer N. 2: $f(x) = 0.5 + \dfrac{\sin^2(x_1^2-x_2^2) - 0.5}{\left[1+0.001(x_1^2+x_2^2)\right]^2}$
Schaffer N. 4: $f(x) = 0.5 + \dfrac{\cos^2\left(\sin\left(\left|x_1^2-x_2^2\right|\right)\right) - 0.5}{\left[1+0.001(x_1^2+x_2^2)\right]^2}$
Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d}x_i\sin\left(\sqrt{|x_i|}\right)$
Shubert: $f(x) = \left(\sum_{i=1}^{5}i\cos((i+1)x_1+i)\right)\left(\sum_{i=1}^{5}i\cos((i+1)x_2+i)\right)$
Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$
Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i}x_j^2$
Sphere (Hyper): $f(x) = \sum_{i=1}^{d}x_i^2$
Sum Squares: $f(x) = \sum_{i=1}^{d}ix_i^2$
Booth: $f(x) = (x_1+2x_2-7)^2 + (2x_1+x_2-5)^2$
Matyas: $f(x) = 0.26(x_1^2+x_2^2) - 0.48x_1x_2$
McCormick: $f(x) = \sin(x_1+x_2) + (x_1-x_2)^2 - 1.5x_1 + 2.5x_2 + 1$
Zakharov: $f(x) = \sum_{i=1}^{d}x_i^2 + \left(\sum_{i=1}^{d}0.5ix_i\right)^2 + \left(\sum_{i=1}^{d}0.5ix_i\right)^4$
Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1x_2 + x_2^2$
Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1x_2 + \left(-4+4x_2^2\right)x_2^2$
Dixon-Price: $f(x) = (x_1-1)^2 + \sum_{i=2}^{d}i\left(2x_i^2-x_{i-1}\right)^2$
Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1}-x_i^2\right)^2 + (x_i-1)^2\right]$
Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25}\frac{1}{i + (x_1-a_{1i})^6 + (x_2-a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$
Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1-\pi)^2 - (x_2-\pi)^2\right)$
Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\left(\frac{ix_i^2}{\pi}\right)$, where $m=10$
Beale: $f(x) = (1.5-x_1+x_1x_2)^2 + (2.25-x_1+x_1x_2^2)^2 + (2.625-x_1+x_1x_2^3)^2$
Branin: $f(x) = a\left(x_2-bx_1^2+cx_1-r\right)^2 + s(1-t)\cos(x_1) + s$, with $a=1$, $b=5.1/(4\pi^2)$, $c=5/\pi$, $r=6$, $s=10$, and $t=1/(8\pi)$
Goldstein-Price I: $f(x) = \left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right] \times \left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$
Goldstein-Price II: $f(x) = \exp\left\{\frac{1}{2}\left(x_1^2+x_2^2-25\right)^2\right\} + \sin^4(4x_1-3x_2) + \frac{1}{2}(2x_1+x_2-10)^2$
Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4-16x_i^2+5x_i\right)$
Gramacy & Lee (2008): $f(x) = x_1\exp\left(-x_1^2-x_2^2\right)$
Martin & Gaddy: $f(x) = (x_1-x_2)^2 + \left(\frac{x_1+x_2-10}{3}\right)^2$
Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1+x_2^2}{x_1^2} + \frac{x_1^2x_2^2+100}{(x_1x_2)^4}\right)\left(\frac{1}{10}\right)$
Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3}A_{ij}\left(x_j-P_{ij}\right)^2\right)$, where $\alpha=(1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$
Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3}+10x_{4i-2}\right)^2 + 5\left(x_{4i-1}-x_{4i}\right)^2 + \left(x_{4i-2}-2x_{4i-1}\right)^4 + 10\left(x_{4i-3}-x_{4i}\right)^4\right]$
Wood: $f(x_1,x_2,x_3,x_4) = 100\left(x_1^2-x_2\right)^2 + (x_1-1)^2 + (x_3-1)^2 + 90\left(x_3^2-x_4\right)^2 + 10.1\left[(x_2-1)^2+(x_4-1)^2\right] + 19.8(x_2-1)(x_4-1)$
De Jong (max): $f(x) = 3905.93 - 100\left(x_1^2-x_2\right)^2 - (1-x_1)^2$
Figure 13: Convergence of HCA on the Hypersphere function with six variables (Sphere (6v) convergence at 919; global-best solution value versus iteration number).
Conflicts of Interest
The authors declare that they have no conflicts of interest
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm - a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems - 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method - a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of Advanced Research on Electronic Commerce, Web Application, and Communication (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Journal of Optimization 13
Table 1: A summary of the main differences between IWD, WCA, and HCA.

| Criteria | IWD | WCA | HCA |
|---|---|---|---|
| Flow stage | Moving water drops | Changing streams' positions | Moving water drops |
| Choosing the next node | Based on the soil | Based on river and sea positions | Based on soil and path depth |
| Velocity | Always increases | N/A | Increases and decreases |
| Soil update | Removal | N/A | Removal and deposition |
| Carrying soil | Equal to the amount of soil removed from a path | N/A | Encoded with the solution quality |
| Evaporation | N/A | When the river position is very close to the sea position | Based on the temperature value, which is affected by the percentage of improvements in the last set of iterations |
| Evaporation rate | N/A | If the distance between a river and the sea is less than the threshold | Random number |
| Condensation | N/A | N/A | Enables information sharing (direct communication); updates the global-best solution, which is used to identify the collector |
| Precipitation | N/A | Randomly distributes a number of streams | Reinitializes dynamic parameters such as soil, velocity, and carrying soil |
where L is the layer number, m is the maximum layer number for each variable (the number of precision digits), and n is the node in that layer. The water drops generate values between zero and one for each variable. For example, if a water drop moves between nodes 2, 7, 5, 6, 2, and 4, respectively, then the variable value is 0.275624. The main drawback of this representation is that an increase in the number of high-precision variables leads to an increase in the search space.
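As an illustration of this decoding (a sketch in Python rather than the paper's MATLAB implementation; the function name is ours), each digit node chosen at layer L contributes at the L-th decimal place:

```python
def path_to_value(digits):
    """Decode a water drop's path (one digit node chosen per layer)
    into a variable value in [0, 1)."""
    return sum(d / 10 ** (i + 1) for i, d in enumerate(digits))

# The path 2 -> 7 -> 5 -> 6 -> 2 -> 4 from the example above:
print(round(path_to_value([2, 7, 5, 6, 2, 4]), 6))  # 0.275624
```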
4.2. Variables' Domain and Precision. As stated above, a function has at least one variable and up to n variables. The variables' values should lie within the function domain, which defines the set of possible values for each variable. In some functions all the variables share the same domain range, whereas in others the variables have different ranges. For simplicity, the value obtained by a water drop for each variable can be scaled to the domain of that variable:

V_value = (UB - LB) x Value + LB, (25)
where V_value is the real value of the variable, UB is the upper bound, and LB is the lower bound. For example, if the obtained value is 0.658752 and the variable's domain is [-10, 10], then the real value will be 3.17504. The values of continuous variables are expressed as real numbers; therefore, the precision (P) of the variables' values has to be specified. This precision determines the number of digits after the decimal point. Dealing with real numbers makes the algorithm faster than simply considering a graph of binary numbers, as there is no need to encode/decode the variables' values to and from binary.
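The scaling in (25) is a one-liner; the sketch below (function name ours) reproduces the worked example:

```python
def scale_to_domain(value, lb, ub):
    """Eq. (25): map a decoded value in [0, 1] to the variable domain [lb, ub]."""
    return (ub - lb) * value + lb

print(round(scale_to_domain(0.658752, -10, 10), 5))  # 3.17504
```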
4.3. Solution Generator. To find a solution for a function, a number of water drops are placed on the peak of this
Figure 8: Graph representation of continuous variables.
continuous variable mountain (graph). These drops flow down through the mountain's layers. A water drop chooses only one number (node) from each layer and moves to the next layer below. This process continues until all the water drops reach the last layer (ground level). Each water drop may generate a different solution during its flow. However, as the algorithm iterates, some water drops start to converge towards the best water drop's solution by selecting the node with the highest probability. The candidate solutions for a function can be represented as a matrix in which each row consists of a
Table 2: HCA parameters and their values.

| Parameter | Value |
|---|---|
| Number of water drops | 10 |
| Maximum number of iterations | 500/1000 |
| Initial soil on each edge | 10000 |
| Initial velocity | 100 |
| Initial depth | Edge length / soil on that edge |
| Initial carrying soil | 1 |
| Velocity updating | alpha = 2 |
| Soil updating | PN = 0.99 |
| Initial temperature | 50, beta = 10 |
| Maximum temperature | 100 |
water drop solution. Each water drop generates a solution for all the variables of the function. Therefore, a water drop solution is composed of a set of visited nodes determined by the number of variables and the precision of each variable:
$$\text{Solutions} = \begin{bmatrix} \mathrm{WD}_1^s & 1 & 2 & \cdots & m \\ \mathrm{WD}_2^s & 1 & 2 & \cdots & m \\ \mathrm{WD}_i^s & 1 & 2 & \cdots & m \\ \vdots \\ \mathrm{WD}_k^s & 1 & 2 & \cdots & m \end{bmatrix}_{N \times P} \quad (26)$$
An initial solution is generated randomly for each water drop based on the function's dimensionality. These solutions are then evaluated, and the best combination is selected. The selected solution is favored with less soil on the edges belonging to it. Subsequently, the water drops start flowing and generating solutions.
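The descent described above can be sketched as a loop over variables and layers. This is an illustrative Python sketch only: in the actual HCA the per-node probabilities come from the soil and path-depth values of the flow stage, which are stood in for here by a caller-supplied weight table, and all names are ours.

```python
import random

def generate_solution(num_vars, precision, weights, rng=random.Random(0)):
    """One water drop descends the layers, picking one digit node (0-9)
    per layer for each variable; returns the full list of visited nodes."""
    path = []
    for _ in range(num_vars):
        for layer in range(precision):
            digits = list(range(10))
            # Roulette-wheel choice; weights[layer][d] stands in for the
            # soil/depth-based probability of digit d at this layer.
            path.append(rng.choices(digits, weights=weights[layer])[0])
    return path

# Uniform placeholder weights, one variable with six precision digits:
uniform = [[1] * 10 for _ in range(6)]
sol = generate_solution(1, 6, uniform)
print(len(sol))  # 6
```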
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation that changes the solutions of the water drops during collision at the condensation stage. The mutation is applied only to the selected water drops and, within a water drop's solution, only to randomly selected variables. For real numbers, a mutation operation can be implemented as follows:

V_new = 1 - (beta x V_old), (27)

where beta is a uniform random number in the range [0, 1], V_new is the new value after the mutation, and V_old is the original value. This mutation helps to flip the value of the variable: if the value is large, it becomes small, and vice versa.
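A sketch of the operation in (27); here beta is passed explicitly so the flip is visible, whereas the algorithm draws it uniformly from [0, 1] (function name ours):

```python
def mutate(v_old, beta):
    """Eq. (27): V_new = 1 - (beta * V_old), with beta ~ U[0, 1] in the HCA."""
    return 1 - beta * v_old

# Large values map to small ones and vice versa:
print(round(mutate(0.9, 0.8), 2), round(mutate(0.1, 0.8), 2))  # 0.28 0.92
```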
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm, determined after initial experiments.
The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) deepens with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, taking the depth to equal the distance over the soil (see (8)) would not affect the outcome as intended.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions cover a variety of properties and types; in this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.
For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum of 500 iterations for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed. The symbol "*" represents the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested and was able to find the global minimum within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using
Success rate = (Number of successful runs / Total number of runs) x 100. (28)
A solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions except Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface in the "Best" columns represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA gave the same "best" performance.
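The acceptance rule and (28) combine into a few lines (an illustrative sketch with our own names, not the paper's code):

```python
def success_rate(run_results, optimum, tol=0.001):
    """Eq. (28): percentage of runs landing within tol of the optimum."""
    successes = sum(1 for r in run_results if abs(r - optimum) < tol)
    return 100.0 * successes / len(run_results)

# Hypothetical example: 12 successful runs out of 20 gives 60%.
runs = [0.0005] * 12 + [0.05] * 8
print(success_rate(runs, 0.0))  # 60.0
```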
The results of HCA and MHCA were compared using the Wilcoxon signed-rank test. The P values obtained were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, and best values and the average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics.

| Number | Function name | Lower bound | Upper bound | D | f(x*) | Type |
|---|---|---|---|---|---|---|
| (1) | Ackley | -32.768 | 32.768 | 2 | 0 | Many local minima |
| (2) | Cross-In-Tray | -10 | 10 | 2 | -2.06261 | Many local minima |
| (3) | Drop-Wave | -5.12 | 5.12 | 2 | -1 | Many local minima |
| (4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | -0.869011 | Many local minima |
| (5) | Griewank | -600 | 600 | 2 | 0 | Many local minima |
| (6) | Holder Table | -10 | 10 | 2 | -19.2085 | Many local minima |
| (7) | Levy | -10 | 10 | 2 | 0 | Many local minima |
| (8) | Levy N.13 | -10 | 10 | 2 | 0 | Many local minima |
| (9) | Rastrigin | -5.12 | 5.12 | 2 | 0 | Many local minima |
| (10) | Schaffer N.2 | -100 | 100 | 2 | 0 | Many local minima |
| (11) | Schaffer N.4 | -100 | 100 | 2 | 0.292579 | Many local minima |
| (12) | Schwefel | -500 | 500 | 2 | 0 | Many local minima |
| (13) | Shubert | -5.12 | 5.12 | 2 | -186.7309 | Many local minima |
| (14) | Bohachevsky | -100 | 100 | 2 | 0 | Bowl-shaped |
| (15) | Rotated Hyper-Ellipsoid | -65.536 | 65.536 | 2 | 0 | Bowl-shaped |
| (16) | Sphere (Hyper) | -5.12 | 5.12 | 2 | 0 | Bowl-shaped |
| (17) | Sum Squares | -10 | 10 | 2 | 0 | Bowl-shaped |
| (18) | Booth | -10 | 10 | 2 | 0 | Plate-shaped |
| (19) | Matyas | -10 | 10 | 2 | 0 | Plate-shaped |
| (20) | McCormick | [-1.5, 4] | [-3, 4] | 2 | -1.9133 | Plate-shaped |
| (21) | Zakharov | -5 | 10 | 2 | 0 | Plate-shaped |
| (22) | Three-Hump Camel | -5 | 5 | 2 | 0 | Valley-shaped |
| (23) | Six-Hump Camel | [-3, 3] | [-2, 2] | 2 | -1.0316 | Valley-shaped |
| (24) | Dixon-Price | -10 | 10 | 2 | 0 | Valley-shaped |
| (25) | Rosenbrock | -5 | 10 | 2 | 0 | Valley-shaped |
| (26) | Shekel's Foxholes (De Jong N.5) | -65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops |
| (27) | Easom | -100 | 100 | 2 | -1 | Steep ridges/drops |
| (28) | Michalewicz | 0 | 3.14 | 2 | -1.8013 | Steep ridges/drops |
| (29) | Beale | -4.5 | 4.5 | 2 | 0 | Other |
| (30) | Branin | [-5, 10] | [0, 15] | 2 | 0.39788 | Other |
| (31) | Goldstein-Price I | -2 | 2 | 2 | 3 | Other |
| (32) | Goldstein-Price II | -5 | 5 | 2 | 1 | Other |
| (33) | Styblinski-Tang | -5 | 5 | 2 | -78.33198 | Other |
| (34) | Gramacy & Lee (2008) | -2 | 6 | 2 | -0.4288819 | Other |
| (35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other |
| (36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other |
| (37) | Rosenbrock D12 | -1.2 | 1.2 | 2 | 0 | Other |
| (38) | Sphere | -5.12 | 5.12 | 3 | 0 | Other |
| (39) | Hartmann 3-D | 0 | 1 | 3 | -3.86278 | Other |
| (40) | Rosenbrock | -1.2 | 1.2 | 4 | 0 | Other |
| (41) | Powell | -4 | 5 | 4 | 0 | Other |
| (42) | Wood | -5 | 5 | 4 | 0 | Other |
| (43) | Sphere | -5.12 | 5.12 | 6 | 0 | Other |
| (44) | De Jong | -2.048 | 2.048 | 2 | 3905.93 | Maximize function |
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). "*" denotes the average number of iterations; boldface marks the better "Best" value.

| Number | HCA Worst | HCA Average | HCA Best | HCA * | MHCA Worst | MHCA Average | MHCA Best | MHCA * |
|---|---|---|---|---|---|---|---|---|
| (1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35 |
| (2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1 |
| (3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4 |
| (4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95 |
| (5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8 |
| (6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75 |
| (7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8 |
| (8) | 1.63E-05 | 4.70E-06 | **1.35E-31** | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3 |
| (9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9 |
| (10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85 |
| (11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25 |
| (12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | **2.02E-04** | 323.75 |
| (13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45 |
| (14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25 |
| (15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291 |
| (16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6 |
| (17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6 |
| (18) | 2.03E-05 | 6.33E-06 | **0.00E+00** | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5 |
| (19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55 |
| (20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8 |
| (21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | **4.28E-07** | 242.05 |
| (22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9 |
| (23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4 |
| (24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | **4.10E-08** | 380.55 |
| (25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55 |
| (26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35 |
| (27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6 |
| (28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1 |
| (29) | 9.25E-05 | 5.25E-05 | **3.41E-06** | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75 |
| (30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9 |
| (31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8 |
| (32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95 |
| (33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341 |
| (34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65 |
| (35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1 |
| (36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8 |
| (37) | 9.22E-06 | 3.67E-06 | **6.63E-08** | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6 |
| (38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45 |
| (39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5 |
| (40) | 1.94E-01 | 7.02E-02 | **1.64E-03** | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806 |
| (41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | **1.96E-03** | 840.85 |
| (42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | **1.00E-08** | 694.4 |
| (43) | 1.05E+00 | 3.88E-01 | **1.47E-05** | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2 |
| (44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85 |
| Mean | | | | 378.45 | | | | 361.30 |
Table 5: Average execution time per number of variables (Hypersphere function).

| | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations) |
|---|---|---|---|---|
| Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962 |
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 61%.
The success rate was lower for functions with four or more variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA's performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6. The symbol "*" represents the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA. In addition, the average number of iterations was compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the accuracy of the HCA's solutions for functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44] and Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and we also applied the T-test to obtain P values along with the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate, which was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed by the Wilcoxon test and T-test on the obtained results; see Table 10. Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. "*" denotes the average number of iterations; boldface marks the better "Best" value.

| Function name | D | Bound | Best result | WCA Worst | WCA Avg | WCA Best | WCA * | HCA Worst | HCA Avg | HCA Best | HCA * |
|---|---|---|---|---|---|---|---|---|---|---|---|
| De Jong | 2 | +/-2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8 |
| Goldstein & Price I | 2 | +/-2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | **3.00E+00** | 345.8 |
| Branin | 2 | +/-2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9 |
| Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 262.1 |
| Rosenbrock | 2 | +/-1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | **6.63E-08** | 370.9 |
| Rosenbrock | 2 | +/-10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | **0.00E+00** | 350.6 |
| Rosenbrock | 4 | +/-1.2 | 0 | 7.98E-04 | 2.12E-04 | **3.23E-07** | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5 |
| Hypersphere | 6 | +/-5.12 | 0 | 9.22E-03 | 6.01E-03 | **1.34E-07** | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879 |
| Shaffer | 2 | +/-100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 8942 | 0.00E+00 | 0.00E+00 | **0.00E+00** | 284.1 |
| Goldstein & Price I | 2 | +/-5 | 3 | 3 | 3.00003 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8 |
| Goldstein & Price II | 2 | +/-5 | 1 | 1.1291 | 1.0118 | 1.0000 | 4750 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5 |
| Six-Hump Camelback | 2 | +/-10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 310.5 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2 |
| Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6 |
| Wood | 4 | +/-5 | 0 | 3.81E-05 | 1.58E-06 | **1.30E-10** | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754 |
| Powell quartic | 4 | +/-5 | 0 | 2.87E-09 | 6.09E-10 | **1.12E-11** | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864 |
| Mean | | | | | | | 700.06 | | | | 463.8 |
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

| Number | Function name | D | B | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| (1) | De Jong | 2 | +/-2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8 |
| (2) | Goldstein & Price I | 2 | +/-2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8 |
| (3) | Branin | 2 | +/-2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9 |
| (4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05 |
| (5a) | Rosenbrock | 2 | +/-1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9 |
| (5b) | Rosenbrock | 2 | +/-10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55 |
| (6) | Rosenbrock | 4 | +/-1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5 |
| (7) | Hypersphere | 6 | +/-5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879 |
| (8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1 |
| Mean | | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8 |
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

| Algorithm versus HCA | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of w) | Significant at P <= 0.05? |
|---|---|---|---|---|
| CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No |
| ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes |
| GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes |
| BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes |
| GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No |
| WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No |
| ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No |
Table 9: Comparison of CGA, ECTS, HCA, and MHCA. Boldface indicates the lowest value.

| Number | Function name | D | CGA | ECTS | HCA | MHCA |
|---|---|---|---|---|---|---|
| (1) | Shubert | 2 | 575 | - | **368** | 391.45 |
| (2) | Bohachevsky | 2 | 370 | - | **264.9** | 288.25 |
| (3) | Zakharov | 2 | 620 | **195** | 328.15 | 242.05 |
| (4) | Rosenbrock | 2 | 960 | 480 | 350.55 | **282.55** |
| (5) | Easom | 2 | 1504 | 1284 | **354.6** | 370.35 |
| (6) | Branin | 2 | 620 | **245** | 379.9 | 393.75 |
| (7) | Goldstein-Price 1 | 2 | 410 | **231** | 345.8 | 409.9 |
| (8) | Hartmann | 3 | 582 | 548 | 392.05 | **327.5** |
| (9) | Sphere | 3 | 750 | 338 | 371.3 | **330.45** |
| Mean | | | 710.1 | 474.4 | 350.6 | 337.4 |
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

| Comparison | P value (T-test) | Z-value (Wilcoxon) | W-value (at critical value of w) | Significant at P <= 0.05? |
|---|---|---|---|---|
| CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes |
| CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes |
| ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No |
| ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No |
| HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No |
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line, "Local," represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is -1 at (X1 = pi, X2 = pi). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work

In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function, convergence at 383; axes: number of iterations vs. function value; series: global best and local best).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point; approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better and in others not significantly worse than those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at 480; series: global best and local best).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at 475; series: global best and local best).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a\exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d}x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, $c = 2\pi$

Cross-In-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \tfrac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\dfrac{1 + \cos\left(12\sqrt{x_1^2+x_2^2}\right)}{0.5(x_1^2+x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \dfrac{\sin(10\pi x)}{2x} + (x-1)^4$

Griewank: $f(x) = \sum_{i=1}^{d}\dfrac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\dfrac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \tfrac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i-1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d-1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \tfrac{x_i-1}{4}$ for all $i = 1,\dots,d$

Levy N.13: $f(x) = \sin^2(3\pi x_1) + (x_1-1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2-1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N.2: $f(x) = 0.5 + \dfrac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2+x_2^2)\right]^2}$

Schaffer N.4: $f(x) = 0.5 + \dfrac{\cos\left(\sin\left(|x_1^2 - x_2^2|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2+x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48x_1x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5ix_i\right)^2 + \left(\sum_{i=1}^{d} 0.5ix_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \dfrac{x_1^6}{6} + x_1x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \dfrac{x_1^4}{3}\right)x_1^2 + x_1x_2 + (-4 + 4x_2^2)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i(2x_i^2 - x_{i-1})^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N.5): $f(x) = \left(0.002 + \sum_{i=1}^{25}\dfrac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1-\pi)^2 - (x_2-\pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\left(\dfrac{ix_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1x_2)^2 + (2.25 - x_1 + x_1x_2^2)^2 + (2.625 - x_1 + x_1x_2^3)^2$

Branin: $f(x) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1-t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]$

Goldstein-Price II: $f(x) = \exp\left(\tfrac{1}{2}(x_1^2 + x_2^2 - 25)^2\right) + \sin^4(4x_1 - 3x_2) + \tfrac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \tfrac{1}{2}\sum_{i=1}^{d}(x_i^4 - 16x_i^2 + 5x_i)$

Gramacy & Lee (2008): $f(x) = x_1\exp(-x_1^2 - x_2^2)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\dfrac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \dfrac{1 + x_2^2}{x_1^2} + \dfrac{x_1^2x_2^2 + 100}{(x_1x_2)^4}\right)\left(\dfrac{1}{10}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3}A_{ij}(x_j - P_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
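As a quick check on the tabulated formulations, any of them can be implemented directly; the sketch below evaluates the Ackley function from Table 11 and is purely illustrative (it is not part of the HCA implementation, and the function name is our own).

```python
import math

def ackley(x, a=20, b=0.2, c=2 * math.pi):
    """Ackley function as formulated in Table 11; d = len(x)."""
    d = len(x)
    sum_sq = sum(xi ** 2 for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + a + math.exp(1))

# The global minimum f(0, 0) = 0 is recovered up to floating-point error
print(abs(ackley([0.0, 0.0])) < 1e-12)  # True
```

Evaluating such a closed form at the known optimum (here, the origin) is a cheap sanity check before benchmarking any optimizer against it.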
[Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution; function value over 1000 iterations, convergence at iteration 919).]
Conflicts of Interest
The authors declare that they have no conflicts of interest
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7-44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150-194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814-3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25-39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841-856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145-150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405-436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849-857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93-108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980-1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130-1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd I*PROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132-1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226-3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224-229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151-166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730-741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57-85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327-338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458-464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151-175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370-382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Table 2: HCA parameters and their values.

Parameter | Value
Number of water drops | 10
Maximum number of iterations | 500/1000
Initial soil on each edge | 10000
Initial velocity | 100
Initial depth | Edge length / soil on that edge
Initial carrying soil | 1
Velocity updating | α = 2
Soil updating | P_N = 0.99
Initial temperature | 50, β = 10
Maximum temperature | 100
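For concreteness, the Table 2 settings can be gathered into a single configuration structure; the field names below are illustrative assumptions, not identifiers from the authors' MATLAB code, and the reading of β as a temperature-updating parameter is likewise an assumption.

```python
# HCA parameter values from Table 2 (field names are assumed for illustration)
HCA_PARAMS = {
    "num_water_drops": 10,
    "max_iterations": 500,        # 1000 for functions with more than 3 variables
    "initial_soil_per_edge": 10000,
    "initial_velocity": 100,
    "initial_carrying_soil": 1,
    "velocity_update_alpha": 2,   # alpha in the velocity-updating rule
    "soil_update_PN": 0.99,       # P_N in the soil-updating rule
    "initial_temperature": 50,
    "temperature_beta": 10,       # beta, assumed to govern temperature updating
    "max_temperature": 100,
}

# Initial depth is not a constant: it is derived per edge as
# depth(edge) = edge_length / soil_on_that_edge
```

Keeping the tunable parameters in one structure makes it easier to adjust them per problem domain, as the text above notes is sometimes necessary.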
water drop solution. Each water drop will generate a solution to all the variables of the function. Therefore, the water drop solution is composed of a set of visited nodes, based on the number of variables and the precision of each variable.
$$\text{Solutions} = \begin{bmatrix} \text{WD}_s^1 & 1 & 2 & \cdots & m \\ \text{WD}_s^2 & 1 & 2 & \cdots & m \\ \vdots & & & & \\ \text{WD}_s^i & 1 & 2 & \cdots & m \\ \vdots & & & & \\ \text{WD}_s^k & 1 & 2 & \cdots & m \end{bmatrix}_{N \times P} \quad (26)$$
An initial solution is generated randomly for each water drop based on the function dimensionality. Then these solutions are evaluated, and the best combination is selected. The selected solution is favored with less soil on the edges belonging to this solution. Subsequently, the water drops start flowing and generating solutions.
4.4. Local Mutation Improvement. The algorithm can be augmented with a mutation operation to change the solutions of the water drops during collision at the condensation stage. The mutation is applied only on the selected water drops. Moreover, the mutation is applied randomly on selected variables from a water drop solution. For real numbers, a mutation operation can be implemented as follows:

$$V_{\text{new}} = 1 - (\beta \times V_{\text{old}}) \quad (27)$$

where $\beta$ is a uniform random number in the range [0, 1], $V_{\text{new}}$ is the new value after the mutation, and $V_{\text{old}}$ is the original value. This mutation helps to flip the value of the variable: if the value is large, then it becomes small, and vice versa.
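As a minimal sketch, the mutation in (27) can be applied to randomly chosen variables of a water-drop solution. The per-variable mutation probability and the assumption that variable values are scaled to [0, 1] are illustrative choices of ours, not details taken from the paper.

```python
import random

def mutate(solution, per_var_prob=0.2):
    """Apply V_new = 1 - (beta * V_old), Eq. (27), to randomly selected
    variables; beta is uniform in [0, 1]. Values are assumed scaled to
    [0, 1], so large values flip to small ones and vice versa."""
    mutated = list(solution)
    for i in range(len(mutated)):
        if random.random() < per_var_prob:
            beta = random.random()   # uniform random number in [0, 1)
            mutated[i] = 1 - beta * mutated[i]
    return mutated

# With inputs in [0, 1], mutated values stay in [0, 1]
print(all(0.0 <= v <= 1.0 for v in mutate([0.9, 0.1, 0.5])))  # True
```

Note that the operator is self-bounding on [0, 1]: since beta * V_old lies in [0, 1), the mutated value 1 - beta * V_old never leaves the unit interval.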
4.5. HCA Parameters and Assumptions. In the HCA, a number of parameters are considered to control the flow of the algorithm and to help generate a solution. Some of these parameters need to be adjusted according to the problem domain. Table 2 lists the parameters and the values used in this algorithm after initial experiments.

The distance between nodes in the same layer is not specified (i.e., there is no connection), because a water drop cannot choose two nodes from the same layer. The distance between nodes in different layers is the same and equal to one unit. Therefore, the depth is adjusted to equal the inverse of the number of times a node is selected. Under this setting, a path between (x, y) will deepen with an increasing number of selections of node y after node x. This adjustment is required because the internodal distance is constant, as stipulated in the problem representation. Hence, considering the depth to equal the distance over the soil (see (8)) will not affect the outcome as supposed.
4.6. Experimental Results and Analysis. A number of benchmark functions were selected from the literature; see [4] for a full description of these functions. The mathematical formulations of the functions used are listed in the Appendix. The functions have a variety of properties, with different types. In this paper, only unconstrained functions are considered. The experiments were conducted on a PC with a Core i5 CPU and 8 GB RAM, using MATLAB on Microsoft Windows 7. The characteristics of these functions are summarized in Table 3.

For all the functions, the precision is assumed to be six significant figures for each variable. The HCA was executed twenty times with a maximum number of iterations of 500 for all functions, except for those with more than three variables, for which the maximum was 1000. The algorithm was tested without mutation (HCA) and with mutation (MHCA) for all the functions.
The results are provided in Table 4. The algorithm numbers in Table 4 (the column named "Number") map to the same column in Table 3. For each function, the worst, average, and best values are listed. The symbol "*" represents the average number of iterations needed to reach the global solution. The results show that the algorithm gave satisfactory results for all of the functions tested, and it was able to find the global minimum solution within a small number of iterations for some functions. In order to evaluate the algorithm's performance, we calculated the success rate for each function using

$$\text{Success rate} = \left(\frac{\text{Number of successful runs}}{\text{Total number of runs}}\right) \times 100. \quad (28)$$
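Equation (28), together with an acceptance tolerance on the distance to the known optimum, can be written as a small helper; the function and parameter names below are illustrative.

```python
def success_rate(run_results, optimum, tol=0.001):
    """Eq. (28): percentage of runs whose obtained solution is within
    tol of the known optimum value."""
    successful = sum(1 for r in run_results if abs(r - optimum) < tol)
    return (successful / len(run_results)) * 100

# 3 of 4 runs land within 0.001 of the optimum 0.0
print(success_rate([0.0002, 0.0009, 0.5, 0.0001], 0.0))  # 75.0
```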
The solution is accepted if the difference between the obtained solution and the optimum value is less than 0.001. The algorithm achieved a 100% success rate for all of the functions, except for Powell-4v [HCA (60%), MHCA (90%)], Sphere-6v [HCA (60%), MHCA (40%)], Rosenbrock-4v [HCA (70%), MHCA (100%)], and Wood-4v [HCA (50%), MHCA (80%)]. The numbers highlighted in boldface text in the "best" column represent the best value achieved in the cases where HCA or MHCA performed best; in all other cases, HCA and MHCA were found to give the same "best" performance.

The results of HCA and MHCA were compared using the Wilcoxon Signed-Rank test. The P values obtained in the test were 0.2460, 0.1389, 0.8572, and 0.2543 for the worst, average, best, and average number of iterations, respectively. According to the P values, there is no significant difference between the
Table 3: Functions' characteristics (D: number of variables; f(x*): optimum value).

Number | Function name | Lower bound | Upper bound | D | f(x*) | Type
(1) | Ackley | −32.768 | 32.768 | 2 | 0 | Many local minima
(2) | Cross-In-Tray | −10 | 10 | 2 | −2.06261 | Many local minima
(3) | Drop-Wave | −5.12 | 5.12 | 2 | −1 | Many local minima
(4) | Gramacy & Lee (2012) | 0.5 | 2.5 | 1 | −0.86901 | Many local minima
(5) | Griewank | −600 | 600 | 2 | 0 | Many local minima
(6) | Holder Table | −10 | 10 | 2 | −19.2085 | Many local minima
(7) | Levy | −10 | 10 | 2 | 0 | Many local minima
(8) | Levy N.13 | −10 | 10 | 2 | 0 | Many local minima
(9) | Rastrigin | −5.12 | 5.12 | 2 | 0 | Many local minima
(10) | Schaffer N.2 | −100 | 100 | 2 | 0 | Many local minima
(11) | Schaffer N.4 | −100 | 100 | 2 | 0.292579 | Many local minima
(12) | Schwefel | −500 | 500 | 2 | 0 | Many local minima
(13) | Shubert | −5.12 | 5.12 | 2 | −186.7309 | Many local minima
(14) | Bohachevsky | −100 | 100 | 2 | 0 | Bowl-shaped
(15) | Rotated Hyper-Ellipsoid | −65.536 | 65.536 | 2 | 0 | Bowl-shaped
(16) | Sphere (Hyper) | −5.12 | 5.12 | 2 | 0 | Bowl-shaped
(17) | Sum Squares | −10 | 10 | 2 | 0 | Bowl-shaped
(18) | Booth | −10 | 10 | 2 | 0 | Plate-shaped
(19) | Matyas | −10 | 10 | 2 | 0 | Plate-shaped
(20) | McCormick | [−1.5, 4] | [−3, 4] | 2 | −1.9133 | Plate-shaped
(21) | Zakharov | −5 | 10 | 2 | 0 | Plate-shaped
(22) | Three-Hump Camel | −5 | 5 | 2 | 0 | Valley-shaped
(23) | Six-Hump Camel | [−3, 3] | [−2, 2] | 2 | −1.0316 | Valley-shaped
(24) | Dixon-Price | −10 | 10 | 2 | 0 | Valley-shaped
(25) | Rosenbrock | −5 | 10 | 2 | 0 | Valley-shaped
(26) | Shekel's Foxholes (De Jong N.5) | −65.536 | 65.536 | 2 | 0.99800 | Steep ridges/drops
(27) | Easom | −100 | 100 | 2 | −1 | Steep ridges/drops
(28) | Michalewicz | 0 | 3.14 | 2 | −1.8013 | Steep ridges/drops
(29) | Beale | −4.5 | 4.5 | 2 | 0 | Other
(30) | Branin | [−5, 10] | [0, 15] | 2 | 0.39788 | Other
(31) | Goldstein-Price I | −2 | 2 | 2 | 3 | Other
(32) | Goldstein-Price II | −5 | 5 | 2 | 1 | Other
(33) | Styblinski-Tang | −5 | 5 | 2 | −78.33198 | Other
(34) | Gramacy & Lee (2008) | −2 | 6 | 2 | −0.4288819 | Other
(35) | Martin & Gaddy | 0 | 10 | 2 | 0 | Other
(36) | Easton and Fenton | 0 | 10 | 2 | 1.744152 | Other
(37) | Rosenbrock | −1.2 | 1.2 | 2 | 0 | Other
(38) | Sphere | −5.12 | 5.12 | 3 | 0 | Other
(39) | Hartmann 3-D | 0 | 1 | 3 | −3.86278 | Other
(40) | Rosenbrock | −1.2 | 1.2 | 4 | 0 | Other
(41) | Powell | −4 | 5 | 4 | 0 | Other
(42) | Wood | −5 | 5 | 4 | 0 | Other
(43) | Sphere | −5.12 | 5.12 | 6 | 0 | Other
(44) | De Jong | −2.048 | 2.048 | 2 | 3905.93 | Maximize function
Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). The symbol "*" denotes the average number of iterations; values marked † were shown in boldface in the original, indicating the better "best" result of HCA versus MHCA.

Number | Worst (HCA) | Average (HCA) | Best (HCA) | * (HCA) | Worst (MHCA) | Average (MHCA) | Best (MHCA) | * (MHCA)
(1) | 8.88E−16 | 8.88E−16 | 8.88E−16 | 220.2 | 8.88E−16 | 8.88E−16 | 8.88E−16 | 279.35
(2) | −2.06E+00 | −2.06E+00 | −2.06E+00 | 362 | −2.06E+00 | −2.06E+00 | −2.06E+00 | 196.1
(3) | −1.00E+00 | −1.00E+00 | −1.00E+00 | 330.8 | −1.00E+00 | −1.00E+00 | −1.00E+00 | 220.4
(4) | −8.69E−01 | −8.69E−01 | −8.69E−01 | 297.65 | −8.69E−01 | −8.69E−01 | −8.69E−01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | −1.92E+01 | −1.92E+01 | −1.92E+01 | 321.7 | −1.92E+01 | −1.92E+01 | −1.92E+01 | 302.75
(7) | 4.23E−07 | 1.31E−07 | 1.50E−32 | 399.5 | 3.60E−07 | 8.65E−08 | 1.50E−32 | 394.8
(8) | 1.63E−05 | 4.70E−06 | 1.35E−31† | 399.45 | 9.97E−06 | 3.06E−06 | 1.60E−09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E−01 | 2.93E−01 | 2.93E−01 | 358.6 | 2.93E−01 | 2.93E−01 | 2.93E−01 | 336.25
(12) | 6.82E−04 | 4.19E−04 | 2.08E−04 | 337.6 | 1.28E−03 | 6.42E−04 | 2.02E−04† | 323.75
(13) | −1.87E+02 | −1.87E+02 | −1.87E+02 | 368 | −1.87E+02 | −1.87E+02 | −1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E−05 | 6.33E−06 | 0.00E+00† | 433.5 | 1.97E−05 | 5.78E−06 | 2.96E−08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | −1.91E+00 | −1.91E+00 | −1.91E+00 | 282.65 | −1.91E+00 | −1.91E+00 | −1.91E+00 | 225.8
(21) | 1.12E−04 | 5.07E−05 | 4.73E−06 | 328.15 | 3.94E−04 | 1.30E−04 | 4.28E−07† | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | −1.03E+00 | −1.03E+00 | −1.03E+00 | 348.15 | −1.03E+00 | −1.03E+00 | −1.03E+00 | 332.4
(24) | 3.08E−04 | 9.69E−05 | 3.89E−06 | 394.25 | 5.46E−04 | 2.67E−04 | 4.10E−08† | 380.55
(25) | 2.03E−09 | 2.14E−10 | 0.00E+00 | 350.55 | 2.25E−10 | 3.21E−11 | 0.00E+00 | 282.55
(26) | 9.98E−01 | 9.98E−01 | 9.98E−01 | 354.6 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 370.35
(27) | −9.97E−01 | −9.98E−01 | −1.00E+00 | 354.15 | −9.97E−01 | −9.99E−01 | −1.00E+00 | 344.6
(28) | −1.80E+00 | −1.80E+00 | −1.80E+00 | 428 | −1.80E+00 | −1.80E+00 | −1.80E+00 | 398.1
(29) | 9.25E−05 | 5.25E−05 | 3.41E−06† | 379.9 | 1.33E−04 | 7.37E−05 | 2.56E−05 | 393.75
(30) | 3.98E−01 | 3.98E−01 | 3.98E−01 | 345.8 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | −7.83E+01 | −7.83E+01 | −7.83E+01 | 351 | −7.83E+01 | −7.83E+01 | −7.83E+01 | 341
(34) | −4.29E−01 | −4.29E−01 | −4.29E−01 | 262.05 | −4.29E−01 | −4.29E−01 | −4.29E−01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E−06 | 3.67E−06 | 6.63E−08† | 382.2 | 4.66E−06 | 2.60E−06 | 2.79E−07 | 351.6
(38) | 4.43E−07 | 8.27E−08 | 0.00E+00 | 371.3 | 2.13E−08 | 4.50E−09 | 0.00E+00 | 330.45
(39) | −3.86E+00 | −3.86E+00 | −3.86E+00 | 392.05 | −3.86E+00 | −3.86E+00 | −3.86E+00 | 327.5
(40) | 1.94E−01 | 7.02E−02 | 1.64E−03† | 816.5 | 8.75E−02 | 3.04E−02 | 2.79E−03 | 806
(41) | 2.42E−01 | 9.13E−02 | 3.91E−03 | 863.95 | 1.12E−01 | 4.26E−02 | 1.96E−03† | 840.85
(42) | 1.12E+00 | 2.91E−01 | 7.62E−08 | 753.95 | 2.67E−01 | 6.10E−02 | 1.00E−08† | 694.4
(43) | 1.05E+00 | 3.88E−01 | 1.47E−05† | 879 | 8.96E−01 | 3.10E−01 | 6.51E−03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean (*) | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables.

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, where the number of layers in the representation increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain because their working mechanism depends on the distribution of the entities in a multidimensional search space. If an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), success rate, and solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution. Another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred because it may lead to trapping in local-optima solutions.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6. The symbol "*" represents the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon Signed-Rank test, indicating no significant difference in performance between WCA and HCA.

In addition, the average number of iterations was also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and the T-test to obtain P values alongside the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, and BA versus HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
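The W statistic underlying these Wilcoxon signed-rank comparisons can be computed directly. The following is a minimal pure-Python sketch (assuming paired samples with no zero differences and no ties in |d|), not the authors' implementation:

```python
def wilcoxon_w(sample_a, sample_b):
    """Wilcoxon signed-rank W statistic for paired samples.

    Assumes no ties in the absolute differences; zero differences are
    discarded, as in the standard procedure.
    """
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    # Rank the differences by absolute magnitude (rank 1 = smallest |d|)
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    # W is the smaller rank sum; significance follows when W is at or
    # below the critical value for the number of non-zero pairs.
    return min(w_plus, w_minus)

print(wilcoxon_w([10, 8, 6], [7, 9, 1]))  # 1
```

The test is significant when W does not exceed the tabulated critical value for the given number of pairs, which is how the "(at critical value of w)" entries in Tables 8 and 10 are read.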
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Despite there being no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
18 Journal of Optimization
Table 6: Comparison between WCA and HCA in terms of results and iterations. For each of 15 benchmark instances (De Jong; Goldstein & Price I, at bounds ±2 and ±5; Goldstein & Price II; Branin; Martin & Gaddy; Rosenbrock with 2 variables at bounds ±1.2 and ±10; Rosenbrock with 4 variables; Hypersphere with 6 variables; Shaffer; Six-Hump Camelback; Easton & Fenton; Wood; and Powell quartic), the table lists the dimension D, the bounds, the best known result, and the worst, average, and best obtained values together with the average number of iterations for WCA and for HCA; a final row gives the mean number of iterations for each algorithm.
Table 7: Comparison between HCA and other algorithms (CACO-DE, ANTS, GA, BA, GEM, WCA, and ER-WCA) in terms of the average number of iterations on the functions (1) De Jong, (2) Goldstein & Price I, (3) Branin, (4) Martin & Gaddy, (5a/5b) Rosenbrock with 2 variables at bounds ±1.2 and ±10, (6) Rosenbrock with 4 variables, (7) Hypersphere with 6 variables, and (8) Shaffer; a final row gives each algorithm's mean.
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA   P value (T-test)   Z-value (Wilcoxon)   W-value (at critical value of w)   Significant at P <= 0.05?
CACO-DE versus HCA     0.324033           -0.9435              6 (at 0)                           No
ANTS versus HCA        0.014953           -2.5205              0 (at 3)                           Yes
GA versus HCA          0.005598           -2.2014              0 (at 0)                           Yes
BA versus HCA          0.188434           -2.5205              0 (at 3)                           Yes
GEM versus HCA         0.333414           -1.5403              7 (at 3)                           No
WCA versus HCA         0.379505           -0.2962              20 (at 5)                          No
ER-WCA versus HCA      0.832326           -0.5331              18 (at 5)                          No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number   Function name       D   CGA     ECTS    HCA      MHCA
(1)      Shubert             2   575     -       368      391.45
(2)      Bohachevsky         2   370     -       264.9    288.25
(3)      Zakharov            2   620     195     328.15   242.05
(4)      Rosenbrock          2   960     480     350.55   282.55
(5)      Easom               2   1504    1284    354.6    370.35
(6)      Branin              2   620     245     379.9    393.75
(7)      Goldstein-Price I   2   410     231     345.8    409.9
(8)      Hartmann            3   582     548     392.05   327.5
(9)      Sphere              3   750     338     371.3    330.45
Mean                             710.1   474.4   350.6    337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithm pair     P value (T-test)   Z-value (Wilcoxon)   W-value (at critical value of w)   Significant at P <= 0.05?
CGA versus HCA     0.012635           -2.6656              0 (at 5)                           Yes
CGA versus MHCA    0.012361           -2.6656              0 (at 5)                           Yes
ECTS versus HCA    0.456634           -0.3381              12 (at 2)                          No
ECTS versus MHCA   0.369119           -0.8452              9 (at 2)                           No
HCA versus MHCA    0.470531           -0.7701              16 (at 5)                          No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red "Local" line represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to −1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
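For reference, the Easom function (its formulation appears in Table 11) can be implemented directly; the minimal sketch below evaluates it at the global minimum (π, π) and at a point in the flat region:

```python
import math

def easom(x1, x2):
    """Easom function: global minimum of -1 at (pi, pi), nearly flat elsewhere."""
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(easom(math.pi, math.pi))  # -1.0
print(easom(0.0, 0.0))          # vanishingly small: the flat region offers no gradient cue
```

The second evaluation illustrates why the landscape is hard for a search algorithm: far from (π, π) the function is essentially zero everywhere, so candidate solutions receive almost no guidance.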
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized this function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5 Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the iteration number (Ackley function, converged at iteration 383).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (converged at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (converged at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: f(x) = −a exp(−b √((1/d) Σ_{i=1}^{d} x_i²)) − exp((1/d) Σ_{i=1}^{d} cos(c x_i)) + a + exp(1), with a = 20, b = 0.2, c = 2π
Cross-In-Tray: f(x) = −0.0001 (|sin(x1) sin(x2) exp(|100 − √(x1² + x2²)/π|)| + 1)^0.1
Drop-Wave: f(x) = −(1 + cos(12 √(x1² + x2²))) / (0.5 (x1² + x2²) + 2)
Gramacy & Lee (2012): f(x) = sin(10πx)/(2x) + (x − 1)⁴
Griewank: f(x) = Σ_{i=1}^{d} x_i²/4000 − Π_{i=1}^{d} cos(x_i/√i) + 1
Holder Table: f(x) = −|sin(x1) cos(x2) exp(|1 − √(x1² + x2²)/π|)|
Levy: f(x) = sin²(πw1) + Σ_{i=1}^{d−1} (w_i − 1)² [1 + 10 sin²(πw_i + 1)] + (w_d − 1)² [1 + sin²(2πw_d)], where w_i = 1 + (x_i − 1)/4 for all i = 1, …, d
Levy N.13: f(x) = sin²(3πx1) + (x1 − 1)² [1 + sin²(3πx2)] + (x2 − 1)² [1 + sin²(2πx2)]
Rastrigin: f(x) = 10d + Σ_{i=1}^{d} [x_i² − 10 cos(2πx_i)]
Schaffer N.2: f(x) = 0.5 + (sin²(x1² − x2²) − 0.5) / [1 + 0.001 (x1² + x2²)]²
Schaffer N.4: f(x) = 0.5 + (cos²(sin(|x1² − x2²|)) − 0.5) / [1 + 0.001 (x1² + x2²)]²
Schwefel: f(x) = 418.9829d − Σ_{i=1}^{d} x_i sin(√|x_i|)
Shubert: f(x) = (Σ_{i=1}^{5} i cos((i + 1) x1 + i)) (Σ_{i=1}^{5} i cos((i + 1) x2 + i))
Bohachevsky: f(x) = x1² + 2x2² − 0.3 cos(3πx1) − 0.4 cos(4πx2) + 0.7
Rotated Hyper-Ellipsoid: f(x) = Σ_{i=1}^{d} Σ_{j=1}^{i} x_j²
Sphere (Hyper): f(x) = Σ_{i=1}^{d} x_i²
Sum Squares: f(x) = Σ_{i=1}^{d} i x_i²
Booth: f(x) = (x1 + 2x2 − 7)² + (2x1 + x2 − 5)²
Matyas: f(x) = 0.26 (x1² + x2²) − 0.48 x1 x2
McCormick: f(x) = sin(x1 + x2) + (x1 − x2)² − 1.5x1 + 2.5x2 + 1
Zakharov: f(x) = Σ_{i=1}^{d} x_i² + (Σ_{i=1}^{d} 0.5 i x_i)² + (Σ_{i=1}^{d} 0.5 i x_i)⁴
Three-Hump Camel: f(x) = 2x1² − 1.05x1⁴ + x1⁶/6 + x1 x2 + x2²
Six-Hump Camel: f(x) = (4 − 2.1x1² + x1⁴/3) x1² + x1 x2 + (−4 + 4x2²) x2²
Dixon-Price: f(x) = (x1 − 1)² + Σ_{i=2}^{d} i (2x_i² − x_{i−1})²
Rosenbrock: f(x) = Σ_{i=1}^{d−1} [100 (x_{i+1} − x_i²)² + (x_i − 1)²]
Shekel's Foxholes (De Jong N.5): f(x) = (0.002 + Σ_{i=1}^{25} 1/(i + (x1 − a_{1i})⁶ + (x2 − a_{2i})⁶))^{−1}, where a = (−32, −16, 0, 16, 32, −32, …, 0, 16, 32; −32, −32, −32, −32, −32, −16, …, 32, 32, 32)
Easom: f(x) = −cos(x1) cos(x2) exp(−(x1 − π)² − (x2 − π)²)
Michalewicz: f(x) = −Σ_{i=1}^{d} sin(x_i) sin^{2m}(i x_i²/π), where m = 10
Beale: f(x) = (1.5 − x1 + x1 x2)² + (2.25 − x1 + x1 x2²)² + (2.625 − x1 + x1 x2³)²
Branin: f(x) = a (x2 − b x1² + c x1 − r)² + s (1 − t) cos(x1) + s, with a = 1, b = 5.1/(4π²), c = 5/π, r = 6, s = 10, t = 1/(8π)
Goldstein-Price I: f(x) = [1 + (x1 + x2 + 1)² (19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)] × [30 + (2x1 − 3x2)² (18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)]
Goldstein-Price II: f(x) = exp(0.5 (x1² + x2² − 25)²) + sin⁴(4x1 − 3x2) + 0.5 (2x1 + x2 − 10)²
Styblinski-Tang: f(x) = (1/2) Σ_{i=1}^{d} (x_i⁴ − 16x_i² + 5x_i)
Gramacy & Lee (2008): f(x) = x1 exp(−x1² − x2²)
Martin & Gaddy: f(x) = (x1 − x2)² + ((x1 + x2 − 10)/3)²
Easton and Fenton: f(x) = (1/10) (12 + x1² + (1 + x2²)/x1² + (x1² x2² + 100)/((x1 x2)⁴))
Hartmann 3-D: f(x) = −Σ_{i=1}^{4} α_i exp(−Σ_{j=1}^{3} A_{ij} (x_j − P_{ij})²), where α = (1.0, 1.2, 3.0, 3.2)ᵀ, A has rows (3.0, 10, 30), (0.1, 10, 35), (3.0, 10, 30), (0.1, 10, 35), and P = 10⁻⁴ × rows (3689, 1170, 2673), (4699, 4387, 7470), (1091, 8732, 5547), (381, 5743, 8828)
Powell: f(x) = Σ_{i=1}^{d/4} [(x_{4i−3} + 10x_{4i−2})² + 5 (x_{4i−1} − x_{4i})² + (x_{4i−2} − 2x_{4i−1})⁴ + 10 (x_{4i−3} − x_{4i})⁴]
Wood: f(x1, x2, x3, x4) = 100 (x1² − x2)² + (x1 − 1)² + (x3 − 1)² + 90 (x3² − x4)² + 10.1 ((x2 − 1)² + (x4 − 1)²) + 19.8 (x2 − 1)(x4 − 1)
De Jong (max): f(x) = 3905.93 − 100 (x1² − x2)² − (1 − x1)²
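As an illustration of how these formulations translate into code, here is a minimal Python implementation of the Ackley function from Table 11 (with the stated constants a = 20, b = 0.2, and c = 2π):

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Ackley function (Table 11): global minimum of 0 at the origin."""
    d = len(x)
    mean_square = sum(xi ** 2 for xi in x) / d
    mean_cos = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(mean_square)) - math.exp(mean_cos) + a + math.exp(1)

# ackley([0.0, 0.0]) evaluates to ~0 (the global minimum), and the value
# grows quickly away from the origin because of the oscillatory cosine term.
```

The other benchmark functions in the table can be written in the same vector-of-variables style, which makes them easy to plug into any population-based optimizer for testing.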
Figure 13: Convergence of HCA on the Hypersphere function with six variables (converged at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm — a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method — a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm — a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures, Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Table 3: Functions' characteristics.

Number   Function name                     Lower bound           Upper bound   D   f(x*)        Type
(1)      Ackley                            −32.768               32.768        2   0            Many local minima
(2)      Cross-In-Tray                     −10                   10            2   −2.06261     Many local minima
(3)      Drop-Wave                         −5.12                 5.12          2   −1           Many local minima
(4)      Gramacy & Lee (2012)              0.5                   2.5           1   −0.86901     Many local minima
(5)      Griewank                          −600                  600           2   0            Many local minima
(6)      Holder Table                      −10                   10            2   −19.2085     Many local minima
(7)      Levy                              −10                   10            2   0            Many local minima
(8)      Levy N.13                         −10                   10            2   0            Many local minima
(9)      Rastrigin                         −5.12                 5.12          2   0            Many local minima
(10)     Schaffer N.2                      −100                  100           2   0            Many local minima
(11)     Schaffer N.4                      −100                  100           2   0.292579     Many local minima
(12)     Schwefel                          −500                  500           2   0            Many local minima
(13)     Shubert                           −5.12                 5.12          2   −186.7309    Many local minima
(14)     Bohachevsky                       −100                  100           2   0            Bowl-shaped
(15)     Rotated Hyper-Ellipsoid           −65.536               65.536        2   0            Bowl-shaped
(16)     Sphere (Hyper)                    −5.12                 5.12          2   0            Bowl-shaped
(17)     Sum Squares                       −10                   10            2   0            Bowl-shaped
(18)     Booth                             −10                   10            2   0            Plate-shaped
(19)     Matyas                            −10                   10            2   0            Plate-shaped
(20)     McCormick                         [−1.5, 4] × [−3, 4]                 2   −1.9133      Plate-shaped
(21)     Zakharov                          −5                    10            2   0            Plate-shaped
(22)     Three-Hump Camel                  −5                    5             2   0            Valley-shaped
(23)     Six-Hump Camel                    [−3, 3] × [−2, 2]                   2   −1.0316      Valley-shaped
(24)     Dixon-Price                       −10                   10            2   0            Valley-shaped
(25)     Rosenbrock                        −5                    10            2   0            Valley-shaped
(26)     Shekel's Foxholes (De Jong N.5)   −65.536               65.536        2   0.99800      Steep ridges/drops
(27)     Easom                             −100                  100           2   −1           Steep ridges/drops
(28)     Michalewicz                       0                     3.14          2   −1.8013      Steep ridges/drops
(29)     Beale                             −4.5                  4.5           2   0            Other
(30)     Branin                            [−5, 10] × [0, 15]                  2   0.39788      Other
(31)     Goldstein-Price I                 −2                    2             2   3            Other
(32)     Goldstein-Price II                −5                    5             2   1            Other
(33)     Styblinski-Tang                   −5                    5             2   −78.33198    Other
(34)     Gramacy & Lee (2008)              −2                    6             2   −0.4288819   Other
(35)     Martin & Gaddy                    0                     10            2   0            Other
(36)     Easton and Fenton                 0                     10            2   1.744152     Other
(37)     Rosenbrock                        −1.2                  1.2           2   0            Other
(38)     Sphere                            −5.12                 5.12          3   0            Other
(39)     Hartmann 3-D                      0                     1             3   −3.86278     Other
(40)     Rosenbrock                        −1.2                  1.2           4   0            Other
(41)     Powell                            −4                    5             4   0            Other
(42)     Wood                              −5                    5             4   0            Other
(43)     Sphere                            −5.12                 5.12          6   0            Other
(44)     De Jong                           −2.048                2.048         2   3905.93      Maximize function
Journal of Optimization

Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA).

Number | HCA Worst | HCA Average | HCA Best | HCA Avg. iter. | MHCA Worst | MHCA Average | MHCA Best | MHCA Avg. iter.
(1) | 8.88E-16 | 8.88E-16 | 8.88E-16 | 220.2 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 279.35
(2) | -2.06E+00 | -2.06E+00 | -2.06E+00 | 362 | -2.06E+00 | -2.06E+00 | -2.06E+00 | 196.1
(3) | -1.00E+00 | -1.00E+00 | -1.00E+00 | 330.8 | -1.00E+00 | -1.00E+00 | -1.00E+00 | 220.4
(4) | -8.69E-01 | -8.69E-01 | -8.69E-01 | 297.65 | -8.69E-01 | -8.69E-01 | -8.69E-01 | 345.95
(5) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 212.25 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.8
(6) | -1.92E+01 | -1.92E+01 | -1.92E+01 | 321.7 | -1.92E+01 | -1.92E+01 | -1.92E+01 | 302.75
(7) | 4.23E-07 | 1.31E-07 | 1.50E-32 | 399.5 | 3.60E-07 | 8.65E-08 | 1.50E-32 | 394.8
(8) | 1.63E-05 | 4.70E-06 | 1.35E-31 | 399.45 | 9.97E-06 | 3.06E-06 | 1.60E-09 | 366.3
(9) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 289.4 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 352.9
(10) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 333.85
(11) | 2.93E-01 | 2.93E-01 | 2.93E-01 | 358.6 | 2.93E-01 | 2.93E-01 | 2.93E-01 | 336.25
(12) | 6.82E-04 | 4.19E-04 | 2.08E-04 | 337.6 | 1.28E-03 | 6.42E-04 | 2.02E-04 | 323.75
(13) | -1.87E+02 | -1.87E+02 | -1.87E+02 | 368 | -1.87E+02 | -1.87E+02 | -1.87E+02 | 391.45
(14) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 264.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.25
(15) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 309.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 291
(16) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 283.15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 293.6
(17) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 305.95 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 271.6
(18) | 2.03E-05 | 6.33E-06 | 0.00E+00 | 433.5 | 1.97E-05 | 5.78E-06 | 2.96E-08 | 363.5
(19) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 258.35 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 287.55
(20) | -1.91E+00 | -1.91E+00 | -1.91E+00 | 282.65 | -1.91E+00 | -1.91E+00 | -1.91E+00 | 225.8
(21) | 1.12E-04 | 5.07E-05 | 4.73E-06 | 328.15 | 3.94E-04 | 1.30E-04 | 4.28E-07 | 242.05
(22) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 238.8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 288.9
(23) | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.15 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 332.4
(24) | 3.08E-04 | 9.69E-05 | 3.89E-06 | 394.25 | 5.46E-04 | 2.67E-04 | 4.10E-08 | 380.55
(25) | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.55 | 2.25E-10 | 3.21E-11 | 0.00E+00 | 282.55
(26) | 9.98E-01 | 9.98E-01 | 9.98E-01 | 354.6 | 9.98E-01 | 9.98E-01 | 9.98E-01 | 370.35
(27) | -9.97E-01 | -9.98E-01 | -1.00E+00 | 354.15 | -9.97E-01 | -9.99E-01 | -1.00E+00 | 344.6
(28) | -1.80E+00 | -1.80E+00 | -1.80E+00 | 428 | -1.80E+00 | -1.80E+00 | -1.80E+00 | 398.1
(29) | 9.25E-05 | 5.25E-05 | 3.41E-06 | 379.9 | 1.33E-04 | 7.37E-05 | 2.56E-05 | 393.75
(30) | 3.98E-01 | 3.98E-01 | 3.98E-01 | 345.8 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 409.9
(31) | 3.00E+00 | 3.00E+00 | 3.00E+00 | 255.5 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 310.8
(32) | 1.00E+00 | 1.00E+00 | 1.00E+00 | 410.7 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 379.95
(33) | -7.83E+01 | -7.83E+01 | -7.83E+01 | 351 | -7.83E+01 | -7.83E+01 | -7.83E+01 | 341
(34) | -4.29E-01 | -4.29E-01 | -4.29E-01 | 262.05 | -4.29E-01 | -4.29E-01 | -4.29E-01 | 286.65
(35) | 0.00E+00 | 0.00E+00 | 0.00E+00 | 370.9 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 347.1
(36) | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.75 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 408.8
(37) | 9.22E-06 | 3.67E-06 | 6.63E-08 | 382.2 | 4.66E-06 | 2.60E-06 | 2.79E-07 | 351.6
(38) | 4.43E-07 | 8.27E-08 | 0.00E+00 | 371.3 | 2.13E-08 | 4.50E-09 | 0.00E+00 | 330.45
(39) | -3.86E+00 | -3.86E+00 | -3.86E+00 | 392.05 | -3.86E+00 | -3.86E+00 | -3.86E+00 | 327.5
(40) | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5 | 8.75E-02 | 3.04E-02 | 2.79E-03 | 806
(41) | 2.42E-01 | 9.13E-02 | 3.91E-03 | 863.95 | 1.12E-01 | 4.26E-02 | 1.96E-03 | 840.85
(42) | 1.12E+00 | 2.91E-01 | 7.62E-08 | 753.95 | 2.67E-01 | 6.10E-02 | 1.00E-08 | 694.4
(43) | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879 | 8.96E-01 | 3.10E-01 | 6.51E-03 | 505.2
(44) | 3.91E+03 | 3.91E+03 | 3.91E+03 | 314.8 | 3.91E+03 | 3.91E+03 | 3.91E+03 | 373.85
Mean (Avg. iter.) | | | | 378.45 | | | | 361.30
Table 5: Average execution time per number of variables.

Hypersphere function | 2 variables (500 iterations) | 3 variables (500 iterations) | 4 variables (1000 iterations) | 6 variables (1000 iterations)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, in which the number of layers increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
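To make the boundary-handling issue concrete, the two repair strategies most commonly used by such algorithms can be sketched as follows. This is an illustrative sketch of our own (the helper names `clamp` and `reflect` are not from the paper):

```python
# Illustrative sketch: two common ways a swarm-style algorithm can repair
# an entity coordinate that has wandered outside the function domain.

def clamp(x, lo, hi):
    """Reset an out-of-bounds coordinate to the nearest domain boundary."""
    return max(lo, min(hi, x))

def reflect(x, lo, hi):
    """Bounce an out-of-bounds coordinate back into the domain."""
    while x < lo or x > hi:
        if x < lo:
            x = lo + (lo - x)   # mirror across the lower bound
        else:
            x = hi - (x - hi)   # mirror across the upper bound
    return x
```

For example, `clamp(6.0, -5.12, 5.12)` returns 5.12, the upper bound of the Sphere domain used in the experiments, while `reflect(5.5, -5.12, 5.12)` mirrors the excess back inside the domain.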
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), the success rate, and the solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to becoming trapped in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The results obtained by both algorithms are displayed in Table 6, in which the last column for each algorithm gives the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
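For readers who wish to reproduce such a comparison, the signed-rank W statistic reported later in Tables 8 and 10 can be computed as below. This is our own minimal implementation of the standard statistic, not the authors' code:

```python
# Minimal Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples.
# Zero differences are discarded; tied |differences| get their average rank.

def wilcoxon_w(a, b):
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        # group ties on |difference| and assign the average rank (1-based)
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

The result is judged significant when W is at or below the critical value for the given sample size, which is how the "(at critical value of w)" entries in Table 8 are read.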
In addition, the average number of iterations was compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the accuracy of the HCA solutions on functions with more than two variables was lower than that of WCA.
HCA is also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results. We also applied the T-test to obtain P values along with the W-values. The resulting P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where numbers in boldface indicate the lowest value.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations.

Function name | D | Bound | Best result | WCA Worst | WCA Avg. | WCA Best | WCA Avg. iter. | HCA Worst | HCA Avg. | HCA Best | HCA Avg. iter.
De Jong | 2 | ±2.048 | 3905.93 | 3906.121 | 3905.940 | 3905.93 | 684 | 3905.93 | 3905.93 | 3905.93 | 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 | 3.000561 | 3.00002 | 980 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 | 0.398272 | 0.397731 | 377 | 3.98E-01 | 3.98E-01 | 3.98E-01 | 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 | 4.16E-04 | 5.01E-06 | 57 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E-02 | 1.34E-03 | 2.81E-05 | 174 | 9.22E-06 | 3.67E-06 | 6.63E-08 | 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E-04 | 4.32E-04 | 1.12E-06 | 623 | 2.03E-09 | 2.14E-10 | 0.00E+00 | 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E-04 | 2.12E-04 | 3.23E-07 | 266 | 1.94E-01 | 7.02E-02 | 1.64E-03 | 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E-03 | 6.01E-03 | 1.34E-07 | 101 | 1.05E+00 | 3.88E-01 | 1.47E-05 | 879
Shaffer | 2 | ±100 | 0 | 9.71E-03 | 1.16E-03 | 2.61E-05 | 8942 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3.00003 | 3 | 3 | 2400 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 | 1.01181 | 1 | 47500 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 255.5
Six-Hump Camelback | 2 | ±10 | -1.0316 | -1.0316 | -1.0316 | -1.0316 | 3105 | -1.03E+00 | -1.03E+00 | -1.03E+00 | 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 | 1.7441 | 1.7441 | 650 | 1.74E+00 | 1.74E+00 | 1.74E+00 | 385.6
Wood | 4 | ±5 | 0 | 3.81E-05 | 1.58E-06 | 1.30E-10 | 15650 | 1.12E+00 | 2.91E-01 | 7.62E-08 | 754
Powell quartic | 4 | ±5 | 0 | 2.87E-09 | 6.09E-10 | 1.12E-11 | 23500 | 2.42E-01 | 9.13E-02 | 3.91E-03 | 864
Mean (Avg. iter.) | | | | | | | 7000.6 | | | | 463.8
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

Function name | D | Bound | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithm pair | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
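The local-versus-global bookkeeping behind these convergence plots can be sketched as follows, using simple random search as a stand-in for the HCA itself (all names here are illustrative, not from the paper). The "local best" is the best solution produced within the current iteration and may fluctuate, while the "global best" only ever improves, so its curve is monotonically non-increasing:

```python
# Illustrative convergence tracker: per-iteration ("local") best versus the
# running ("global") best, as plotted in the paper's convergence figures.
import random

def track_convergence(f, dim, lo, hi, iterations=100, samples=20, seed=1):
    rng = random.Random(seed)
    global_best = float("inf")
    local_curve, global_curve = [], []
    for _ in range(iterations):
        # best candidate generated during this iteration only
        local_best = min(
            f([rng.uniform(lo, hi) for _ in range(dim)]) for _ in range(samples)
        )
        global_best = min(global_best, local_best)
        local_curve.append(local_best)
        global_curve.append(global_best)
    return local_curve, global_curve

sphere = lambda x: sum(v * v for v in x)
local, best = track_convergence(sphere, dim=2, lo=-5.12, hi=5.12)
```

Plotting `local` against `best` reproduces the qualitative shape of Figures 9, 11, and 12: a noisy local series around a staircase-shaped global series.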
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to -1 at (X1 = π, X2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized this function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the size of the solution search space.
5. Conclusion and Future Work

In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function; function value against iterations 0-500; global-best and local-best series; converged at iteration 383).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (function value against iterations 0-500; converged at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (function value against iterations 0-500; converged at iteration 475).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations for the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d} \cos(c x_i)\right) + a + \exp(1)$, where $a = 20$, $b = 0.2$, $c = 2\pi$

Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2 \left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2 \left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \dots, d$

Levy N.13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2 \left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2 \left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d} \left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N.2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schaffer N.4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i + 1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i + 1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d} \sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N.5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s$, where $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)\right]$

Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}(x_1^2 + x_2^2 - 25)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d} (x_i^4 - 16x_i^2 + 5x_i)$

Gramacy & Lee (2008): $f(x) = x_1 \exp(-x_1^2 - x_2^2)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \frac{1}{10}\left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}(x_j - P_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4} \left[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
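As a usage sketch, two of the functions in Table 11 can be transcribed into Python for checking an optimizer against the known optima. This is our own transcription, not code from the paper:

```python
# Python transcriptions of the Ackley and Rosenbrock benchmark functions
# (Table 11). Ackley has its global minimum 0 at the origin; Rosenbrock has
# its global minimum 0 at (1, ..., 1).
from math import cos, exp, pi, sqrt

def ackley(x, a=20.0, b=0.2, c=2 * pi):
    d = len(x)
    s1 = sum(v * v for v in x) / d          # (1/d) * sum of squares
    s2 = sum(cos(c * v) for v in x) / d     # (1/d) * sum of cosines
    return -a * exp(-b * sqrt(s1)) - exp(s2) + a + exp(1)

def rosenbrock(x):
    return sum(
        100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
        for i in range(len(x) - 1)
    )
```

Evaluating these at the known optima (e.g., `ackley([0.0, 0.0])` and `rosenbrock([1.0, 1.0, 1.0])`) is a quick sanity check before benchmarking any optimizer against Table 11.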
Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution against iterations 0-1000; converged at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7-44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150-194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814-3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," in Lecture Notes in Computer Science, vol. 993, pp. 25-39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841-856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145-150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405-436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849-857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93-108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980-1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130-1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm - a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems (IPROMS) 2nd Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method - a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132-1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226-3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224-229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151-166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58-71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730-741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57-85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327-338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458-464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151-175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370-382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Journal of Optimization

Table 4: The obtained results with no mutation (HCA) and with mutation (MHCA). For each algorithm the columns give the worst, average, and best solution found, followed by the average number of iterations.

No. | HCA: Worst / Average / Best / Avg. iter. | MHCA: Worst / Average / Best / Avg. iter.
(1) | 8.88E-16 / 8.88E-16 / 8.88E-16 / 220.2 | 8.88E-16 / 8.88E-16 / 8.88E-16 / 279.35
(2) | -2.06E+00 / -2.06E+00 / -2.06E+00 / 362 | -2.06E+00 / -2.06E+00 / -2.06E+00 / 196.1
(3) | -1.00E+00 / -1.00E+00 / -1.00E+00 / 330.8 | -1.00E+00 / -1.00E+00 / -1.00E+00 / 220.4
(4) | -8.69E-01 / -8.69E-01 / -8.69E-01 / 297.65 | -8.69E-01 / -8.69E-01 / -8.69E-01 / 345.95
(5) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 212.25 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 284.8
(6) | -1.92E+01 / -1.92E+01 / -1.92E+01 / 321.7 | -1.92E+01 / -1.92E+01 / -1.92E+01 / 302.75
(7) | 4.23E-07 / 1.31E-07 / 1.50E-32 / 399.5 | 3.60E-07 / 8.65E-08 / 1.50E-32 / 394.8
(8) | 1.63E-05 / 4.70E-06 / 1.35E-31 / 399.45 | 9.97E-06 / 3.06E-06 / 1.60E-09 / 366.3
(9) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 289.4 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 352.9
(10) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 284.1 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 333.85
(11) | 2.93E-01 / 2.93E-01 / 2.93E-01 / 358.6 | 2.93E-01 / 2.93E-01 / 2.93E-01 / 336.25
(12) | 6.82E-04 / 4.19E-04 / 2.08E-04 / 337.6 | 1.28E-03 / 6.42E-04 / 2.02E-04 / 323.75
(13) | -1.87E+02 / -1.87E+02 / -1.87E+02 / 368 | -1.87E+02 / -1.87E+02 / -1.87E+02 / 391.45
(14) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 264.9 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 288.25
(15) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 309.9 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 291
(16) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 283.15 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 293.6
(17) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 305.95 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 271.6
(18) | 2.03E-05 / 6.33E-06 / 0.00E+00 / 433.5 | 1.97E-05 / 5.78E-06 / 2.96E-08 / 363.5
(19) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 258.35 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 287.55
(20) | -1.91E+00 / -1.91E+00 / -1.91E+00 / 282.65 | -1.91E+00 / -1.91E+00 / -1.91E+00 / 225.8
(21) | 1.12E-04 / 5.07E-05 / 4.73E-06 / 328.15 | 3.94E-04 / 1.30E-04 / 4.28E-07 / 242.05
(22) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 238.8 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 288.9
(23) | -1.03E+00 / -1.03E+00 / -1.03E+00 / 348.15 | -1.03E+00 / -1.03E+00 / -1.03E+00 / 332.4
(24) | 3.08E-04 / 9.69E-05 / 3.89E-06 / 394.25 | 5.46E-04 / 2.67E-04 / 4.10E-08 / 380.55
(25) | 2.03E-09 / 2.14E-10 / 0.00E+00 / 350.55 | 2.25E-10 / 3.21E-11 / 0.00E+00 / 282.55
(26) | 9.98E-01 / 9.98E-01 / 9.98E-01 / 354.6 | 9.98E-01 / 9.98E-01 / 9.98E-01 / 370.35
(27) | -9.97E-01 / -9.98E-01 / -1.00E+00 / 354.15 | -9.97E-01 / -9.99E-01 / -1.00E+00 / 344.6
(28) | -1.80E+00 / -1.80E+00 / -1.80E+00 / 428 | -1.80E+00 / -1.80E+00 / -1.80E+00 / 398.1
(29) | 9.25E-05 / 5.25E-05 / 3.41E-06 / 379.9 | 1.33E-04 / 7.37E-05 / 2.56E-05 / 393.75
(30) | 3.98E-01 / 3.98E-01 / 3.98E-01 / 345.8 | 3.98E-01 / 3.98E-01 / 3.98E-01 / 409.9
(31) | 3.00E+00 / 3.00E+00 / 3.00E+00 / 255.5 | 3.00E+00 / 3.00E+00 / 3.00E+00 / 310.8
(32) | 1.00E+00 / 1.00E+00 / 1.00E+00 / 410.7 | 1.00E+00 / 1.00E+00 / 1.00E+00 / 379.95
(33) | -7.83E+01 / -7.83E+01 / -7.83E+01 / 351 | -7.83E+01 / -7.83E+01 / -7.83E+01 / 341
(34) | -4.29E-01 / -4.29E-01 / -4.29E-01 / 262.05 | -4.29E-01 / -4.29E-01 / -4.29E-01 / 286.65
(35) | 0.00E+00 / 0.00E+00 / 0.00E+00 / 370.9 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 347.1
(36) | 1.74E+00 / 1.74E+00 / 1.74E+00 / 385.75 | 1.74E+00 / 1.74E+00 / 1.74E+00 / 408.8
(37) | 9.22E-06 / 3.67E-06 / 6.63E-08 / 382.2 | 4.66E-06 / 2.60E-06 / 2.79E-07 / 351.6
(38) | 4.43E-07 / 8.27E-08 / 0.00E+00 / 371.3 | 2.13E-08 / 4.50E-09 / 0.00E+00 / 330.45
(39) | -3.86E+00 / -3.86E+00 / -3.86E+00 / 392.05 | -3.86E+00 / -3.86E+00 / -3.86E+00 / 327.5
(40) | 1.94E-01 / 7.02E-02 / 1.64E-03 / 816.5 | 8.75E-02 / 3.04E-02 / 2.79E-03 / 806
(41) | 2.42E-01 / 9.13E-02 / 3.91E-03 / 863.95 | 1.12E-01 / 4.26E-02 / 1.96E-03 / 840.85
(42) | 1.12E+00 / 2.91E-01 / 7.62E-08 / 753.95 | 2.67E-01 / 6.10E-02 / 1.00E-08 / 694.4
(43) | 1.05E+00 / 3.88E-01 / 1.47E-05 / 879 | 8.96E-01 / 3.10E-01 / 6.51E-03 / 505.2
(44) | 3.91E+03 / 3.91E+03 / 3.91E+03 / 314.8 | 3.91E+03 / 3.91E+03 / 3.91E+03 / 373.85
Mean iterations | 378.45 (HCA) | 361.30 (MHCA)
Table 5: Average execution time per number of variables (Hypersphere function).

Variables (iterations) | 2 (500) | 3 (500) | 4 (1000) | 6 (1000)
Average execution time (seconds) | 2.8186 | 4.0832 | 10.716 | 15.962
results. This indicates that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 6.1%.
The success rate was lower for functions with more than four variables because of the problem representation, where the number of layers in the representation increases with the number of variables. Consequently, the HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation, namely, the small effect of the function domain on the solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space: if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
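The timing procedure behind Table 5 amounts to averaging wall-clock time over repeated runs with a fixed iteration budget. A minimal sketch of such a measurement harness (the function and names here are illustrative, not taken from the paper):

```python
import time

def average_runtime(run, repeats=10):
    """Average wall-clock time of run() over several repeats."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        run()
        total += time.perf_counter() - start
    return total / repeats

# Example: time 500 evaluations of the 2-variable Hypersphere function.
def fixed_budget_run():
    for i in range(500):
        x, y = i * 0.01, i * 0.02
        _ = x * x + y * y  # Hypersphere (Sphere) objective

elapsed = average_runtime(fixed_budget_run)
print(f"average execution time: {elapsed:.6f} s")
```

Because the iteration budget is fixed, differences between functions of the same dimensionality largely cancel out, which is why Table 5 reports times for the Hypersphere function only.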
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), success rate, and solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; a further aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, in which the final column for each algorithm gives the average number of iterations.
The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations is also compared, and the P value (0.03078) indicates a significant difference, which confirms that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
HCA is also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on discrete encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except for Sphere-6v [HCA (60%)] and Rosenbrock-4v [HCA (70%)]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results. We also applied the T-test to obtain P values along with the W values. The resulting P and W values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, where HCA reached the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates that there is no significant difference, the W value shows there is a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results also indicate that HCA is highly efficient for function optimization.
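The W statistic used in Tables 8 and 10 follows the standard Wilcoxon signed-rank procedure: rank the absolute paired differences, sum the ranks of the positive and negative differences separately, and take the smaller sum. A short pure-Python sketch (our own illustrative code, not from the paper), applied to the BA and HCA iteration counts from Table 7:

```python
def wilcoxon_w(sample_a, sample_b):
    """Wilcoxon signed-rank W: the smaller of the positive-rank and
    negative-rank sums over the nonzero paired differences."""
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):  # assign average ranks over ties in |diff|
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)

# BA versus HCA average iteration counts on the eight functions
# both algorithms solved in Table 7:
ba = [868, 999, 1657, 526, 631, 2306, 28529, 7113]
hca = [314.8, 345.8, 379.9, 262.05, 370.9, 350.55, 816.5, 879]
print(wilcoxon_w(ba, hca))  # 0: HCA needs fewer iterations on every function
```

Since HCA uses fewer iterations on every one of the eight functions, the negative-rank sum is empty and W = 0, matching the "BA versus HCA" row of Table 8 (W = 0 at a critical value of 3).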
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where the lowest value for each function indicates the best performance.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed using the Wilcoxon test and T-test on the obtained results; see Table 10.
Despite there being no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations. D is the number of variables; for each algorithm the columns give the worst, average, and best result, followed by the average number of iterations.

Function name | D | Bound | Best known | WCA: Worst / Avg / Best / Iter. | HCA: Worst / Avg / Best / Iter.
De Jong | 2 | ±2.048 | 3905.93 | 3906.121 / 3905.940 / 3905.93 / 684 | 3905.93 / 3905.93 / 3905.93 / 314.8
Goldstein & Price I | 2 | ±2 | 3 | 3.000968 / 3.000561 / 3.00002 / 980 | 3.00E+00 / 3.00E+00 / 3.00E+00 / 345.8
Branin | 2 | ±2 | 0.3977 | 0.398717 / 0.398272 / 0.397731 / 377 | 3.98E-01 / 3.98E-01 / 3.98E-01 / 379.9
Martin & Gaddy | 2 | [0, 10] | 0 | 9.29E-04 / 4.16E-04 / 5.01E-06 / 57 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 262.1
Rosenbrock | 2 | ±1.2 | 0 | 1.46E-02 / 1.34E-03 / 2.81E-05 / 174 | 9.22E-06 / 3.67E-06 / 6.63E-08 / 370.9
Rosenbrock | 2 | ±10 | 0 | 9.86E-04 / 4.32E-04 / 1.12E-06 / 623 | 2.03E-09 / 2.14E-10 / 0.00E+00 / 350.6
Rosenbrock | 4 | ±1.2 | 0 | 7.98E-04 / 2.12E-04 / 3.23E-07 / 266 | 1.94E-01 / 7.02E-02 / 1.64E-03 / 816.5
Hypersphere | 6 | ±5.12 | 0 | 9.22E-03 / 6.01E-03 / 1.34E-07 / 101 | 1.05E+00 / 3.88E-01 / 1.47E-05 / 879
Shaffer | 2 | ±100 | 0 | 9.71E-03 / 1.16E-03 / 2.61E-05 / 8942 | 0.00E+00 / 0.00E+00 / 0.00E+00 / 284.1
Goldstein & Price I | 2 | ±5 | 3 | 3.0000 / 3 / 3 / 2400 | 3.00E+00 / 3.00E+00 / 3.00E+00 / 345.8
Goldstein & Price II | 2 | ±5 | 1 | 1.1291 / 1.0118 / 1 / 47500 | 1.00E+00 / 1.00E+00 / 1.00E+00 / 255.5
Six-Hump Camelback | 2 | ±10 | -1.0316 | -1.0316 / -1.0316 / -1.0316 / 3105 | -1.03E+00 / -1.03E+00 / -1.03E+00 / 348.2
Easton & Fenton | 2 | [0, 10] | 1.74 | 1.7441 / 1.7441 / 1.7441 / 650 | 1.74E+00 / 1.74E+00 / 1.74E+00 / 385.6
Wood | 4 | ±5 | 0 | 3.81E-05 / 1.58E-06 / 1.30E-10 / 15650 | 1.12E+00 / 2.91E-01 / 7.62E-08 / 754
Powell quartic | 4 | ±5 | 0 | 2.87E-09 / 6.09E-10 / 1.12E-11 / 23500 | 2.42E-01 / 9.13E-02 / 3.91E-03 / 864
Mean iterations | | | | 7000.6 | 463.8
Table 7: Comparison between HCA and other algorithms in terms of the average number of iterations. D is the number of variables; "-" indicates that no result was reported. Row means are computed over the entries available for each algorithm.

No. | Function name | D | Bound | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) | De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) | Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) | Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) | Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) | Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) | Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) | Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison of P and W values for HCA and the other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z value | Wilcoxon W value (at critical value of w) | Significant at P <= 0.05?
CACO-DE versus HCA | 0.324033 | -0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | -2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | -2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | -2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | -1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | -0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | -0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations; the lowest value in each row is the best).

No. | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison of P and W values for CGA, ECTS, HCA, and MHCA.

Comparison | T-test P value | Wilcoxon Z value | Wilcoxon W value (at critical value of w) | Significant at P <= 0.05?
CGA versus HCA | 0.012635 | -2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | -2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | -0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | -0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | -0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function according to the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to -1 at (x1 = pi, x2 = pi). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
Figure 11 shows the convergence of the HCA solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
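The "local best versus global best" traces plotted in Figures 9-13 can be produced by any search method that records both the solution generated in the current iteration and the best solution found so far. The sketch below uses simple random search (a stand-in for HCA, purely for illustration) on the 2-variable Hypersphere function:

```python
import random

def random_search(objective, bounds, iterations=500, seed=1):
    """Record the per-iteration candidate ('local best') and the
    best-so-far value ('global best'), as in Figures 9-13."""
    rng = random.Random(seed)
    local, global_best = [], []
    best = float("inf")
    for _ in range(iterations):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        value = objective(x)
        local.append(value)        # solution generated this iteration
        best = min(best, value)
        global_best.append(best)   # monotonically non-increasing curve
    return local, global_best

sphere = lambda x: sum(v * v for v in x)
local, best = random_search(sphere, [(-5.12, 5.12)] * 2)
print(f"final global best: {best[-1]:.6f}")
```

The local trace fluctuates from iteration to iteration (the diversity visible in the figures), while the global trace can only decrease, which is what makes stagnation easy to spot on such plots.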
5 Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function, converged at iteration 383; series: global best and local best).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (converged at iteration 480; series: global best and local best).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (converged at iteration 475; series: global best and local best).
the impact of the number of water drops on the quality of thesolution is also worth investigating
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information-sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something a little different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations of the test functions.

Ackley: $f(x) = -a\exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d}x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(cx_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$

Cross-In-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\dfrac{1 + \cos\left(12\sqrt{x_1^2+x_2^2}\right)}{0.5\left(x_1^2+x_2^2\right) + 2}$

Gramacy & Lee (2012): $f(x) = \dfrac{\sin(10\pi x)}{2x} + (x-1)^4$

Griewank: $f(x) = \sum_{i=1}^{d}\dfrac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\dfrac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2+x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i-1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d-1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i-1}{4}$ for all $i = 1,\dots,d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1-1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2-1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N. 2: $f(x) = 0.5 + \dfrac{\sin^2\left(x_1^2-x_2^2\right) - 0.5}{\left[1 + 0.001\left(x_1^2+x_2^2\right)\right]^2}$

Schaffer N. 4: $f(x) = 0.5 + \dfrac{\cos^2\left(\sin\left(\left|x_1^2-x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001\left(x_1^2+x_2^2\right)\right]^2}$

Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d}x_i\sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5}i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5}i\cos((i+1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i}x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d}x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d}ix_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26\left(x_1^2+x_2^2\right) - 0.48x_1x_2$

McCormick: $f(x) = \sin(x_1+x_2) + (x_1-x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d}x_i^2 + \left(\sum_{i=1}^{d}0.5ix_i\right)^2 + \left(\sum_{i=1}^{d}0.5ix_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \dfrac{x_1^6}{6} + x_1x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \dfrac{x_1^4}{3}\right)x_1^2 + x_1x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1-1)^2 + \sum_{i=2}^{d}i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1}-x_i^2\right)^2 + (x_i-1)^2\right]$

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25}\dfrac{1}{i + (x_1-a_{1i})^6 + (x_2-a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1-\pi)^2 - (x_2-\pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\left(\dfrac{ix_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1x_2)^2 + (2.25 - x_1 + x_1x_2^2)^2 + (2.625 - x_1 + x_1x_2^3)^2$

Branin: $f(x) = a\left(x_2 - bx_1^2 + cx_1 - r\right)^2 + s(1-t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1+x_2+1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1-3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]$

Goldstein-Price II: $f(x) = \exp\left(\tfrac{1}{2}\left(x_1^2+x_2^2-25\right)^2\right) + \sin^4(4x_1-3x_2) + \tfrac{1}{2}(2x_1+x_2-10)^2$

Styblinski-Tang: $f(x) = \tfrac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$

Gramacy & Lee (2008): $f(x) = x_1\exp\left(-x_1^2-x_2^2\right)$

Martin & Gaddy: $f(x) = (x_1-x_2)^2 + \left(\dfrac{x_1+x_2-10}{3}\right)^2$

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \dfrac{1+x_2^2}{x_1^2} + \dfrac{x_1^2x_2^2 + 100}{(x_1x_2)^4}\right)\left(\dfrac{1}{10}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3}A_{ij}\left(x_j - P_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$

Wood: $f(x) = 100\left(x_1^2 - x_2\right)^2 + (x_1-1)^2 + (x_3-1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left[(x_2-1)^2 + (x_4-1)^2\right] + 19.8(x_2-1)(x_4-1)$

De Jong (max): $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1-x_1)^2$
Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution, converged at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References

[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7-44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150-194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814-3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 993, pp. 25-39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937-946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841-856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145-150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405-436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191-213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849-857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93-108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980-1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130-1136, IEEE, Canberra, Australia, December 2003.
[24] D T Pham A Ghanbarzadeh E Koc S Otri S Rahim andM Zaidi ldquoThe bees algorithmmdasha novel tool for complex opti-misationrdquo in Proceedings of the Intelligent Production Machinesand Systems-2nd I PROMS Virtual International ConferenceElsevier July 2006
[25] A Ahrari and A A Atai ldquoGrenade ExplosionMethodmdasha noveltool for optimization of multimodal functionsrdquo Applied SoftComputing vol 10 no 4 pp 1132ndash1140 2010
[26] H Shah-Hosseini ldquoProblem solving by intelligent water dropsrdquoin Proceedings of the 2007 IEEE Congress on EvolutionaryComputation CEC 2007 pp 3226ndash3231 sgp September 2007
Journal of Optimization 25
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the 12th International Conference on Intelligent Computing (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the International Conference on Electronic Commerce, Web Application and Communication (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Table 5: Average execution time per number of variables (Hypersphere function).

Number of variables (iterations)    2 (500)    3 (500)    4 (1000)    6 (1000)
Average execution time (s)          2.8186     4.0832     10.716      15.962
These results indicate that the HCA algorithm is able to obtain optimal results without the mutation operation. However, it should be noted that using the mutation operation had the advantage of reducing the average number of iterations required to converge on a solution by 61%.
The success rate was lower for functions with more than four variables because of the problem representation, where the number of layers in the representation increases with the number of variables. Consequently, HCA performance was limited to functions with few variables. The experimental results revealed a notable advantage of the proposed problem representation: the function domain has only a small effect on solution quality and algorithm performance. In contrast, the performance of some algorithms is highly affected by the function domain, because their working mechanism depends on the distribution of the entities in a multidimensional search space; if an entity wanders outside the function boundaries, its position must be reset by the algorithm.
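The scaling issue above can be made concrete with a small sketch. This is an illustration only, not the paper's exact representation: it assumes each of the d variables is discretized into `values_per_variable` candidate values (one layer of nodes per variable), so the number of distinct paths a water drop can take grows exponentially with d.

```python
# Illustrative only: `values_per_variable` is an assumed discretization
# granularity, not a parameter taken from the paper.
def search_space_size(num_variables: int, values_per_variable: int) -> int:
    """Number of candidate solutions in a layered discrete representation."""
    return values_per_variable ** num_variables

# The search space explodes as the number of variables grows.
for d in (2, 3, 4, 6):
    print(d, search_space_size(d, values_per_variable=100))
```

With 100 values per variable, moving from 2 to 6 variables grows the space from 10^4 to 10^12 candidate paths, which is consistent with the lower success rate observed on higher-dimensional functions.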
The effectiveness of the algorithm is validated by analyzing the calculated average values for each function (i.e., the average value of each function is close to the optimal value). This demonstrates the stability of the algorithm, since it manages to find the global-optimal solution in each execution. Since the maximum number of iterations is fixed to a specific value, the average execution time was recorded for the Hypersphere function with different numbers of variables; the execution time is approximately the same for the other functions with the same number of variables. There is a slight increase in execution time with an increasing number of variables. The results are reported in Table 5.
Based on the literature, the most commonly used criteria for evaluating algorithms are the number of iterations (function evaluations), success rate, and solution quality (accuracy). In solving continuous problems, the performance of an algorithm can be evaluated using the number of iterations rather than execution time or solution accuracy. The aim of evaluating the number of iterations is to infer the convergence speed of the entities of an algorithm towards the global-optimal solution; another aim is to remove hardware aspects from consideration. Generally speaking, premature or slow convergence is not preferred, because it may lead to trapping in local optima.
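The two headline criteria, success rate and average iterations over successful runs, can be computed from a batch of independent runs. The following is a minimal sketch; the `runs` data, `target`, and `tol` values are made up for illustration and are not results from the paper.

```python
def summarize(runs, target, tol=1e-6):
    """Each run is (best_value, iterations_used).
    A run succeeds if its best value is within `tol` of the known optimum."""
    successes = [iters for best, iters in runs if abs(best - target) <= tol]
    success_rate = 100.0 * len(successes) / len(runs)
    avg_iters = sum(successes) / len(successes) if successes else float("nan")
    return success_rate, avg_iters

# Four hypothetical runs on a minimization problem with optimum 0.
runs = [(0.0, 310), (0.0, 402), (1.3e-7, 355), (0.02, 500)]
rate, iters = summarize(runs, target=0.0)
print(rate, iters)  # 3 of 4 runs succeed; iterations averaged over those 3
```

Averaging iterations only over successful runs (as done here) is the convention that makes the per-algorithm iteration counts in Tables 7 and 9 comparable.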
The performance of the HCA was also compared with that of the WCA on specific functions, where the WCA results were taken from [29]. The obtained results for both algorithms are displayed in Table 6, which also reports the average number of iterations for each algorithm. The results presented in Table 6 show that HCA and WCA produced optimal results with differing accuracies. Significance values of 0.75 (worst), 0.97 (average), and 0.86 (best) were found using the Wilcoxon signed-rank test, indicating no significant difference in performance between WCA and HCA.
In addition, the average number of iterations was compared, and the P value (0.03078) indicates a significant difference, confirming that the HCA was able to reach the global-optimal solution with fewer iterations than WCA (in 10 of 15 cases). However, the solution accuracy of the HCA on functions with more than two variables was lower than that of WCA.
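The W statistic behind these significance checks can be computed with a short pure-Python sketch of the Wilcoxon signed-rank procedure (full library implementations such as scipy.stats.wilcoxon also return p-values). The paired samples below are made up for illustration, not data from the paper.

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank W statistic for paired samples x and y."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    diffs.sort(key=abs)
    # Assign ranks to |differences|, averaging ranks over ties.
    ranks = {}
    i = 0
    while i < len(diffs):
        j = i
        while j < len(diffs) and abs(diffs[j]) == abs(diffs[i]):
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # mean of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    w_pos = sum(r for k, r in ranks.items() if diffs[k] > 0)
    w_neg = sum(r for k, r in ranks.items() if diffs[k] < 0)
    return min(w_pos, w_neg)  # W is the smaller of the two rank sums

print(wilcoxon_w([10, 12, 9, 15], [8, 11, 12, 10]))  # → 3.0
```

A W-value at or below the critical value for the sample size (the "(at w)" entries in Tables 8 and 10) indicates a significant difference between the paired algorithms.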
HCA was also compared with other optimization algorithms in terms of the average number of iterations. The compared algorithms were continuous ACO based on Discrete Encoding (CACO-DE) [44], Ant Colony System (ANTS), GA, BA, and GEM, using results from [25]; WCA and ER-WCA were also compared, with results taken from [29]. The success rate was 100% for all the algorithms, including HCA, except on Sphere-6v [HCA 60%] and Rosenbrock-4v [HCA 70%]. Table 7 shows the comparison results.
As can be seen in Table 7, the HCA results are very competitive. We applied the Wilcoxon test to identify the significance of differences between the results, and the T-test to obtain P values alongside the W-values. The P/W-values are reported in Table 8.
Based on Table 8, there are significant differences between the results of ANTS, GA, BA, and HCA, with HCA reaching the optimal solutions in fewer iterations than ANTS, GA, and BA. Although the P value for "BA versus HCA" indicates no significant difference, the W-value shows a significant difference between the two algorithms. Further, the performance of HCA, CACO-DE, GEM, WCA, and ER-WCA is very competitive. Overall, the results indicate that HCA is highly efficient for function optimization.
HCA, MHCA, Continuous GA (CGA), and Enhanced Continuous Tabu Search (ECTS) were also compared on some other functions in terms of the average number of iterations required to reach an optimal solution. The results for CGA and ECTS were taken from [16]; that paper reported only the average number of iterations and the success rate of its algorithms. The success rate was 100% for all the algorithms. The results are listed in Table 9, where the lowest value in each row indicates the best performance.
From Table 9, it can be noted that both HCA and MHCA achieved optimal solutions in fewer iterations than the CGA. This is confirmed by applying the Wilcoxon test and the T-test to the obtained results; see Table 10. Although there is no significant difference between the results of HCA, MHCA, and ECTS, the means of the HCA and MHCA results are lower than that of ECTS, indicating competitive performance.
Table 6: Comparison between WCA and HCA in terms of results and iterations (WCA results from [29]). For each function the table lists the dimension D, the variable bound, the best known result, and the worst, average, and best results obtained by WCA and by HCA, together with the average number of iterations for each algorithm. The functions compared are: De Jong (D = 2, ±2.048), Goldstein & Price I (D = 2, ±2), Branin (D = 2, ±2), Martin & Gaddy (D = 2, [0, 10]), Rosenbrock (D = 2, ±1.2), Rosenbrock (D = 2, ±10), Rosenbrock (D = 4, ±1.2), Hypersphere (D = 6, ±5.12), Shaffer (D = 2, ±100), Goldstein & Price I (D = 2, ±5), Goldstein & Price II (D = 2, ±5), Six-Hump Camelback (D = 2, ±10), Easton & Fenton (D = 2, [0, 10]), Wood (D = 4, ±5.0), and Powell quartic (D = 4, ±5). The mean number of iterations over all functions was 700.06 for WCA and 463.8 for HCA.
Table 7: Comparison between HCA and the other algorithms in terms of average number of iterations ("-" = not reported).

No.   Function name        D   Bound     CACO-DE   ANTS     GA       BA       GEM       WCA      ER-WCA   HCA
(1)   De Jong              2   ±2.048    1872      6000     10160    868      746       684      1220     314.8
(2)   Goldstein & Price I  2   ±2        666       5330     5662     999      701       980      480      345.8
(3)   Branin               2   ±2        -         1936     7325     1657     689       377      160      379.9
(4)   Martin & Gaddy       2   [0, 10]   340       1688     2488     526      258       57       100      262.05
(5a)  Rosenbrock           2   ±1.2      -         6842     10212    631      572       174      73       370.9
(5b)  Rosenbrock           2   ±10       1313      7505     -        2306     2289      623      94       350.55
(6)   Rosenbrock           4   ±1.2      624       8471     -        28529    8218      8266     300      816.5
(7)   Hypersphere          6   ±5.12     270       22050    15468    7113     423       101      91       879
(8)   Shaffer              2   -         -         -        -        -        -         8942     1110     284.1
Mean                                     847.5     7477.8   8552.5   5328.6   1098.33   1356.0   403.1    444.8
Table 8: Comparison between P/W-values for HCA and the other algorithms (based on Table 7).

Algorithm versus HCA   T-test P value   Wilcoxon Z-value   W-value (at critical value of w)   Significant at P <= 0.05?
CACO-DE versus HCA     0.324033         -0.9435            6 (at 0)                           No
ANTS versus HCA        0.014953         -2.5205            0 (at 3)                           Yes
GA versus HCA          0.005598         -2.2014            0 (at 0)                           Yes
BA versus HCA          0.188434         -2.5205            0 (at 3)                           Yes
GEM versus HCA         0.333414         -1.5403            7 (at 3)                           No
WCA versus HCA         0.379505         -0.2962            20 (at 5)                          No
ER-WCA versus HCA      0.832326         -0.5331            18 (at 5)                          No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations; "-" = not reported).

No.   Function name       D   CGA     ECTS    HCA      MHCA
(1)   Shubert             2   575     -       368      391.45
(2)   Bohachevsky         2   370     -       264.9    288.25
(3)   Zakharov            2   620     195     328.15   242.05
(4)   Rosenbrock          2   960     480     350.55   282.55
(5)   Easom               2   1504    1284    354.6    370.35
(6)   Branin              2   620     245     379.9    393.75
(7)   Goldstein-Price I   2   410     231     345.8    409.9
(8)   Hartmann            3   582     548     392.05   327.5
(9)   Sphere              3   750     338     371.3    330.45
Mean                          710.1   474.4   350.6    337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Comparison        T-test P value   Wilcoxon Z-value   W-value (at critical value of w)   Significant at P <= 0.05?
CGA versus HCA    0.012635         -2.6656            0 (at 5)                           Yes
CGA versus MHCA   0.012361         -2.6656            0 (at 5)                           Yes
ECTS versus HCA   0.456634         -0.3381            12 (at 2)                          No
ECTS versus MHCA  0.369119         -0.8452            9 (at 2)                           No
HCA versus MHCA   0.470531         -0.7701            16 (at 5)                          No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. This shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that such solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
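The bookkeeping behind the two curves in these convergence plots can be sketched as follows. As an assumption for illustration only, plain random search stands in for the HCA, since only the distinction between the per-iteration (local) best and the running (global) best is being shown.

```python
import random

def sphere(x):
    """Simple test objective (global minimum 0 at the origin)."""
    return sum(v * v for v in x)

random.seed(1)  # fixed seed so the sketch is reproducible
global_best = float("inf")
local_curve, global_curve = [], []
for _ in range(100):
    # 20 candidate "drops" per iteration, 2 variables in [-5, 5].
    drops = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
    local_best = min(sphere(d) for d in drops)   # best of this iteration only
    global_best = min(global_best, local_best)   # best seen over all iterations
    local_curve.append(local_best)
    global_curve.append(global_best)

# The global curve never increases, while the local curve fluctuates.
print(global_curve[-1] <= min(local_curve))  # → True
```

The fluctuating local curve is exactly the stagnation-avoidance signal discussed above: a flat local curve would indicate the population has stopped generating new solutions.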
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and the global minimum value is equal to -1 at (X1 = pi, X2 = pi). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
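The Easom function itself is straightforward to evaluate; a quick check using the formulation listed in the Appendix confirms the minimum of -1 at (pi, pi) and the near-flat landscape elsewhere.

```python
import math

def easom(x1, x2):
    """Easom function: global minimum -1 at (pi, pi)."""
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

print(round(easom(math.pi, math.pi), 6))  # → -1.0
print(round(easom(0.0, 0.0), 6))          # ≈ 0: far from the optimum the surface is nearly flat
```

Because the basin around (pi, pi) is needle-like, an optimizer gets almost no gradient information until an entity lands very close to the optimum, which is why this function is a useful exploration test.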
Figure 11 shows the convergence of the HCA solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized the function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated,
Figure 9: The global and local convergence of the HCA versus iteration number (plot titled "Ackley function convergence at 383"; x-axis: number of iterations, y-axis: function value; legend: Global best, Local best).
Figure 10: Easom function graph (from [4]).
which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point; approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

The results of the benchmarking experiments show that HCA provides results that are in some cases better, and in others not significantly worse, than those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (plot titled "Easom function convergence at 480"; x-axis: number of iterations, y-axis: function value; legend: Global best, Local best).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (plot titled "Rosenbrock function convergence at 475"; x-axis: number of iterations, y-axis: function value; legend: Global best, Local best).
the impact of the number of water drops on the quality of the solution is also worth investigating.

In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance that is at least on par with related algorithms in the field. In particular, it should be shown to offer something a little different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations for the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a\exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d}x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(cx_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, $c = 2\pi$

Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d}\frac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \dots, d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(|x_1^2 - x_2^2|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d}x_i\sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5}i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5}i\cos((i+1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i}x_j^2$

Sphere (Hypersphere): $f(x) = \sum_{i=1}^{d}x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d}i\,x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48x_1x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d}x_i^2 + \left(\sum_{i=1}^{d}0.5ix_i\right)^2 + \left(\sum_{i=1}^{d}0.5ix_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d}i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25}\frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1x_2)^2 + (2.25 - x_1 + x_1x_2^2)^2 + (2.625 - x_1 + x_1x_2^3)^2$

Branin: $f(x) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]$

Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}(x_1^2 + x_2^2 - 25)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}(x_i^4 - 16x_i^2 + 5x_i)$

Gramacy & Lee (2008): $f(x) = x_1\exp(-x_1^2 - x_2^2)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton & Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2x_2^2 + 100}{(x_1x_2)^4}\right)\left(\frac{1}{10}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3}A_{ij}(x_j - P_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\right]$

Wood: $f(x) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (maximization): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
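As a sanity check, a few of the formulations in Table 11 can be transcribed directly into code and evaluated at their known global optima (optima as given in [4]).

```python
import math

def sphere(x):
    """Sphere (Hypersphere): minimum 0 at the origin."""
    return sum(v * v for v in x)

def rosenbrock(x):
    """Rosenbrock: minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def booth(x1, x2):
    """Booth: minimum 0 at (1, 3)."""
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

def ackley(x, a=20, b=0.2, c=2 * math.pi):
    """Ackley: minimum 0 at the origin."""
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(c * v) for v in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

assert sphere([0, 0, 0]) == 0
assert rosenbrock([1, 1, 1, 1]) == 0
assert booth(1, 3) == 0
assert abs(ackley([0.0, 0.0])) < 1e-12
print("benchmark sanity checks passed")
```

Checks like these are a cheap guard against transcription errors when reimplementing benchmark suites, since a misplaced sign or constant silently shifts the optimum an algorithm is being scored against.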
Figure 13: Convergence of HCA on the Hypersphere function with six variables (plot titled "Sphere (6v) function convergence at 919"; x-axis: iteration number, y-axis: function value; line: Global-best solution).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[26] H Shah-Hosseini ldquoProblem solving by intelligent water dropsrdquoin Proceedings of the 2007 IEEE Congress on EvolutionaryComputation CEC 2007 pp 3226ndash3231 sgp September 2007
Journal of Optimization 25
[27] H Shah-Hosseini ldquoAn approach to continuous optimizationby the intelligent water drops algorithmrdquo in Proceedings of the4th International Conference of Cognitive Science ICCS 2011 pp224ndash229 Iran May 2011
[28] H Eskandar A Sadollah A Bahreininejad and M HamdildquoWater cycle algorithm - A novel metaheuristic optimizationmethod for solving constrained engineering optimization prob-lemsrdquo Computers amp Structures vol 110-111 pp 151ndash166 2012
[29] A Sadollah H Eskandar A Bahreininejad and J H KimldquoWater cycle algorithm with evaporation rate for solving con-strained and unconstrained optimization problemsrdquo AppliedSoft Computing vol 30 pp 58ndash71 2015
[30] Q Luo C Wen S Qiao and Y Zhou ldquoDual-system water cyclealgorithm for constrained engineering optimization problemsrdquoin Proceedings of the Intelligent Computing Theories and Appli-cation 12th International Conference ICIC 2016 D-S HuangV Bevilacqua and P Premaratne Eds pp 730ndash741 SpringerInternational Publishing Lanzhou China August 2016
[31] A A Heidari R Ali Abbaspour and A Rezaee Jordehi ldquoAnefficient chaotic water cycle algorithm for optimization tasksrdquoNeural Computing and Applications vol 28 no 1 pp 57ndash852017
[32] M M al-Rifaie J M Bishop and T Blackwell ldquoInformationsharing impact of stochastic diffusion search on differentialevolution algorithmrdquoMemetic Computing vol 4 no 4 pp 327ndash338 2012
[33] T Hogg and C P Williams ldquoSolving the really hard problemswith cooperative searchrdquo in Proceedings of the National Confer-ence on Artificial Intelligence p 231 1993
[34] C Ding L Lu Y Liu and W Peng ldquoSwarm intelligence opti-mization and its applicationsrdquo in Proceedings of the AdvancedResearch on Electronic Commerce Web Application and Com-munication International Conference ECWAC 2011 G Shenand X Huang Eds pp 458ndash464 Springer Guangzhou ChinaApril 2011
[35] L Goel and V K Panchal ldquoFeature extraction through infor-mation sharing in swarm intelligence techniquesrdquo Knowledge-Based Processes in Software Development pp 151ndash175 2013
[36] Y Li Z-H Zhan S Lin J Zhang and X Luo ldquoCompetitiveand cooperative particle swarm optimization with informationsharingmechanism for global optimization problemsrdquo Informa-tion Sciences vol 293 pp 370ndash382 2015
[37] M Mavrovouniotis and S Yang ldquoAnt colony optimization withdirect communication for the traveling salesman problemrdquoin Proceedings of the 2010 UK Workshop on ComputationalIntelligence UKCI 2010 UK September 2010
[38] T Davie Fundamentals of Hydrology Taylor amp Francis 2008[39] J Southard An Introduction to Fluid Motions Sediment Trans-
port and Current-Generated Sedimentary Structures [Ph Dthesis] Massachusetts Institute of Technology Cambridge MAUSA 2006
[40] B R Colby Effect of Depth of Flow on Discharge of BedMaterialUS Geological Survey 1961
[41] M Bishop and G H Locket Introduction to Chemistry Ben-jamin Cummings San Francisco Calif USA 2002
[42] J M Wallace and P V Hobbs Atmospheric Science An Intro-ductory Survey vol 92 Academic Press 2006
[43] R Hill A First Course in CodingTheory Clarendon Press 1986[44] H Huang and Z Hao ldquoACO for continuous optimization
based on discrete encodingrdquo in Proceedings of the InternationalWorkshop on Ant Colony Optimization and Swarm Intelligencepp 504-505 2006
18 Journal of Optimization
Table 6: Comparison between WCA and HCA in terms of results and iterations. For each benchmark function (with its dimension D and search bound), the table reports the best-known result together with the worst, average, and best results obtained by WCA and by HCA, and the average number of iterations each algorithm required. The functions covered are De Jong (±2.048), Goldstein & Price I (±2 and ±5), Goldstein & Price II (±5), Branin (±2), Martin & Gaddy ([0, 10]), Rosenbrock (2D with ±1.2 and ±10, and 4D with ±1.2), Hypersphere (6D, ±5.12), Shaffer (±100), Six-Hump Camelback (±10), Easton & Fenton ([0, 10]), Wood (±5), and Powell quartic (±5).
Table 7: Comparison between HCA and other algorithms (CACO-DE, ANTS, GA, BA, GEM, WCA, and ER-WCA) in terms of the average number of iterations. The benchmark functions compared are (1) De Jong (D = 2, ±2.048), (2) Goldstein & Price I (D = 2, ±2), (3) Branin (D = 2, ±2), (4) Martin & Gaddy (D = 2, [0, 10]), (5a) Rosenbrock (D = 2, ±1.2), (5b) Rosenbrock (D = 2, ±10), (6) Rosenbrock (D = 4, ±1.2), (7) Hypersphere (D = 6, ±5.12), and (8) Shaffer (D = 2), with the mean iteration count over all functions reported per algorithm.
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
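The paired comparisons above use the t statistic and the Wilcoxon signed-rank W statistic computed over matched per-function results. As an illustration of how such statistics are obtained, the following is a minimal pure-Python sketch; the sample iteration counts are hypothetical, not values taken from the paper's tables.

```python
from math import sqrt

def paired_t_statistic(a, b):
    """Paired t statistic for two matched samples (e.g. per-function iteration counts)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean / sqrt(var / n)

def wilcoxon_w(a, b):
    """Wilcoxon signed-rank W: the smaller of the positive and negative rank sums."""
    d = [x - y for x, y in zip(a, b) if x != y]       # drop zero differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):                             # assign average ranks over ties
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    w_minus = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_plus, w_minus)

# Hypothetical paired iteration counts for two algorithms on five functions
alg_a = [10, 12, 9, 14, 11]
alg_b = [8, 11, 7, 12, 10]
t_stat = paired_t_statistic(alg_a, alg_b)
w_stat = wilcoxon_w(alg_a, alg_b)
```

A W-value at or below the critical value of w (as listed per row in Table 8) indicates that one algorithm consistently outperforms the other across the paired functions.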
Table 9: Comparison of CGA, ECTS, HCA, and MHCA (average number of iterations).

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithm comparison | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line, "Local", represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and the fluctuations of the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
Figure 10 shows a 2D graph of the Easom function. The function has two variables, and its global minimum value of −1 is attained at (x1 = π, x2 = π). Solving this function is difficult because the flatness of the landscape gives no indication of the direction towards the global minimum.
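This near-flatness is easy to verify numerically from the standard Easom formulation (the same one listed in the appendix); a minimal sketch:

```python
import math

# Easom function: f(x1, x2) = -cos(x1) * cos(x2) * exp(-(x1 - pi)^2 - (x2 - pi)^2)
def easom(x1, x2):
    return -math.cos(x1) * math.cos(x2) * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2)

# Global minimum: f(pi, pi) = -1.
# Away from (pi, pi) the exponential term flattens the landscape almost
# completely, e.g. f(0, 0) = -exp(-2 * pi^2), roughly -2.7e-9.
```

Because almost every point in the domain evaluates to nearly zero, gradient information is useless there, which is why the function is a standard stress test for an algorithm's exploration ability.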
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions in each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized this function after 919 iterations. As shown in Figure 13, the HCA required more iterations to minimize functions with more than two variables because of the increase in the size of the solution search space.
5. Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In the HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and to provide high-quality solutions. In addition, its ability to escape local optima and find the global optimum has been demonstrated, which validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology with swarm particles starting at the highest point, while approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.

Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function, minimized at iteration 383; "Global best" and "Local best" traces shown).
Figure 10: Easom function graph (from [4]).
The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms, but there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA include other techniques to dynamically control evaporation and condensation. Studying the impact of the number of water drops on the quality of the solution is also worth investigating.

Figure 11: The global and local convergence of the HCA on the Easom function (minimized at iteration 480).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (minimized at iteration 475).
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance at least on par with related algorithms in the field. In particular, it should be shown to offer something different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(cx_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$

Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d}\frac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \dots, d$

Levy N.13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N.2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 - x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schaffer N.4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(|x_1^2 - x_2^2|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N.5): $f(x) = \left(0.002 + \sum_{i=1}^{25}\frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a(x_2 - bx_1^2 + cx_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]$

Goldstein-Price II: $f(x) = \exp\left\{\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right\} + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$

Gramacy & Lee (2008): $f(x) = x_1\exp\left(-x_1^2 - x_2^2\right)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \frac{1}{10}\left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3} A_{ij}(x_j - p_{ij})^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, and $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (max): $f(x) = 3905.93 - 100(x_1^2 - x_2)^2 - (1 - x_1)^2$
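For concreteness, a few of these formulations translate directly into code. The following minimal sketch (Python is used here purely for illustration) implements the Ackley and Rosenbrock functions exactly as formulated above; both have well-known global minima that can be checked numerically.

```python
import math

# Ackley: f(x) = -a*exp(-b*sqrt(mean(x_i^2))) - exp(mean(cos(c*x_i))) + a + e
# Global minimum f = 0 at x = (0, ..., 0), with a = 20, b = 0.2, c = 2*pi.
def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    d = len(x)
    mean_sq = sum(xi ** 2 for xi in x) / d
    mean_cos = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(mean_sq)) - math.exp(mean_cos) + a + math.exp(1)

# Rosenbrock: f(x) = sum_{i=1}^{d-1} [100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
# Global minimum f = 0 at x = (1, ..., 1).
def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

Ackley's nested exponential terms give it many shallow local minima around a single deep basin, while Rosenbrock's optimum lies in a long, narrow curved valley; together they test both the exploration and the exploitation behaviour reported in the experiments above.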
Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution; minimized at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm—a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd I PROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method—a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Journal of Optimization 19
Table 7: Comparison between HCA and other algorithms in terms of average number of iterations.

Number | Function name | D | Range | CACO-DE | ANTS | GA | BA | GEM | WCA | ER-WCA | HCA
(1) | De Jong | 2 | ±2.048 | 1872 | 6000 | 10160 | 868 | 746 | 684 | 1220 | 314.8
(2) | Goldstein & Price I | 2 | ±2 | 666 | 5330 | 5662 | 999 | 701 | 980 | 480 | 345.8
(3) | Branin | 2 | ±2 | - | 1936 | 7325 | 1657 | 689 | 377 | 160 | 379.9
(4) | Martin & Gaddy | 2 | [0, 10] | 340 | 1688 | 2488 | 526 | 258 | 57 | 100 | 262.05
(5a) | Rosenbrock | 2 | ±1.2 | - | 6842 | 10212 | 631 | 572 | 174 | 73 | 370.9
(5b) | Rosenbrock | 2 | ±10 | 1313 | 7505 | - | 2306 | 2289 | 623 | 94 | 350.55
(6) | Rosenbrock | 4 | ±1.2 | 624 | 8471 | - | 28529 | 82188 | 266 | 300 | 816.5
(7) | Hypersphere | 6 | ±5.12 | 270 | 22050 | 15468 | 7113 | 423 | 101 | 91 | 879
(8) | Shaffer | 2 | - | - | - | - | - | - | 8942 | 1110 | 284.1
Mean | | | | 847.5 | 7477.8 | 8552.5 | 5328.6 | 10983.3 | 1356.0 | 403.1 | 444.8
Table 8: Comparison between P/W-values for HCA and other algorithms (based on Table 7).

Algorithm versus HCA | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CACO-DE versus HCA | 0.324033 | −0.9435 | 6 (at 0) | No
ANTS versus HCA | 0.014953 | −2.5205 | 0 (at 3) | Yes
GA versus HCA | 0.005598 | −2.2014 | 0 (at 0) | Yes
BA versus HCA | 0.188434 | −2.5205 | 0 (at 3) | Yes
GEM versus HCA | 0.333414 | −1.5403 | 7 (at 3) | No
WCA versus HCA | 0.379505 | −0.2962 | 20 (at 5) | No
ER-WCA versus HCA | 0.832326 | −0.5331 | 18 (at 5) | No
Table 9: Comparison of CGA, ECTS, HCA, and MHCA.

Number | Function name | D | CGA | ECTS | HCA | MHCA
(1) | Shubert | 2 | 575 | - | 368 | 391.45
(2) | Bohachevsky | 2 | 370 | - | 264.9 | 288.25
(3) | Zakharov | 2 | 620 | 195 | 328.15 | 242.05
(4) | Rosenbrock | 2 | 960 | 480 | 350.55 | 282.55
(5) | Easom | 2 | 1504 | 1284 | 354.6 | 370.35
(6) | Branin | 2 | 620 | 245 | 379.9 | 393.75
(7) | Goldstein-Price I | 2 | 410 | 231 | 345.8 | 409.9
(8) | Hartmann | 3 | 582 | 548 | 392.05 | 327.5
(9) | Sphere | 3 | 750 | 338 | 371.3 | 330.45
Mean | | | 710.1 | 474.4 | 350.6 | 337.4
Table 10: Comparison between P/W-values for CGA, ECTS, HCA, and MHCA.

Algorithms compared | T-test P value | Wilcoxon Z-value | Wilcoxon W-value (at critical value of w) | Significant at P ≤ 0.05?
CGA versus HCA | 0.012635 | −2.6656 | 0 (at 5) | Yes
CGA versus MHCA | 0.012361 | −2.6656 | 0 (at 5) | Yes
ECTS versus HCA | 0.456634 | −0.3381 | 12 (at 2) | No
ECTS versus MHCA | 0.369119 | −0.8452 | 9 (at 2) | No
HCA versus MHCA | 0.470531 | −0.7701 | 16 (at 5) | No
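The W-values reported in Tables 8 and 10 come from the Wilcoxon signed-rank test over paired per-function results. As an illustration only (not the authors' code), a minimal pure-Python computation of the W statistic for paired samples might look like this:

```python
def wilcoxon_w(sample_a, sample_b):
    """Wilcoxon signed-rank W statistic for paired samples.

    Returns the smaller of the positive- and negative-rank sums; the
    result is significant when W is at or below the critical value for
    the number of non-zero differences (as in Tables 8 and 10).
    """
    diffs = [a - b for a, b in zip(sample_a, sample_b) if a != b]
    ordered = sorted(diffs, key=abs)
    # Assign average ranks to tied absolute differences.
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and abs(ordered[j]) == abs(ordered[i]):
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks.setdefault(abs(ordered[k]), avg_rank)
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return min(w_plus, w_minus)
```

For example, `wilcoxon_w([125, 115, 130, 140, 140], [110, 122, 125, 120, 140])` drops the zero difference, ranks the remaining four, and returns 2.0.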
Figure 9 illustrates the convergence of the global and local solutions for the Ackley function against the number of iterations. The red line ("Local") represents the quality of the solution obtained at the end of each iteration. It shows that the algorithm was able to avoid stagnation and kept generating different solutions in each iteration, reflecting the proper design of the flow stage. We postulate that this solution diversity was maintained by the depth factor and by fluctuations in the water drop velocities. The HCA minimized the Ackley function after 383 iterations.
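The "global best" and "local best" curves plotted in Figures 9, 11, and 12 can be produced for any iterative optimizer by logging two series per iteration. A generic sketch, using plain random sampling as a stand-in for the HCA flow stage (the `sphere` objective and sampler below are illustrative, not taken from the paper):

```python
import random

def track_convergence(objective, sample, iterations=100, seed=1):
    """Log the per-iteration ('local best') and best-so-far ('global best')
    objective values, i.e. the two curves shown in the convergence figures."""
    rng = random.Random(seed)
    local, global_best = [], []
    best = float("inf")
    for _ in range(iterations):
        value = objective(sample(rng))  # candidate produced this iteration
        best = min(best, value)
        local.append(value)        # fluctuates: evidence of solution diversity
        global_best.append(best)   # monotonically non-increasing
    return local, global_best

# Example: random search on a 2-D sphere objective.
sphere = lambda v: sum(x * x for x in v)
local, glob = track_convergence(sphere, lambda rng: [rng.uniform(-5, 5) for _ in range(2)])
```

A flat "local" curve would indicate stagnation; the fluctuating one in Figure 9 is what the text interprets as maintained diversity.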
Figure 10 shows a graph of the Easom function. The function has two variables, and the global minimum value equals −1 at (x1 = π, x2 = π). Solving this function is difficult, as the flatness of the landscape gives no indication of the direction towards the global minimum.
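The Easom function (its formulation is given in Table 11) is cheap to evaluate, and the flatness described above is easy to verify numerically. A sketch, not taken from the paper's code:

```python
import math

def easom(x1, x2):
    """Easom function: the global minimum f(pi, pi) = -1 sits in a narrow
    basin on an otherwise almost perfectly flat landscape."""
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

# Almost no signal away from the optimum:
assert abs(easom(math.pi, math.pi) + 1.0) < 1e-12
assert abs(easom(0.0, 0.0)) < 1e-8  # essentially zero far from (pi, pi)
```

Even a point as close as the origin evaluates to roughly −exp(−2π²) ≈ −2.7 × 10⁻⁹, which is why gradient-free exploration is needed here.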
Figure 11 shows the convergence of the HCA when solving the Easom function. As can be seen, the HCA kept generating different solutions at each iteration while converging towards the global minimum value. Figure 12 shows the convergence of the HCA when solving the Rosenbrock function.
Figure 13 illustrates the convergence speed of the HCA on the Hypersphere function with six variables; the HCA minimized this function after 919 iterations. As Figure 13 shows, the HCA required more iterations to minimize functions with more than two variables because of the growth of the solution search space.
5 Conclusion and Future Work
In this paper, a new nature-inspired algorithm called the Hydrological Cycle Algorithm (HCA) is presented. In the HCA, a collection of water drops passes through different phases, such as flow (runoff), evaporation, condensation, and precipitation, to generate a solution. The HCA has been shown to solve continuous optimization problems and to provide high-quality solutions. In addition, its ability to escape local optima and find the global optima has been demonstrated.
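The four stages named above can be pictured as a population loop. The following is a schematic sketch only, with simple stand-in operators (Gaussian perturbation, truncation selection); it is not the authors' HCA, which additionally uses water drop velocities, a depth factor, and a topological problem representation:

```python
import random

def water_cycle_sketch(objective, lower, upper, dim, drops=10, iterations=200, seed=0):
    """Illustrative water-cycle-style minimizer (NOT the published HCA)."""
    rng = random.Random(seed)
    clip = lambda x: min(upper, max(lower, x))
    population = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(drops)]
    best = min(population, key=objective)
    for _ in range(iterations):
        # Flow (runoff): each drop takes a local downhill step if it improves.
        for drop in population:
            step = [clip(x + rng.gauss(0.0, 0.1 * (upper - lower))) for x in drop]
            if objective(step) < objective(drop):
                drop[:] = step
        # Evaporation: the weaker half of the drops leaves the surface ...
        population.sort(key=objective)
        survivors = population[: drops // 2]
        # ... condensation and precipitation: they re-enter near the best drop,
        # injecting fresh diversity and countering premature convergence.
        rain = [[clip(x + rng.gauss(0.0, 0.05 * (upper - lower))) for x in survivors[0]]
                for _ in range(drops - len(survivors))]
        population = survivors + rain
        best = min([best] + population, key=objective)
    return best
```

On a 2-D sphere objective, `water_cycle_sketch(lambda v: sum(x * x for x in v), -5.0, 5.0, 2)` converges close to the origin, which conveys the division of labour between the stages without reproducing the paper's operators.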
Figure 9: The global and local convergence of the HCA versus iteration number (Ackley function, convergence at iteration 383; x-axis: number of iterations, y-axis: function value; curves: global best and local best).
Figure 10: Surface plot of the Easom function f(x1, x2) (from [4]).
This validates the convergence and the effectiveness of the exploration and exploitation processes of the algorithm. One of the reasons for this improved performance is that both direct and indirect communication take place to share information among the water drops. The proposed representation of continuous problems can easily be adapted to other problems. For instance, approaches involving gravity can regard the representation as a topology, with swarm particles starting at the highest point. Approaches involving placing weights on graph vertices can regard the representation as a form of optimized route finding.
The results of the benchmarking experiments show that HCA provides results that are in some cases better than, and in others not significantly worse than, those of other optimization algorithms. But there is room for further improvement. Further work is required to optimize the algorithm's performance when dealing with problems involving two or more variables. In terms of solution accuracy, the algorithm implementation and the problem representation could be improved, including dynamic fine-tuning of the parameters. Possible further improvements to the HCA could include other techniques to dynamically control evaporation and condensation. Studying
Figure 11: The global and local convergence of the HCA on the Easom function (convergence at iteration 480; x-axis: number of iterations, y-axis: function value; curves: global best and local best).
Figure 12: The global and local convergence of the HCA on the Rosenbrock function (convergence at iteration 475; x-axis: number of iterations, y-axis: function value; curves: global best and local best).
the impact of the number of water drops on the quality of thesolution is also worth investigating
In conclusion, if a natural computing approach is to be considered a novel addition to the already large field of overlapping methods and techniques, it needs to embed the two critical concepts of self-organization and emergentism at its core, with intuitively based (i.e., nature-based) information sharing mechanisms for effective exploration and exploitation, and it must demonstrate performance that is at least on par with related algorithms in the field. In particular, it should be shown to offer something a little different from what is already available. We contend that HCA satisfies these criteria and, while further development is required to test its full potential, can be used with confidence by researchers dealing with continuous optimization problems.
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
Table 11: The mathematical formulations.

Ackley: $f(x) = -a\exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$

Cross-In-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5(x_1^2 + x_2^2) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d}\frac{x_i^2}{4000} - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(3\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \ldots, d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(3\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2(x_1^2 + x_2^2) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos\left(\sin\left(|x_1^2 - x_2^2|\right)\right) - 0.5}{\left[1 + 0.001(x_1^2 + x_2^2)\right]^2}$

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i + 1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i + 1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 - (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 + x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25}\frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d}\sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a\left(x_2 - bx_1^2 + cx_1 - r\right)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]$

Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$

Gramacy & Lee (2008): $f(x) = x_1\exp\left(-x_1^2 - x_2^2\right)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4}\alpha_i\exp\left(-\sum_{j=1}^{3} A_{ij}\left(x_j - p_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100(x_1 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong (max): $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1 - x_1)^2$
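Two of the formulations in Table 11 rendered as code, with their known global minima checked numerically (an illustrative sketch; the function and variable names are ours, not the paper's):

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Ackley function from Table 11; global minimum f(0, ..., 0) = 0."""
    d = len(x)
    root = math.sqrt(sum(xi * xi for xi in x) / d)
    cosine = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * root) - math.exp(cosine) + a + math.exp(1)

def rosenbrock(x):
    """Rosenbrock function from Table 11; global minimum f(1, ..., 1) = 0."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

# Sanity checks against the minima quoted in the text.
assert abs(ackley([0.0, 0.0])) < 1e-12
assert rosenbrock([1.0, 1.0, 1.0, 1.0]) == 0.0
```

Encoding the benchmarks this way makes the convergence experiments of Section 4 straightforward to reproduce against any optimizer.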
Figure 13: Convergence of HCA on the Hypersphere function with six variables (convergence at iteration 919; x-axis: iteration number, y-axis: function value; curve: global-best solution).
Conflicts of Interest
The authors declare that they have no conflicts of interest
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm - a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method - a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm - a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
20 Journal of Optimization
Table 8 Comparison between 119875119882-values for HCA and other algorithms (based on Table 7)
Algorithm versus HCA Using 119879-test Using Wilcoxon test Is the result significantat119875 le 005119875 value 119885-value 119882-value
(at critical value of 119908)CACO-DE versus HCA 0324033 minus09435 6 (at 0) NoANTS versus HCA 0014953 minus25205 0 (at 3) YesGA versus HCA 0005598 minus22014 0 (at 0) YesBA versus HCA 0188434 minus25205 0 (at 3) YesGEM versus HCA 0333414 minus15403 7 (at 3) NoWCA versus HCA 0379505 minus02962 20 (at 5) NoER-WCA versus HCA 0832326 minus05331 18 (at 5) No
Table 9 Comparison of CGA ECTS HCA and MHCA
Number Function name D CGA ECTS HCA MHCA(1) Shubert 2 575 - 368 39145(2) Bohachevsky 2 370 - 2649 28825(3) Zakharov 2 620 195 32815 24205(4) Rosenbrock 2 960 480 35055 28255(5) Easom 2 1504 1284 3546 37035(6) Branin 2 620 245 3799 39375(7) Goldstein-Price 1 2 410 231 3458 4099(8) Hartmann 3 582 548 39205 3275(9) Sphere 3 750 338 3713 33045
Mean 7101 4744 3506 3374
Table 10 Comparison between PW-values for CGA ECTS HCA and MHCA
Algorithm versus HCA Using 119879-test Using Wilcoxon test Is the result significant at119875 le 005119875 value 119885-value 119882-value(at critical value of 119908)
CGA versus HCA 0012635 minus26656 0 (at 5) YesCGA versus MHCA 0012361 minus26656 0 (at 5) YesECTS versus HCA 0456634 minus03381 12 (at 2) NoECTS versus MHCA 0369119 minus08452 9 (at 2) NoHCA versus MHCA 0470531 minus07701 16 (at 5) No
Figure 9 illustrates the convergence of global and localsolutions for the Ackley function according to the number ofiterations The red line ldquoLocalrdquo represents the quality of thesolution that was obtained at the end of each iteration Thisshows that the algorithm was able to avoid stagnation andkeep generating different solutions in each iteration reflectingthe proper design of the flow stage We postulate that suchsolution diversity was maintained by the depth factor andfluctuations of thewater drop velocitiesTheHCAminimizedthe Ackley function after 383 iterations
Figure 10 shows a 2D graph for Easom function Thefunction has two variables and the global minimum value isequal to minus1 when (1198831 = 120587 1198832 = 120587) Solving this functionis difficult as the flatness of the landscape does not give anyindication of the direction towards global minimum
Figure 11 shows the convergence of the HCA solvingEasom function
As can be seen the HCA kept generating differentsolutions on each iteration while converging towards the
global minimum value Figure 12 shows the convergence ofthe HCA solving Rosenbrock function
Figure 13 illustrates the convergence speed for HCAon the Hypersphere function with six variables The HCAminimized the function after 919 iterations
As shown in Figure 13 the HCA required more iterationsto minimize functions with more than two variables becauseof the increase in the solution search space
5 Conclusion and Future Work
In this paper a new nature-inspired algorithm called theHydrological Cycle Algorithm (HCA) is presented In HCAa collection of water drops pass through different phases suchas flow (runoff) evaporation condensation and precipita-tion to generate a solution The HCA has been shown tosolve continuous optimization problems and provide high-quality solutions In addition its ability to escape localoptima and find the global optima has been demonstrated
Journal of Optimization 21
0 50 100 150 200 250 300 350 400 450 500Number of iterations
02468
1012141618
Func
tion
valu
e
Ackley function convergence at 383
Global bestLocal best
Figure 9 The global and local convergence of the HCA versusiterations number
x2x1
f(x1
x2)
04
02
0
minus02
minus04
minus06
minus08
minus120
100
minus10minus20
2010
0minus10
minus20
Figure 10 Easom function graph (from [4])
which validates the convergence and the effectiveness of theexploration and exploitation process of the algorithm One ofthe reasons for this improved performance is that both directand indirect communication take place to share informationamong the water drops The proposed representation of thecontinuous problems can easily be adapted to other problemsFor instance approaches involving gravity can regard therepresentation as a topology with swarm particles startingat the highest point Approaches involving placing weightson graph vertices can regard the representation as a form ofoptimized route finding
The results of the benchmarking experiments show thatHCA provides results in some cases better and in othersnot significantly worse than other optimization algorithmsBut there is room for further improvement Further workis required to optimize the algorithmrsquos performance whendealing with problems involving two or more variables Interms of solution accuracy the algorithm implementationand the problem representation could be improved includingdynamic fine-tuning of the parameters Possible furtherimprovements to the HCA could include other techniques todynamically control evaporation and condensation Studying
0 50 100 150 200 250 300 350 400 450 500Number of iterations
minus1minus09minus08minus07minus06minus05minus04minus03minus02minus01
0
Func
tion
valu
e
Easom function convergence at 480
Global bestLocal best
Figure 11 The global and local convergence of the HCA on Easomfunction
100 150 200 250 300 350 400 450 500Number of iterations
0
5
10
15
20
25
30
35Fu
nctio
n va
lue
Rosenbrock function convergence at 475
0 50
Global bestLocal best
Figure 12 The global and local convergence of the HCA onRosenbrock function
the impact of the number of water drops on the quality of thesolution is also worth investigating
In conclusion if a natural computing approach is tobe considered a novel addition to the already large field ofoverlapping methods and techniques it needs to embed thetwo critical concepts of self-organization and emergentismat its core with intuitively based (ie nature-based) infor-mation sharing mechanisms for effective exploration andexploitation as well as demonstrate performance which is atleast on par with related algorithms in the field In particularit should be shown to offer something a bit different fromwhat is available alreadyWe contend that HCA satisfies thesecriteria and while further development is required to testits full potential can be used by researchers dealing withcontinuous optimization problems with confidence
Appendix
Table 11 shows the mathematical formulations of the test functions used, which have been taken from [4, 24].
22 Journal of Optimization
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, and $c = 2\pi$

Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$

Drop-Wave: $f(x) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5\left(x_1^2 + x_2^2\right) + 2}$

Gramacy & Lee (2012): $f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4$

Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$

Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$

Levy: $f(x) = \sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ for all $i = 1, \ldots, d$

Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$

Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$

Schaffer N. 2: $f(x) = 0.5 + \frac{\sin^2\left(x_1^2 - x_2^2\right) - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}$

Schaffer N. 4: $f(x) = 0.5 + \frac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}$

Schwefel: $f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$

Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i + 1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i + 1)x_2 + i)\right)$

Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$

Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$

Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$

Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$

Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$

Matyas: $f(x) = 0.26\left(x_1^2 + x_2^2\right) - 0.48 x_1 x_2$

McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$

Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5\,i\,x_i\right)^4$

Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$
Table 11: Continued.

Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$

Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$

Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$

Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i\,x_i^2}{\pi}\right)$, where $m = 10$

Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$

Branin: $f(x) = a\left(x_2 - bx_1^2 + cx_1 - r\right)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = 5.1/(4\pi^2)$, $c = 5/\pi$, $r = 6$, $s = 10$, and $t = 1/(8\pi)$

Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]$

Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$

Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$

Gramacy & Lee (2008): $f(x) = x_1 \exp\left(-x_1^2 - x_2^2\right)$

Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$

Easton and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$

Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}\left(x_j - P_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, and $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$

Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$

Wood: $f(x_1, x_2, x_3, x_4) = 100\left(x_2 - x_1^2\right)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$

De Jong max: $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1 - x_1)^2$
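A few of the benchmark formulations in Table 11 translate directly into code. The following sketch implements Ackley, Rosenbrock, and Rastrigin and checks each at its known global minimum ($f = 0$ at the origin for Ackley and Rastrigin, and at $(1, \ldots, 1)$ for Rosenbrock):

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Ackley function as given in Table 11 (a=20, b=0.2, c=2*pi)."""
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(c * v) for v in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def rosenbrock(x):
    """Rosenbrock valley, summed over consecutive coordinate pairs."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """Rastrigin function: 10*d plus a sum of cosine-modulated squares."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

# Sanity checks at the known global minima.
assert abs(ackley([0.0, 0.0])) < 1e-9
assert rosenbrock([1.0, 1.0, 1.0]) == 0.0
assert abs(rastrigin([0.0, 0.0])) < 1e-9
```

Evaluations like these are what any of the compared algorithms minimize; the remaining table entries follow the same pattern.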
Figure 13: Convergence of HCA on the Hypersphere function with six variables (global-best solution versus iteration number; convergence at iteration 919).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems, 2nd I*PROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Proceedings of the 12th International Conference on Intelligent Computing (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Proceedings of the Advanced Research on Electronic Commerce, Web Application, and Communication International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.
Figure 9: The global and local convergence of the HCA versus the number of iterations (Ackley function; convergence at iteration 383).
Figure 10: Easom function graph (from [4]).
[36] Y Li Z-H Zhan S Lin J Zhang and X Luo ldquoCompetitiveand cooperative particle swarm optimization with informationsharingmechanism for global optimization problemsrdquo Informa-tion Sciences vol 293 pp 370ndash382 2015
[37] M Mavrovouniotis and S Yang ldquoAnt colony optimization withdirect communication for the traveling salesman problemrdquoin Proceedings of the 2010 UK Workshop on ComputationalIntelligence UKCI 2010 UK September 2010
[38] T Davie Fundamentals of Hydrology Taylor amp Francis 2008[39] J Southard An Introduction to Fluid Motions Sediment Trans-
port and Current-Generated Sedimentary Structures [Ph Dthesis] Massachusetts Institute of Technology Cambridge MAUSA 2006
[40] B R Colby Effect of Depth of Flow on Discharge of BedMaterialUS Geological Survey 1961
[41] M Bishop and G H Locket Introduction to Chemistry Ben-jamin Cummings San Francisco Calif USA 2002
[42] J M Wallace and P V Hobbs Atmospheric Science An Intro-ductory Survey vol 92 Academic Press 2006
[43] R Hill A First Course in CodingTheory Clarendon Press 1986[44] H Huang and Z Hao ldquoACO for continuous optimization
based on discrete encodingrdquo in Proceedings of the InternationalWorkshop on Ant Colony Optimization and Swarm Intelligencepp 504-505 2006
22 Journal of Optimization
Table 11: The mathematical formulations.

Ackley: $f(x) = -a \exp\left(-b\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d}\cos(c x_i)\right) + a + \exp(1)$, with $a = 20$, $b = 0.2$, $c = 2\pi$.
Cross-in-Tray: $f(x) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$.
Drop-Wave: $f(x) = -\dfrac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5\left(x_1^2 + x_2^2\right) + 2}$.
Gramacy & Lee (2012): $f(x) = \dfrac{\sin(10\pi x)}{2x} + (x - 1)^4$.
Griewank: $f(x) = \sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$.
Holder Table: $f(x) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$.
Levy: $f(x) = \sin^2(3\pi w_1) + \sum_{i=1}^{d-1}(w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_d - 1)^2\left[1 + \sin^2(2\pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$, $\forall i = 1, \dots, d$.
Levy N. 13: $f(x) = \sin^2(3\pi x_1) + (x_1 - 1)^2\left[1 + \sin^2(3\pi x_2)\right] + (x_2 - 1)^2\left[1 + \sin^2(2\pi x_2)\right]$.
Rastrigin: $f(x) = 10d + \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i)\right]$.
Schaffer N. 2: $f(x) = 0.5 + \dfrac{\sin^2\left(x_1^2 - x_2^2\right) - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}$.
Schaffer N. 4: $f(x) = 0.5 + \dfrac{\cos^2\left(\sin\left(\left|x_1^2 - x_2^2\right|\right)\right) - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2}$.
Schwefel: $f(x) = 418.9829d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$.
Shubert: $f(x) = \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right)$.
Bohachevsky: $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$.
Rotated Hyper-Ellipsoid: $f(x) = \sum_{i=1}^{d}\sum_{j=1}^{i} x_j^2$.
Sphere (Hyper): $f(x) = \sum_{i=1}^{d} x_i^2$.
Sum Squares: $f(x) = \sum_{i=1}^{d} i x_i^2$.
Booth: $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$.
Matyas: $f(x) = 0.26\left(x_1^2 + x_2^2\right) - 0.48 x_1 x_2$.
McCormick: $f(x) = \sin(x_1 + x_2) + (x_1 - x_2)^2 - 1.5x_1 + 2.5x_2 + 1$.
Zakharov: $f(x) = \sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$.
Three-Hump Camel: $f(x) = 2x_1^2 - 1.05x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$.
Six-Hump Camel: $f(x) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4x_2^2\right)x_2^2$.
Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{d} i\left(2x_i^2 - x_{i-1}\right)^2$.
Rosenbrock: $f(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$.
Shekel's Foxholes (De Jong N. 5): $f(x) = \left(0.002 + \sum_{i=1}^{25} \frac{1}{i + (x_1 - a_{1i})^6 + (x_2 - a_{2i})^6}\right)^{-1}$, where $a = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$.
Easom: $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_1 - \pi)^2 - (x_2 - \pi)^2\right)$.
Michalewicz: $f(x) = -\sum_{i=1}^{d} \sin(x_i)\sin^{2m}\left(\frac{i x_i^2}{\pi}\right)$, where $m = 10$.
Beale: $f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$.
Branin: $f(x) = a(x_2 - b x_1^2 + c x_1 - r)^2 + s(1 - t)\cos(x_1) + s$, with $a = 1$, $b = \frac{5.1}{4\pi^2}$, $c = \frac{5}{\pi}$, $r = 6$, $s = 10$, and $t = \frac{1}{8\pi}$.
Goldstein-Price I: $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$.
Goldstein-Price II: $f(x) = \exp\left(\frac{1}{2}\left(x_1^2 + x_2^2 - 25\right)^2\right) + \sin^4(4x_1 - 3x_2) + \frac{1}{2}(2x_1 + x_2 - 10)^2$.
Styblinski-Tang: $f(x) = \frac{1}{2}\sum_{i=1}^{d}\left(x_i^4 - 16x_i^2 + 5x_i\right)$.
Gramacy & Lee (2008): $f(x) = x_1 \exp\left(-x_1^2 - x_2^2\right)$.
Martin & Gaddy: $f(x) = (x_1 - x_2)^2 + \left(\frac{x_1 + x_2 - 10}{3}\right)^2$.
Eason and Fenton: $f(x) = \left(12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right)\left(\frac{1}{10}\right)$.
Hartmann 3-D: $f(x) = -\sum_{i=1}^{4} \alpha_i \exp\left(-\sum_{j=1}^{3} A_{ij}\left(x_j - P_{ij}\right)^2\right)$, where $\alpha = (1.0, 1.2, 3.0, 3.2)^T$, $A = \begin{pmatrix} 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3.0 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}$, $P = 10^{-4}\begin{pmatrix} 3689 & 1170 & 2673 \\ 4699 & 4387 & 7470 \\ 1091 & 8732 & 5547 \\ 381 & 5743 & 8828 \end{pmatrix}$.
Powell: $f(x) = \sum_{i=1}^{d/4}\left[\left(x_{4i-3} + 10x_{4i-2}\right)^2 + 5\left(x_{4i-1} - x_{4i}\right)^2 + \left(x_{4i-2} - 2x_{4i-1}\right)^4 + 10\left(x_{4i-3} - x_{4i}\right)^4\right]$.
Wood: $f(x_1, x_2, x_3, x_4) = 100\left(x_1^2 - x_2\right)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90\left(x_3^2 - x_4\right)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$.
De Jong (max): $f(x) = 3905.93 - 100\left(x_1^2 - x_2\right)^2 - (1 - x_1)^2$.
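To make the formulations in Table 11 concrete, here is a minimal Python sketch (our own illustration; the paper supplies no code) that evaluates three of the listed benchmarks directly from their formulas and checks them at their known global minima:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Ackley function from Table 11; global minimum f(0, ..., 0) = 0."""
    d = len(x)
    sum_sq = sum(xi * xi for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + a + math.e)

def rastrigin(x):
    """Rastrigin function; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def rosenbrock(x):
    """Rosenbrock function; global minimum f(1, ..., 1) = 0."""
    return sum(100 * (x[i + 1] - x[i] * x[i]) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

# Sanity checks at the known optima (zero up to floating-point error):
assert abs(ackley([0.0, 0.0])) < 1e-9
assert rastrigin([0.0, 0.0, 0.0]) == 0.0
assert rosenbrock([1.0, 1.0, 1.0]) == 0.0
```

Direct implementations like these are what a water-drop (or any metaheuristic) evaluation loop would call as its fitness function; the highly multimodal cases such as Ackley and Rastrigin are precisely where the ability to escape local optima, as discussed for HCA, matters most.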
Figure 13: Convergence of HCA on the Hypersphere function with six variables. [The plot shows the global-best function value (0–35) against the iteration number (0–1000); the six-variable Sphere function converges at iteration 919.]
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm – a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems, 2nd I*PROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method – a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504–505, 2006.
Submit your manuscripts athttpswwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 201
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Optimization 23
Table 11 Continued
Function name Mathematical formulations
Six-Hump Camel 119891 (119909) = (4 minus 2111990921 + 119909413 )11990921 + 11990911199092 + (minus4 + 411990922) 11990922Dixon-Price 119891 (119909) = (1199091 minus 1)2 + 119889sum
119894=1
119894 (21199092119894 minus 119909119894minus1)2Rosenbrock 119891 (119909) = 119889minus1sum
119894=1
[100 (119909119894+1 minus 1199092119894 )2 + (119909119894 minus 1)2]Shekelrsquos Foxholes (DeJong N 5)
119891 (119909) = (0002 + 25sum119894=1
1119894 + (1199091 minus 1198861119894)6 + (1199092 minus 1198862119894)6)minus1
where 119886 = (minus32 minus16 0 16 32 minus32 sdot sdot sdot 0 16 32minus32 minus32 minus32 minus32 minus32 minus16 sdot sdot sdot 32 32 32)Easom 119891 (119909) = minus cos (1199091) cos (1199092) exp (minus (1199091 minus 120587)2 minus (1199092 minus 120587)2)Michalewicz 119891 (119909) = minus 119889sum
119894=1
sin (119909119894) sin2119898 (1198941199092119894120587 ) where 119898 = 10Beale 119891 (119909) = (15 minus 1199091 + 11990911199092)2 + (225 minus 1199091 + 119909111990922)2 + (2625 minus 1199091 + 119909111990932)2Branin 119891 (119909) = 119886 (1199092 minus 11988711990921 + 1198881199091 minus 119903)2 + 119904 (1 minus 119905) cos (1199091) + 119904119886 = 1 119887 = 51(41205872) 119888 = 5120587 119903 = 6 119904 = 10 and 119905 = 18120587Goldstein-Price I 119891 (119909) = [1 + (1199091 + 1199092 + 1)2 (19 minus 141199091 + 311990921 minus 141199092 + 611990911199092 + 311990922)]times [30 + (21199091 minus 31199092)2 (18 minus 321199091 + 1211990921 + 481199092 minus 3611990911199092 + 2711990922)]Goldstein-Price II 119891 (119909) = exp 12 (11990921 + 11990922 minus 25)2 + sin4 (41199091 minus 31199092) + 12 (21199091 + 1199092 minus 10)2Styblinski-Tang 119891 (119909) = 12 119889sum119894=1 (1199094119894 minus 161199092119894 + 5119909119894)Gramacy amp Lee(2008) 119891 (119909) = 1199091 exp (minus11990921 minus 11990922)Martin amp Gaddy 119891 (119909) = (1199091 minus 1199092)2 + ((1199091 + 1199092 minus 10)3 )2Easton and Fenton 119891 (119909) = (12 + 11990921 + 1 + 1199092211990921 + 1199092111990922 + 100(11990911199092)4 ) ( 110)Hartmann 3-D
119891 (119909) = minus 4sum119894=1
120572119894 exp(minus 3sum119895=1
119860 119894119895 (119909119895 minus 119901119894119895)2) where 120572 = (10 12 30 32)119879 119860 = (
(30 10 3001 10 3530 10 3001 10 35
))
119875 = 10minus4((
3689 1170 26734699 4387 74701091 8732 5547381 5743 8828))
Powell 119891 (119909) = 1198894sum119894=1
[(1199094119894minus3 + 101199094119894minus2)2 + 5 (1199094119894minus1 minus 1199094119894)2 + (1199094119894minus2 minus 21199094119894minus1)4 + 10 (1199094119894minus3 minus 1199094119894)4]Wood 119891 (1199091 1199092 1199093 1199094) = 100 (1199091 minus 1199092)2 + (1199091 minus 1)2 + (1199093 minus 1)2 + 90 (11990923 minus 1199094)2+ 101 ((1199092 minus 1)2 + (1199094 minus 1)2) + 198 (1199092 minus 1) (1199094 minus 1)DeJong max 119891 (119909) = (390593) minus 100 (11990921 minus 1199092)2 minus (1 minus 1199091)2
24 Journal of Optimization
0 100 200 300 400 500 600 700 800 900 1000Iteration number
0
5
10
15
20
25
30
35
Func
tion
valu
e
Sphere (6v) function convergence at 919
Global-best solution
Figure 13 Convergence of HCA on the Hypersphere function withsix variables
Conflicts of Interest
The authors declare that they have no conflicts of interest
References
[1] F Rothlauf ldquoOptimization problemsrdquo in Design of ModernHeuristics Principles andApplication pp 7ndash44 Springer BerlinGermany 2011
[2] E K P Chong and S H Zak An Introduction to Optimizationvol 76 John ampWiley Sons 2013
[3] M Jamil and X-S Yang ldquoA literature survey of benchmarkfunctions for global optimisation problemsrdquo International Jour-nal ofMathematicalModelling andNumerical Optimisation vol4 no 2 pp 150ndash194 2013
[4] S Surjanovic and D Bingham Virtual library of simulationexperiments test functions and datasets Simon Fraser Univer-sity 2013 httpwwwsfucasimssurjanoindexhtml
[5] D HWolpert andWGMacready ldquoNo free lunch theorems foroptimizationrdquo IEEE Transactions on Evolutionary Computationvol 1 no 1 pp 67ndash82 1997
[6] M Mathur S B Karale S Priye V K Jayaraman and BD Kulkarni ldquoAnt colony approach to continuous functionoptimizationrdquo Industrial ampEngineering Chemistry Research vol39 no 10 pp 3814ndash3822 2000
[7] K Socha and M Dorigo ldquoAnt colony optimization for contin-uous domainsrdquo European Journal of Operational Research vol185 no 3 pp 1155ndash1173 2008
[8] G Bilchev and I C Parmee ldquoThe ant colony metaphor forsearching continuous design spacesrdquo Lecture Notes in ComputerScience (including subseries LectureNotes inArtificial Intelligenceand LectureNotes in Bioinformatics) Preface vol 993 pp 25ndash391995
[9] N Monmarche G Venturini and M Slimane ldquoOn howPachycondyla apicalis ants suggest a new search algorithmrdquoFuture Generation Computer Systems vol 16 no 8 pp 937ndash9462000
[10] J Dreo and P Siarry ldquoContinuous interacting ant colony algo-rithm based on dense heterarchyrdquo Future Generation ComputerSystems vol 20 no 5 pp 841ndash856 2004
[11] L Liu Y Dai and J Gao ldquoAnt colony optimization algorithmfor continuous domains based on position distribution modelof ant colony foragingrdquo The Scientific World Journal vol 2014Article ID 428539 9 pages 2014
[12] V K Ojha A Abraham and V Snasel ldquoACO for continuousfunction optimization A performance analysisrdquo in Proceedingsof the 2014 14th International Conference on Intelligent SystemsDesign andApplications ISDA 2014 pp 145ndash150 Japan Novem-ber 2014
[13] J H HollandAdaptation in Natural and Artificial Systems MITPress Cambridge Mass USA 1992
[14] M Mitchell An Introduction to Genetic Algorithms MIT PressCambridge MA USA 1998
[15] H Maaranen K Miettinen and A Penttinen ldquoOn initialpopulations of a genetic algorithm for continuous optimizationproblemsrdquo Journal of Global Optimization vol 37 no 3 pp405ndash436 2007
[16] R Chelouah and P Siarry ldquoContinuous genetic algorithmdesigned for the global optimization of multimodal functionsrdquoJournal of Heuristics vol 6 no 2 pp 191ndash213 2000
[17] Y-T Kao and E Zahara ldquoA hybrid genetic algorithm andparticle swarm optimization formultimodal functionsrdquoAppliedSoft Computing vol 8 no 2 pp 849ndash857 2008
[18] K James and E Russell ldquoParticle swarm optimizationrdquo inProceedings of the 1995 IEEE International Conference on NeuralNetworks pp 1942ndash1948 IEEE Perth WA Australia 1942
[19] W Chen J Zhang Y Lin et al ldquoParticle swarm optimizationwith an aging leader and challengersrdquo IEEE Transactions onEvolutionary Computation vol 17 no 2 pp 241ndash258 2013
[20] J F Schutte and A A Groenwold ldquoA study of global optimiza-tion using particle swarmsrdquo Journal of Global Optimization vol31 no 1 pp 93ndash108 2005
[21] P S Shelokar P Siarry V K Jayaraman and B D KulkarnildquoParticle swarm and ant colony algorithms hybridized forimproved continuous optimizationrdquo Applied Mathematics andComputation vol 188 no 1 pp 129ndash142 2007
[22] J Vesterstrom and R Thomsen ldquoA comparative study of differ-ential evolution particle swarm optimization and evolutionaryalgorithms on numerical benchmark problemsrdquo in Proceedingsof the 2004 Congress on Evolutionary Computation (IEEE CatNo04TH8753) vol 2 pp 1980ndash1987 IEEE Portland OR USA2004
[23] S C Esquivel andC A C Coello ldquoOn the use of particle swarmoptimization with multimodal functionsrdquo in Proceedings of theCongress on Evolutionary Computation (CEC rsquo03) pp 1130ndash1136IEEE Canberra Australia December 2003
[24] D T Pham A Ghanbarzadeh E Koc S Otri S Rahim andM Zaidi ldquoThe bees algorithmmdasha novel tool for complex opti-misationrdquo in Proceedings of the Intelligent Production Machinesand Systems-2nd I PROMS Virtual International ConferenceElsevier July 2006
[25] A Ahrari and A A Atai ldquoGrenade ExplosionMethodmdasha noveltool for optimization of multimodal functionsrdquo Applied SoftComputing vol 10 no 4 pp 1132ndash1140 2010
[26] H Shah-Hosseini ldquoProblem solving by intelligent water dropsrdquoin Proceedings of the 2007 IEEE Congress on EvolutionaryComputation CEC 2007 pp 3226ndash3231 sgp September 2007
Journal of Optimization 25
[27] H Shah-Hosseini ldquoAn approach to continuous optimizationby the intelligent water drops algorithmrdquo in Proceedings of the4th International Conference of Cognitive Science ICCS 2011 pp224ndash229 Iran May 2011
[28] H Eskandar A Sadollah A Bahreininejad and M HamdildquoWater cycle algorithm - A novel metaheuristic optimizationmethod for solving constrained engineering optimization prob-lemsrdquo Computers amp Structures vol 110-111 pp 151ndash166 2012
[29] A Sadollah H Eskandar A Bahreininejad and J H KimldquoWater cycle algorithm with evaporation rate for solving con-strained and unconstrained optimization problemsrdquo AppliedSoft Computing vol 30 pp 58ndash71 2015
[30] Q Luo C Wen S Qiao and Y Zhou ldquoDual-system water cyclealgorithm for constrained engineering optimization problemsrdquoin Proceedings of the Intelligent Computing Theories and Appli-cation 12th International Conference ICIC 2016 D-S HuangV Bevilacqua and P Premaratne Eds pp 730ndash741 SpringerInternational Publishing Lanzhou China August 2016
[31] A A Heidari R Ali Abbaspour and A Rezaee Jordehi ldquoAnefficient chaotic water cycle algorithm for optimization tasksrdquoNeural Computing and Applications vol 28 no 1 pp 57ndash852017
[32] M M al-Rifaie J M Bishop and T Blackwell ldquoInformationsharing impact of stochastic diffusion search on differentialevolution algorithmrdquoMemetic Computing vol 4 no 4 pp 327ndash338 2012
[33] T Hogg and C P Williams ldquoSolving the really hard problemswith cooperative searchrdquo in Proceedings of the National Confer-ence on Artificial Intelligence p 231 1993
[34] C Ding L Lu Y Liu and W Peng ldquoSwarm intelligence opti-mization and its applicationsrdquo in Proceedings of the AdvancedResearch on Electronic Commerce Web Application and Com-munication International Conference ECWAC 2011 G Shenand X Huang Eds pp 458ndash464 Springer Guangzhou ChinaApril 2011
[35] L Goel and V K Panchal ldquoFeature extraction through infor-mation sharing in swarm intelligence techniquesrdquo Knowledge-Based Processes in Software Development pp 151ndash175 2013
[36] Y Li Z-H Zhan S Lin J Zhang and X Luo ldquoCompetitiveand cooperative particle swarm optimization with informationsharingmechanism for global optimization problemsrdquo Informa-tion Sciences vol 293 pp 370ndash382 2015
[37] M Mavrovouniotis and S Yang ldquoAnt colony optimization withdirect communication for the traveling salesman problemrdquoin Proceedings of the 2010 UK Workshop on ComputationalIntelligence UKCI 2010 UK September 2010
[38] T Davie Fundamentals of Hydrology Taylor amp Francis 2008[39] J Southard An Introduction to Fluid Motions Sediment Trans-
port and Current-Generated Sedimentary Structures [Ph Dthesis] Massachusetts Institute of Technology Cambridge MAUSA 2006
[40] B R Colby Effect of Depth of Flow on Discharge of BedMaterialUS Geological Survey 1961
[41] M Bishop and G H Locket Introduction to Chemistry Ben-jamin Cummings San Francisco Calif USA 2002
[42] J M Wallace and P V Hobbs Atmospheric Science An Intro-ductory Survey vol 92 Academic Press 2006
[43] R Hill A First Course in CodingTheory Clarendon Press 1986[44] H Huang and Z Hao ldquoACO for continuous optimization
based on discrete encodingrdquo in Proceedings of the InternationalWorkshop on Ant Colony Optimization and Swarm Intelligencepp 504-505 2006
[Figure 13: Convergence of HCA on the Hypersphere function with six variables. Global-best function value plotted against iteration number (0–1000); the Sphere (6-variable) function converges at iteration 919.]
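The Hypersphere benchmark used in Figure 13 is the standard Sphere function, f(x) = Σᵢ xᵢ², whose global minimum is 0 at the origin. A minimal Python sketch of the six-variable instance follows; this is an illustrative implementation of the benchmark only, not the authors' HCA code.

```python
def sphere(x):
    """Sphere (Hypersphere) benchmark: sum of squared components.

    Unimodal, separable; global minimum f(0, ..., 0) = 0.
    """
    return sum(xi * xi for xi in x)


# Six-variable instance, matching Figure 13.
print(sphere([0.0] * 6))                           # global optimum -> 0.0
print(sphere([1.0, 2.0, 3.0, -1.0, -2.0, -3.0]))   # -> 28.0
```

An optimizer such as HCA would minimize `sphere` over a bounded domain (commonly xᵢ ∈ [-5.12, 5.12]), with the plotted curve tracking the best value found per iteration.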
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] F. Rothlauf, "Optimization problems," in Design of Modern Heuristics: Principles and Application, pp. 7–44, Springer, Berlin, Germany, 2011.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, vol. 76, John Wiley & Sons, 2013.
[3] M. Jamil and X.-S. Yang, "A literature survey of benchmark functions for global optimisation problems," International Journal of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
[4] S. Surjanovic and D. Bingham, Virtual Library of Simulation Experiments: Test Functions and Datasets, Simon Fraser University, 2013, http://www.sfu.ca/~ssurjano/index.html.
[5] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
[6] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Industrial & Engineering Chemistry Research, vol. 39, no. 10, pp. 3814–3822, 2000.
[7] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008.
[8] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," Lecture Notes in Computer Science, vol. 993, pp. 25–39, 1995.
[9] N. Monmarche, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.
[10] J. Dreo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Generation Computer Systems, vol. 20, no. 5, pp. 841–856, 2004.
[11] L. Liu, Y. Dai, and J. Gao, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," The Scientific World Journal, vol. 2014, Article ID 428539, 9 pages, 2014.
[12] V. K. Ojha, A. Abraham, and V. Snasel, "ACO for continuous function optimization: a performance analysis," in Proceedings of the 14th International Conference on Intelligent Systems Design and Applications (ISDA 2014), pp. 145–150, Japan, November 2014.
[13] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
[14] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
[15] H. Maaranen, K. Miettinen, and A. Penttinen, "On initial populations of a genetic algorithm for continuous optimization problems," Journal of Global Optimization, vol. 37, no. 3, pp. 405–436, 2007.
[16] R. Chelouah and P. Siarry, "Continuous genetic algorithm designed for the global optimization of multimodal functions," Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
[17] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing, vol. 8, no. 2, pp. 849–857, 2008.
[18] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE, Perth, WA, Australia, 1995.
[19] W. Chen, J. Zhang, Y. Lin et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[20] J. F. Schutte and A. A. Groenwold, "A study of global optimization using particle swarms," Journal of Global Optimization, vol. 31, no. 1, pp. 93–108, 2005.
[21] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[22] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 2, pp. 1980–1987, IEEE, Portland, OR, USA, 2004.
[23] S. C. Esquivel and C. A. C. Coello, "On the use of particle swarm optimization with multimodal functions," in Proceedings of the Congress on Evolutionary Computation (CEC '03), pp. 1130–1136, IEEE, Canberra, Australia, December 2003.
[24] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm: a novel tool for complex optimisation," in Proceedings of the Intelligent Production Machines and Systems 2nd IPROMS Virtual International Conference, Elsevier, July 2006.
[25] A. Ahrari and A. A. Atai, "Grenade Explosion Method: a novel tool for optimization of multimodal functions," Applied Soft Computing, vol. 10, no. 4, pp. 1132–1140, 2010.
[26] H. Shah-Hosseini, "Problem solving by intelligent water drops," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 3226–3231, Singapore, September 2007.
[27] H. Shah-Hosseini, "An approach to continuous optimization by the intelligent water drops algorithm," in Proceedings of the 4th International Conference of Cognitive Science (ICCS 2011), pp. 224–229, Iran, May 2011.
[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151–166, 2012.
[29] A. Sadollah, H. Eskandar, A. Bahreininejad, and J. H. Kim, "Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems," Applied Soft Computing, vol. 30, pp. 58–71, 2015.
[30] Q. Luo, C. Wen, S. Qiao, and Y. Zhou, "Dual-system water cycle algorithm for constrained engineering optimization problems," in Intelligent Computing Theories and Application: 12th International Conference (ICIC 2016), D.-S. Huang, V. Bevilacqua, and P. Premaratne, Eds., pp. 730–741, Springer International Publishing, Lanzhou, China, August 2016.
[31] A. A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, "An efficient chaotic water cycle algorithm for optimization tasks," Neural Computing and Applications, vol. 28, no. 1, pp. 57–85, 2017.
[32] M. M. al-Rifaie, J. M. Bishop, and T. Blackwell, "Information sharing impact of stochastic diffusion search on differential evolution algorithm," Memetic Computing, vol. 4, no. 4, pp. 327–338, 2012.
[33] T. Hogg and C. P. Williams, "Solving the really hard problems with cooperative search," in Proceedings of the National Conference on Artificial Intelligence, p. 231, 1993.
[34] C. Ding, L. Lu, Y. Liu, and W. Peng, "Swarm intelligence optimization and its applications," in Advanced Research on Electronic Commerce, Web Application, and Communication: International Conference (ECWAC 2011), G. Shen and X. Huang, Eds., pp. 458–464, Springer, Guangzhou, China, April 2011.
[35] L. Goel and V. K. Panchal, "Feature extraction through information sharing in swarm intelligence techniques," in Knowledge-Based Processes in Software Development, pp. 151–175, 2013.
[36] Y. Li, Z.-H. Zhan, S. Lin, J. Zhang, and X. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, pp. 370–382, 2015.
[37] M. Mavrovouniotis and S. Yang, "Ant colony optimization with direct communication for the traveling salesman problem," in Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI 2010), UK, September 2010.
[38] T. Davie, Fundamentals of Hydrology, Taylor & Francis, 2008.
[39] J. Southard, An Introduction to Fluid Motions, Sediment Transport, and Current-Generated Sedimentary Structures [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2006.
[40] B. R. Colby, Effect of Depth of Flow on Discharge of Bed Material, US Geological Survey, 1961.
[41] M. Bishop and G. H. Locket, Introduction to Chemistry, Benjamin Cummings, San Francisco, Calif, USA, 2002.
[42] J. M. Wallace and P. V. Hobbs, Atmospheric Science: An Introductory Survey, vol. 92, Academic Press, 2006.
[43] R. Hill, A First Course in Coding Theory, Clarendon Press, 1986.
[44] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, pp. 504-505, 2006.