
Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2015, Article ID 708131, 16 pages
http://dx.doi.org/10.1155/2015/708131

Research Article

Comparative Study of Metaheuristics for the Curve-Fitting Problem: Modeling Neurotransmitter Diffusion and Synaptic Receptor Activation

Jesús Montes, Antonio LaTorre, Santiago Muelas, Ángel Merchán-Pérez, and José M. Peña

DATSI, ETS de Ingenieros Informáticos, Universidad Politécnica de Madrid, Campus de Montegancedo, 28660 Boadilla del Monte, Madrid, Spain

Correspondence should be addressed to Jesús Montes; jmontes@fi.upm.es

Received 22 October 2014; Revised 10 April 2015; Accepted 15 April 2015

Academic Editor: Jinde Cao

Copyright © 2015 Jesús Montes et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Synapses are key elements in the information transmission in the nervous system. Among the different approaches to study them, the use of computational simulations is identified as the most promising technique. Simulations, however, do not provide generalized models of the underlying biochemical phenomena, but a set of observations, or time-series curves, displaying the behavior of the synapse in the scenario represented. Finding a general model of these curves, like a set of mathematical equations, could be an achievement in the study of synaptic behavior. In this paper, we propose an exploratory analysis in which selected curve models are proposed, and state-of-the-art metaheuristics are used and compared to fit the free coefficients of these curves to the data obtained from simulations. Experimental results demonstrate that several models can fit these data, though a deeper analysis from a biological perspective reveals that some are better suited for this purpose, as they represent more accurately the biological process. Based on the results of this analysis, we propose a set of mathematical equations and a methodology, adequate for modeling several aspects of biochemical synaptic behavior.

1. Introduction

Most information in the mammalian nervous system flows through chemical synapses. These are complex structures comprising a presynaptic element (an axon terminal) and a postsynaptic element (a dendritic spine, a dendritic shaft, an axon, or a soma) separated by a narrow gap known as the synaptic cleft (see Figure 1). The neurotransmitter is stored in synaptic vesicles located in the presynaptic terminal. For release to take place, the membrane of one or more vesicles must fuse with a region of the presynaptic membrane, the active zone, lining the synaptic cleft. On the opposite side, the postsynaptic membrane is populated by specific receptors.

Multiple factors influence the diffusion of neurotransmitter molecules and their interaction with specific receptors [1–3]. The initial concentration of the released neurotransmitter in the extracellular space depends on the volume of the synaptic cleft. The subsequent diffusion of neurotransmitter molecules outside the cleft may be influenced by the geometrical characteristics of the membranes that surround the synaptic junction, and by the presence and concentration of transporter molecules. However, direct observation of the various synaptic events at the molecular and ultrastructural levels in vivo or in vitro is rather difficult, if not impossible. Simulation approaches are thus useful to assess the influence of different parameters on the behavior of the synapse (e.g., [4, 5]).

Figure 1: Chemical synapses. (a) Electron microphotograph of the cerebral cortex, where three axon terminals (asterisks) establish synapses with three dendritic elements showing clearly visible postsynaptic densities (arrows). (b) Simplified model of a chemical synapse, highlighting its most significant parts: the presynaptic terminal with neurotransmitter vesicles, the active zone, the synaptic cleft where neurotransmitter diffusion takes place, and the postsynaptic density with synaptic receptors.

Simulation approaches in neuroscience have considered different models, scales, and techniques, according to the phenomenon being studied. Biochemical processes, such as neurotransmitter diffusion, require Monte Carlo particle-based simulators like MCell [6, 7], ChemCell [8], or Smoldyn [9, 10]. These simulation techniques allow computational neuroscientists to reproduce these biological processes in a manner that they can be thoroughly observed, systematically analyzed, and easily adapted and/or modified. The results of these simulations can be studied to extract general conclusions and infer basic properties of the natural processes being studied.

In the case of chemical synapses, many aspects of their behavior during neurotransmitter release and diffusion can be simulated. Simulations, however, do not provide generalized models of these series, showing only the specific values observed each time, and in this case these are subject to the stochastic nature of the Monte Carlo simulations used to produce them. Having general empirical models of these time series, such as a set of mathematical equations, could be an important achievement in the study of chemical synaptic behavior. Finding these equations, however, is not a trivial task.

Having the raw simulation data at hand, the first step in finding these equations is to identify the type of mathematical expression we are looking for. A reasonable approach to achieve this is to study the underlying physicochemical processes that take place during synapse operation (e.g., molecular diffusion and receptor kinetics) and try to analytically infer the synaptic equations from them. This is, however, sometimes not possible (as is explained in detail in Section 2.1). When this is the case, an exploratory analysis can be attempted, testing different general mathematical models against the empirical data and trying to find the one with the best adjustment. This becomes, in essence, a complex curve-fitting problem, in which a set of general equations is fitted to experimentally obtained data [11].

This problem is, in its general form, an optimization task, to which different search strategies in the field of metaheuristics have been successfully applied in the past. This type of search techniques, with Evolutionary Algorithms (EAs) as one of the most relevant exponents, have shown over the last decades their ability to deal with optimization problems in many different domains, ranging from completely theoretical mathematical functions (benchmarks) [12] to real-world problems in many different fields: biology [13–17], engineering [18–21], and logistics [22–25], to name a few. Furthermore, different metaheuristics have been previously used in related curve-fitting problems [26–29], which motivates the use of this type of algorithms for our particular problem.

2. Materials and Methods

2.1. Synaptic Curves. We analyzed simulations based on simplified models of excitatory synapses where AMPA receptors were present and the neurotransmitter involved was glutamate. (The AMPA receptor is an ionotropic receptor commonly found in excitatory synapses using glutamate as neurotransmitter. It is named from its specific agonist, alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid.) We based our study on a corpus of synaptic simulations very similar to the one presented in [11]. Our corpus consisted of 500 different synaptic configurations, each simulated 500 times (to incorporate stochastic variability), resulting in a total of 250,000 simulations. As in [11], geometrical parameters were taken into account to incorporate synaptic variability, and simulations were performed using the MCell software on the Magerit supercomputer [30]. Each one of the 500 synaptic configurations represented a unique synapse, morphologically different from the others. Simulations studied neurotransmitter diffusion and synaptic receptor behavior in each configuration. After the synaptic simulations were performed, two separated (but related) aspects of synaptic behavior were considered:

(i) The average variation of neurotransmitter concentration in the synaptic cleft over time, after a single vesicle release.

(ii) The average variation of the total number of synaptic receptors in the open state over time, again after a single neurotransmitter vesicle release. (Molecular kinetics of synaptic receptors follow Markov-chain-like models, with several possible states and transition probabilities depending on environmental factors and chemical reaction rate constants. The most relevant state in the AMPA receptor is the open state. It is directly related with the electrical synaptic response [31].)

These two aspects were plotted as time-series curves for each of the 500 synaptic configurations, resulting in a total of 1000 curves (500 glutamate concentration curves and 500 open AMPA receptor curves), each of them being an average of 500 simulations of the same synaptic configuration. An example of these curves can be seen in Figure 2. The curves obtained were consistent with previous studies [32–35]. Each one of these 1000 curves represented the behavior of a specific synapse, morphologically different from the others. Each curve became a separate curve-fitting problem, addressed using the metaheuristics described in Section 2.2.

2.1.1. Glutamate Concentration Curves. In the case of the glutamate concentration (Figure 2(a)), all curves showed an initial peak (maximum concentration at the moment of vesicle liberation), followed by what very closely resembled a typical exponential decay. To understand this curve we referred to Fick's second law of diffusion (derived by the German physiologist Adolf Fick in 1855), which predicts how diffusion causes the concentration of a substance to change with time. For two or more dimensions this law states that

\frac{\partial \phi_A(r, t)}{\partial t} = D_A \nabla^2 \phi_A(r, t),   (1)

where φ_A(r, t) is the concentration of a given substance A (glutamate in our case) at location r and time t, and D_A is the coefficient of diffusion of substance A. This is also called the heat equation and has no analytic solution for our specific problem, due to the synaptic three-dimensional geometry. (The heat equation can only be solved analytically for a few idealized one-dimensional problems.) It is known, however, that its solution must follow a typical exponential decay. In exponential decay processes the quantity N of a substance decreases in the following way:

\frac{dN}{dt} = -\frac{1}{\tau} N,   (2)

where τ is the mean lifetime, or exponential time constant, of the process. We solve (2) as follows:

\frac{dN}{dt} = -\frac{1}{\tau} N, \qquad \frac{dN}{N} = -\frac{1}{\tau}\, dt.   (3)

We integrate both sides:

\ln N = -\frac{1}{\tau} t + C.   (4)

We now apply exp(·) to both sides:

e^{\ln N} = e^{-t/\tau + C}, \qquad N = e^{-t/\tau} e^{C}.   (5)

Since e^C is constant, we rename it to N_0 and we finally obtain a solution to (2) in the following form:

N(t) = N_0 e^{-t/\tau},   (6)

where N_0 is the initial quantity, since at t = 0, e^{-t/\tau} = 1 for all τ ≠ 0. We know that Fick's second law of diffusion (1) must follow an exponential decay (2). Therefore, its solution must be in the form of (6). The neurotransmitter diffuses through a certain volume, the synaptic cleft. As explained, the neurotransmitter concentration φ_A(r, t) in every point of this volume changes following an exponential decay (6). We propose a new function Φ_A(t) as the average concentration of neurotransmitter in this volume, and we assume that it also follows an exponential decay. We define Φ_A(t) with the following equation:

\Phi_A(t) = C_0 e^{-t/\tau},   (7)

where C_0 is the initial average concentration at t = 0. Since the number of neurotransmitter molecules released (vesicle size) and the volume of the synaptic cleft are known (they are part of the synaptic configuration simulation parameters), C_0 can be easily calculated beforehand. Please remember that Φ_A(t) is the average concentration of A in a defined volume, as opposed to φ_A(r, t), which is the exact concentration at position r. Φ_A(t) is the value measured in our simulation experiments (shown, e.g., in Figure 2(a)).

The value of τ in (7), however, cannot be analytically calculated. From the previously mentioned link between (1) and (2), we deduce a connection between each side of both equations, concluding that

(i) ∂φ_A(r, t)/∂t relates to dN/dt (left side of both equations);

(ii) D_A ∇²φ_A(r, t) relates to −(1/τ)N (right side of both equations).

From the second item we deduce that the τ constant should be somehow calculated using D_A and some unknown aspects of the synaptic geometry, which are factored inside the ∇²φ_A(r, t) term that cannot be calculated analytically for an arbitrary 3D problem. From τ and D_A we derive a new coefficient υ, so that

\frac{1}{\tau} = D_A \frac{1}{\upsilon}.   (8)

The coefficient υ represents the effects of geometry, separated from the substance diffusion coefficient (D_A). For our problem, υ cannot be determined analytically and needs to be estimated using numerical and/or statistical methods. We incorporate υ in (7) and obtain

\Phi_A(t) = C_0 e^{-D_A t / \upsilon},   (9)

which is the final expression of our neurotransmitter (glutamate) concentration time-series model. The problem of estimating the value of υ still remains, and for this we need to return to our synaptic simulations. Using curve-fitting methods, we can find the values of υ that most accurately model the results observed during these experiments.

Figure 2: Examples of synaptic curves after a neurotransmitter vesicle release. (a) Variation of neurotransmitter (in this case glutamate) concentration through time inside the synaptic cleft. (b) Variation in the total number of synaptic receptors (in this case AMPA) in the open state through time.
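As an illustration of the υ-estimation step just described, the following sketch fits equation (9) to a single averaged glutamate concentration curve by minimizing the sum of squared residuals over a bounded interval. It is a minimal example and not the experimental pipeline of this paper: the synthetic curve, the hypothetical υ value used to generate it, the bounds, and the use of SciPy's bounded scalar minimizer are all assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def glutamate_model(t, upsilon, c0, d_a):
    """Equation (9): average neurotransmitter concentration in the cleft."""
    return c0 * np.exp(-d_a * t / upsilon)

# Stand-in for one averaged MCell curve: time in ms, concentration in mol/L.
t = np.linspace(0.0, 1.0, 200)
c0, d_a = 4.749e-3, 0.33            # known from the synaptic configuration
upsilon_true = 0.033                # hypothetical value used only to build test data
observed = glutamate_model(t, upsilon_true, c0, d_a)

def sse(upsilon):
    """Sum of squared residuals between the model and the observed curve."""
    return float(np.sum((observed - glutamate_model(t, upsilon, c0, d_a)) ** 2))

# A single free coefficient, so a bounded scalar search is enough for this sketch.
result = minimize_scalar(sse, bounds=(1e-4, 1.0), method="bounded")
print("estimated upsilon:", result.x)
```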

2.1.2. AMPA Open Receptors Curves. All open AMPA receptors curves showed a rapid climb to a single peak, followed by a long tail slowly descending towards 0 (as shown in Figure 2(b)). The initial section of the curve (containing the rapid ascent, peak, and descent) contains the most relevant information, that is, the amplitude of the peak and the time it takes to reach it. A general exploration of the simulation data showed that these two characteristics are highly dependent on the neurotransmitter concentration and synaptic geometry.

As we have already established, neurotransmitter concentration cannot be predicted analytically (Fick's second law cannot be solved analytically for 2 or more dimensions); therefore, open AMPA receptors curves will not be either (since they are dependent on neurotransmitter concentration). Moreover, synaptic geometry variations can be extremely diverse, making the task of inferring a mathematical equation much harder.

Therefore, we adopted the exploratory analysis approach, considering a set of selected general mathematical models and testing them against the simulation data. We based this set on a previous analysis presented in [11]. Our work differs from this study in that in [11] this problem was tackled only by means of a single approximate technique, which we use here as a baseline to compare different global optimization heuristics. We also performed a subsequent analysis of the optimization results, taking into consideration the relevant biological implications of our findings and proposing generalized synaptic curve models. From [11], however, we selected the best five curves as a starting point:

(i) 4-by-4 degree polynomial rational function (9 coefficients):

y = \frac{p_1 x^4 + p_2 x^3 + p_3 x^2 + p_4 x + p_5}{x^4 + q_1 x^3 + q_2 x^2 + q_3 x + q_4}   (10)

(ii) 8-term Fourier series (18 coefficients):

y = a_0 + \sum_{i=1}^{8} \left( a_i \cos(i x w) + b_i \sin(i x w) \right)   (11)

(iii) 8-term Gauss series (25 coefficients):

y = a_0 + \sum_{i=1}^{8} a_i e^{-\left((x - b_i)/c_i\right)^2}   (12)

(iv) 2-term exponential function (4 coefficients):

y = a e^{b x} + c e^{d x}   (13)

(v) 9th degree polynomial (10 coefficients):

y = p_0 + \sum_{i=1}^{9} p_i x^i   (14)

To decide which of these curves best serves our purpose, we need to test how well each of them can model the experimental data we observed during simulations. We need, therefore, to fit every curve to the data and see how they perform.
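For reference, the sketch below encodes two of these candidate models, the 2-term exponential function (13) and the 4-by-4 degree polynomial rational function (10), as plain Python callables mapping a coefficient vector to predicted values. This is just one possible encoding of the equations above, not code from the study.

```python
import numpy as np

def exp2(coeffs, x):
    """2-term exponential function, equation (13): y = a*e^(b*x) + c*e^(d*x)."""
    a, b, c, d = coeffs
    return a * np.exp(b * x) + c * np.exp(d * x)

def rational44(coeffs, x):
    """4-by-4 degree polynomial rational function, equation (10), 9 coefficients."""
    p = coeffs[:5]                            # numerator p1..p5, highest degree first
    q = np.concatenate(([1.0], coeffs[5:]))   # denominator with leading term fixed to 1
    return np.polyval(p, x) / np.polyval(q, x)

# Quick smoke test on an arbitrary coefficient vector.
x = np.linspace(0.0, 5.0, 6)
print(exp2(np.array([40.0, -1.0, -40.0, -5.0]), x))
```

The remaining models (11), (12), and (14) can be encoded in the same way; each curve-fitting problem then reduces to searching the coefficient space of the corresponding callable.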


2.2. Algorithms Used in the Comparison. This section reviews the algorithms considered for the previously described curve-fitting problem, including the reference baseline algorithm (Nonlinear Least Squares) used in previous studies [11] and several metaheuristics (five Evolutionary Algorithms and two local searches) whose performance will be compared in Section 3.

2.2.1. Baseline Algorithm: Nonlinear Least Squares. The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB [38] or R [39]. Loosely speaking, this algorithm tries to approximate the model by a linear one in the first place, to further refine the parameters by successive iterations. This method was successfully used in a seminal work [11] and thus has been selected as the baseline algorithm to test the performance of several metaheuristics, which will be reviewed in the following sections.
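As a rough illustration of this kind of baseline, the snippet below fits the 2-term exponential model (13) with SciPy's curve_fit, an iterative nonlinear least-squares routine comparable in spirit to the MATLAB/R implementations cited above. The synthetic data and the starting point are assumptions, and this is not the exact NLLS configuration used in [11].

```python
import numpy as np
from scipy.optimize import curve_fit

def exp2(x, a, b, c, d):
    """2-term exponential model, equation (13)."""
    return a * np.exp(b * x) + c * np.exp(d * x)

# Synthetic stand-in for one averaged open-AMPA-receptors curve.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 5.0, 200)
y = exp2(t, 45.0, -4.0, -45.0, -6.0) + rng.normal(0.0, 0.3, t.size)

p0 = [10.0, -1.0, -10.0, -2.0]                # initial guess for the iterations
coeffs, _ = curve_fit(exp2, t, y, p0=p0, maxfev=10000)
print("fitted coefficients:", coeffs)
```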

2.2.2. Selected Metaheuristics

Genetic Algorithm. Genetic Algorithms (GAs) [40] are one of the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of its operators makes it possible to use GAs both in the continuous and the discrete domains. For our experiments, we have used a Real-Coded GA that uses the BLX-α crossover with α = 0.5 and the Gaussian mutation [41]. Different population sizes and operator probabilities were tested, as described in Section 3.

Differential Evolution. Differential Evolution (DE) [42] is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it only has three free parameters to configure) have gained DE a lot of popularity in recent years, and it is being used, both in its canonical form and in more advanced flavors (as we will see below), in many different complex optimization scenarios. For our experiments, we have considered a DE algorithm with Exponential Crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), F, and CR.
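A minimal way to reproduce this kind of run with an off-the-shelf implementation is sketched below. SciPy's differential_evolution is used as a stand-in for the canonical DE (here with an exponential-crossover strategy and fixed F and CR); the residual sum of squares is used as the objective for brevity, whereas the study itself maximizes the adjusted R² defined in Section 2.3, and the narrower bounds on the exponents are an assumption to keep the example numerically tame.

```python
import numpy as np
from scipy.optimize import differential_evolution

def exp2(x, a, b, c, d):
    return a * np.exp(b * x) + c * np.exp(d * x)

t = np.linspace(0.0, 5.0, 200)
target = exp2(t, 45.0, -4.0, -45.0, -6.0)     # stand-in for simulation data

def rss(coeffs):
    """Residual sum of squares of the candidate coefficient vector."""
    return float(np.sum((target - exp2(t, *coeffs)) ** 2))

# Amplitudes searched in the paper's [-500, 500] interval; exponents kept negative.
bounds = [(-500.0, 500.0), (-20.0, 0.0), (-500.0, 500.0), (-20.0, 0.0)]
result = differential_evolution(rss, bounds, strategy="rand1exp",
                                mutation=0.5, recombination=0.9,
                                popsize=25, maxiter=2000, tol=1e-10, seed=1)
print("best coefficients:", result.x, "RSS:", result.fun)
```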

Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm proposed in [43] does not consider fixed F and CR parameters but instead allows the algorithm to adjust their values with certain probability, controlled by τ_F and τ_CR, respectively.
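The self-adaptation rule can be sketched as follows: before generating each trial vector, the individual's F and CR are regenerated with probabilities τ_F and τ_CR. The numeric ranges used below (F in [0.1, 1.0], CR in [0, 1]) are the usual ones in the literature on this algorithm and are an assumption here, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_adapt(f_i, cr_i, tau_f=0.1, tau_cr=0.1):
    """Per-individual regeneration of F and CR (self-adaptive DE control rule)."""
    if rng.random() < tau_f:
        f_i = 0.1 + 0.9 * rng.random()   # new F drawn uniformly from [0.1, 1.0]
    if rng.random() < tau_cr:
        cr_i = rng.random()              # new CR drawn uniformly from [0, 1]
    return f_i, cr_i
```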

Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. In the Generalized Opposition-Based Differential Evolution (GODE) algorithm [44], the Opposition-Based Learning concept is used, which basically means that, with some probability, the algorithm tries not only to evaluate the solutions in the current population but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter to control the probability of applying the opposite search.
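The opposition step itself is simple to sketch: with the configured probability (godeProb in Table 1), each candidate is complemented by a generalized opposite point computed inside the current search bounds, and the best individuals among the original and opposite populations survive. The k·(a + b) − x form below follows the usual description of generalized opposition-based learning and is an assumption, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_opposites(population, lower, upper):
    """Generalized opposite of each candidate, clipped back into [lower, upper]."""
    k = rng.random()                      # random scaling factor for this generation
    return np.clip(k * (lower + upper) - population, lower, upper)

pop = rng.uniform(-500.0, 500.0, size=(5, 4))   # 5 candidates, 4 coefficients each
opp = generalized_opposites(pop, -500.0, 500.0)
```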

Solis and Wets Algorithm. The well-known Solis and Wets algorithm [45] has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. It has several parameters that rule the way the candidate solution is moved and how the step size is adapted. All these parameters will be subject to parameter tuning, as for the rest of the algorithms.

MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined by the MTS algorithm [46]. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past to search large scale global optimization problems. In the same line as the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.
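A simplified sketch of the dimension-by-dimension idea behind MTS-LS1 (not the full published method): each coordinate is perturbed by the current step size SR, first downwards and then by half a step upwards, keeping whichever move improves the objective, and SR is shrunk when a complete pass brings no improvement.

```python
def mts_ls1_pass(x, objective, sr):
    """One simplified MTS-LS1 sweep over all dimensions of candidate x (in place)."""
    best = objective(x)
    improved = False
    for i in range(len(x)):
        original = x[i]
        x[i] = original - sr              # try a step down along dimension i
        value = objective(x)
        if value < best:
            best, improved = value, True
            continue
        x[i] = original + 0.5 * sr        # otherwise try half a step up
        value = objective(x)
        if value < best:
            best, improved = value, True
        else:
            x[i] = original               # restore if neither move helps
    if not improved:
        sr *= 0.5                         # shrink the step after a failed pass
    return x, sr, best
```

Repeatedly applying this pass to a coefficient vector, with the curve-fitting error as the objective, conveys how the local search explores one dimension at a time.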

Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [47] is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages are that it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and its robustness against ill-conditioned and nonseparable problems [48]. To increase the probability of finding the global optimum, the increasing population size IPOP-CMA-ES algorithm was proposed in [49]. This algorithm uses a restart method with a successively increasing population size strategy each time the stopping criterion is satisfied. This algorithm was ranked first in the "Special Session on Real-Parameter Optimization" held at the CEC 2005 Congress and has since been used with several optimization problems. For the proposed experiments, we have considered both the CMA-ES and the IPOP-CMA-ES with several values for its sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate their convergence, has also been tested with both algorithms.
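For reference, the widely used pycma package exposes both the plain strategy and the IPOP restart scheme; the call below is only a sketch (the quadratic placeholder objective, starting point, and σ value are assumptions) and is not the implementation or configuration used in the paper.

```python
import numpy as np
import cma  # pip install cma

def placeholder_objective(coeffs):
    """Replace with the curve-fitting error of a candidate coefficient vector."""
    c = np.asarray(coeffs)
    return float(np.sum(c ** 2))

x0 = 100.0 * np.ones(4)     # starting point for a hypothetical 4-coefficient model
sigma0 = 1.0                # initial step size (one of the values tuned in Table 1)

# restarts > 0 with incpopsize = 2 reproduces the IPOP scheme (population size
# doubled at every restart); the bounds mimic the [-500, 500] search interval.
res = cma.fmin(placeholder_objective, x0, sigma0,
               options={"bounds": [-500, 500], "verbose": -9},
               restarts=3, incpopsize=2)
print("best coefficients found:", res[0])
```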

2.3. Comparison Metric. To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination (R²). This is the proportion of variability in a data set that is accounted for by the statistical model. It provides a measure of how well future outcomes are likely to be predicted by the model. R² usually has a value between 0 and 1 (sometimes it can be less than 0), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.

R² is calculated as follows: given a reference data set X with N values x_i, each of which has an associated predicted value x'_i, the total sum of squares (TSS) and the residual sum of squares (RSS) are defined as

\mathrm{TSS} = \sum_i (x_i - \bar{x})^2, \qquad \mathrm{RSS} = \sum_i (x_i - x'_i)^2.   (15)


And R² is expressed as

R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}.   (16)

The coefficient of determination, however, does not take into account regression model complexity or number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects, we adopted the adjusted coefficient of determination (R̄²) [50], which takes into account the complexity of the regression model and the number of observations. R̄² is calculated as follows:

\bar{R}^2 = 1 - \left( \frac{\mathrm{RSS}}{\mathrm{TSS}} \times \frac{N - 1}{N - m} \right),   (17)

where N is the size of the sample set and m is the number of coefficients of the regression model. As in the case of R², closer to 1 means a better fit. We used R̄² as the fitness metric for all our curve-fitting calculations.

2.4. Parameter Tuning. All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices, according to the Taguchi method [51]. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring the sensitivity of the quality characteristic being investigated, in a controlled manner, to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high value of the SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall in the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as the R̄². For this category, the SN ratio estimate is defined in (18), where n denotes the total number of instances and y_1, y_2, ..., y_n the target values (the adjustment of the model in this case). Consider the following:

\mathrm{SN} = -10 \log\left( \frac{1}{n} \sum_{t=1}^{n} \frac{1}{y_t^2} \right).   (18)

This method allows the execution of a limited number of configurations and still reports significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations will depend on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those reported by the Taguchi method as the best ones from the SN ratio point of view.
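For a given algorithm configuration, the score in (18) can be computed directly from the set of adjusted R² values it obtained; a short sketch (the values below are placeholders):

```python
import numpy as np

def sn_ratio_bigger_is_better(y):
    """Signal-to-noise ratio of equation (18) for the 'bigger is better' category."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

print(sn_ratio_bigger_is_better([0.97, 0.98, 0.99]))   # placeholder adjusted R² values
```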

2.5. Statistical Validation. To validate the results obtained and the conclusions derived from them, it is very important to use the appropriate statistical tests. In this paper we conducted a thorough statistical validation that is discussed in this section.

First, the nonparametric Friedman's test [52] was used to determine if significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, then we continued with the comparison and used some post-hoc tests. In particular, we reported the p values for two nonparametric post-hoc tests: Friedman [52–54] and Wilcoxon [55]. Finally, as multiple algorithms are involved in the comparison, adjusted p values should rather be considered in order to control the familywise error rate. In our experimentation we considered Holm's procedure [54], which has been successfully used in the past in several studies [12, 56].

Apart from the p values reported by the statistical tests, we also reported the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance for each function and reported the average ranking computed through all the functions. Given the following mean performance in a benchmark of three functions for algorithms A and B, A = (0.00e+00, 1.27e+01, 3.54e−03) and B = (3.72e+01, 0.42e+00, 2.19e−07), their relative ranking would be Ranks_A = (1, 2, 2) and Ranks_B = (2, 1, 1), and thus their corresponding average rankings are R_A = 1.67 and R_B = 1.33.

(ii) nWins: this is the number of other algorithms for which each algorithm is statistically better minus the number of algorithms for which each algorithm is statistically worse, according to the Wilcoxon Signed Rank Test in a pair-wise comparison [57].

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.
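The same validation pipeline can be approximated with standard Python statistics packages. The sketch below assumes a table of adjusted R² values (one row of curves per algorithm) and combines SciPy's Friedman and Wilcoxon tests with statsmodels' Holm correction; it illustrates the procedure only and is not the analysis scripts used in this study.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical results: adjusted R^2 of four algorithms on 50 curves.
results = {name: rng.uniform(0.8, 1.0, 50)
           for name in ["IPOP-CMA-ES", "Self-Adaptive DE", "DE", "NLLS"]}

# Omnibus Friedman test: are there significant differences among the algorithms?
stat, p_friedman = friedmanchisquare(*results.values())
print("Friedman p value:", p_friedman)

# Pairwise Wilcoxon signed-rank tests against a control algorithm, Holm-corrected.
control = "IPOP-CMA-ES"
others = [name for name in results if name != control]
raw_p = [wilcoxon(results[control], results[name]).pvalue for name in others]
reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for name, p_raw, p_adj, significant in zip(others, raw_p, adjusted_p, reject):
    print(f"{control} vs {name}: raw p = {p_raw:.3g}, Holm p = {p_adj:.3g}, "
          f"significant = {significant}")
```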

3. Results and Discussion

We have conducted a thorough experimentation to elucidate whether the selected metaheuristics can be successfully used in the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curves problems and, in the case of the latter, the five different curves presented in Section 2.1.2.

In the first place, a parameter tuning of the seven selected metaheuristics has been carried out, whose results are presented in Section 3.1; then final runs with the best configuration of each algorithm are done, whose results are presented in Sections 3.2 and 3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.


3.1. Parameter Tuning of the Selected Metaheuristics. As described in Section 2.4, to conduct our experimentation we have followed a fractional design based on orthogonal matrices, according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case), yet yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of curves (10 randomly chosen curves) prior to using the selected values in the overall comparison. Table 1 contains, for each algorithm, all of its relevant parameters and the tested values; the final values were selected among these according to the measures reported by the Taguchi method.

Once the best configuration for each algorithm had been determined, we ran each of them on both synaptic curves problems, for all the models and curves, conducting 25 repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to the interval [−500, 500] to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem. As we stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the υ coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with few issues by most algorithms, as is shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with a precision of 99%. The only three algorithms not reaching that precision get trapped a few times in low quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, both in the constrained and unconstrained versions.

Due to the nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point that suggests that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem. To provide more insight into the results obtained for the AMPA open receptors curves problem, we have decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as a large validation data set. The results show that there are three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) which systematically rank among the best algorithms and in most of the cases are significantly better than the baseline algorithm, NLLS.

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function. The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE), which reach a precision as high as 99.9%. The p values reported by the statistical validation (Table 6 for unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms, except for Self-Adaptive DE and DE, according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as the results will confirm in the following sections.

Table 1: Parameter tuning of the selected metaheuristics.

GA: popSize = {100, 200, 400}; pcx = {0.1, 0.5, 0.9}; pm = {0.01, 0.05, 0.1}.
DE: popSize = {25, 50, 100}; F = {0.1, 0.5, 0.9}; CR = {0.1, 0.5, 0.9}.
Self-Adaptive DE: popSize = {25, 50, 100}; τ_F = {0.05, 0.1, 0.2}; τ_CR = {0.05, 0.1, 0.2}.
GODE: popSize = {25, 50, 100}; F = {0.1, 0.5, 0.9}; CR = {0.1, 0.5, 0.9}; godeProb = {0.2, 0.4, 0.8}.
Solis and Wets: maxSuccess = {2, 5, 10}; maxFailed = {1, 3, 5}; adjustSuccess = {2, 4, 6}; adjustFailed = {0.25, 0.5, 0.75}; delta = {0.6, 1.2, 2.4}.
MTS-LS1: (initial) SR = 50% of the search space; (min) SR = 1e−14; adjustFailed = {2, 3, 5}; adjustMin = {2.5, 5, 10}; moveLeft = {0.25, 0.5, 1}; moveRight = {0.25, 0.5, 1}.
CMA-ES: σ = {0.01, 0.1, 1}; Active CMA = {no, yes}; CMA-ES variant = {CMA-ES, IPOP-CMA-ES}.

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the glutamate concentration problem.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.90E−01       4.16       3
DE                   9.90E−01       4.16       3
GA                   9.90E−01       4.16       3
Solis and Wets       9.90E−01       4.16       3
GODE                 9.90E−01       4.16       3
IPOP-CMA-ES          9.90E−01       4.16       3
MTS-LS1              9.83E−01       4.23      −4
NLLS constrained     9.53E−01       7.91      −7
NLLS unconstrained   9.53E−01       7.91      −7

3.3.2. Results with the 8-Term Fourier Series. For the 8-term Fourier series, results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function, the attained precision has slightly decreased (from 72% for the DE algorithm to 94% for the Self-Adaptive DE), but again several metaheuristics (all the population-based algorithms, which also includes the GA) have outperformed the reference NLLS algorithm (in both constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions, compared to the 9 of the previous model). Also in this problem, the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms, for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend started in the previous two cases (a trend that resumes in the remaining two functions). Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again the IPOP-CMA-ES algorithm, which again reaches a very high precision of 99%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others, with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm in this problem (with a precision of more than 99% and the highest ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with other problems. In this case, this improvement is probably due to the small problem size, which allows an easier optimization component by component. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems) a strong attractor, and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance, IPOP-CMA-ES, Self-Adaptive DE, and DE, plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge, with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES, to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations on the fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision with this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process. Section 3.4 will discuss this issue in detail. Although the differences in terms of fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, Friedman's test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.

3.3.6. Overall Comparison. In this section we offer an overall comparison of the algorithms for the five AMPA open receptors curves problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results. From these data, it is clear to conclude that the IPOP-CMA-ES algorithm obtains the best results, with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the base of the previous work in [11]. It is also relevant to note the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, obtaining significant differences with all the algorithms and both statistical tests.

Figure 3: Example of curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, R̄² = 0.990.

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, given that the adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case, since general purpose, commonly used algorithms (such as the NLLS used in [11]) are now known to produce suboptimal results.

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        1.00E+00           1.00E+00
GA                        1.00E+00           1.00E+00
Solis and Wets            1.00E+00           1.00E+00
GODE                      1.00E+00           1.00E+00
IPOP-CMA-ES               1.00E+00           1.00E+00
MTS-LS1                   6.57E−01           8.98E−03
NLLS constrained          0.00E+00           1.47E−70
NLLS unconstrained        0.00E+00           1.47E−70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        1.00E+00            1.00E+00
GA                        1.00E+00            1.00E+00
Solis and Wets            1.00E+00            1.00E+00
GODE                      1.00E+00            1.00E+00
IPOP-CMA-ES               1.00E+00            1.00E+00
MTS-LS1                   1.00E+00            5.39E−02
NLLS constrained          0.00E+00 √          1.18E−69 √
NLLS unconstrained        0.00E+00 √          1.18E−69 √

√ means that there are statistical differences with significance level α = 0.05; * means these are adjusted p values according to Holm's method.

Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example, the glutamate initial concentration was C_0 = 4.749 × 10⁻³ mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10⁻⁶, resulting in the following equation:

\Phi_A(t) = 4.749 \times 10^{-3} \times e^{-0.33\, t / (6.577 \times 10^{-6})}.   (19)

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (different υ value) was obtained.

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptor curves requires further study. Optimization results (Tables 5 to 22) show that, when using the appropriate metaheuristic, most mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial). The five mathematical models studied are compared in Table 23, selecting for them the results provided by the best optimization technique in each case. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence of this, the question of which model is best suited in this case requires deeper analysis.

Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E−01       1.86       8
Self-Adaptive DE     9.99E−01       2.08       5
DE                   9.99E−01       2.09       5
GA                   9.35E−01       4.86       2
MTS-LS1              9.00E−01       6.04       0
GODE                 9.18E−01       6.04      −2
NLLS constrained     8.60E−01       7.04      −4
Solis and Wets       4.30E−01       7.17      −6
NLLS unconstrained   8.43E−01       7.84      −8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus    Friedman p value   Wilcoxon p value
Self-Adaptive DE      2.14E−01           2.03E−12
DE                    1.90E−01           3.25E−14
GA                    0.00E+00           5.52E−84
MTS-LS1               0.00E+00           6.31E−84
GODE                  0.00E+00           5.40E−84
NLLS constrained      0.00E+00           1.99E−83
Solis and Wets        0.00E+00           1.60E−86
NLLS unconstrained    0.00E+00           5.91E−84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus    Friedman* p value   Wilcoxon* p value
Self-Adaptive DE      3.80E−01            2.03E−12 √
DE                    3.80E−01            6.50E−14 √
GA                    0.00E+00 √          3.78E−83 √
MTS-LS1               0.00E+00 √          3.78E−83 √
GODE                  0.00E+00 √          3.78E−83 √
NLLS constrained      0.00E+00 √          5.96E−83 √
Solis and Wets        0.00E+00 √          1.28E−85 √
NLLS unconstrained    0.00E+00 √          3.78E−83 √

√ means that there are statistical differences with significance level α = 0.05; * means these are adjusted p values according to Holm's method.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice. In addition, we consider the nature of the biological process modeled (AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is neurotransmitter concentration. From Fick's second law of diffusion we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.44E−01       1.26       8
IPOP-CMA-ES          9.19E−01       1.91       6
GODE                 8.13E−01       3.46       4
DE                   7.21E−01       4.34       2
GA                   6.99E−01       4.79       0
NLLS unconstrained   6.20E−01       5.46      −2
MTS-LS1              4.16E−01       7.29      −4
NLLS constrained     3.87E−01       7.49      −6
Solis and Wets       0.00E+00       9.00      −8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
IPOP-CMA-ES               1.67E−04           5.89E−14
GODE                      0.00E+00           1.58E−83
DE                        0.00E+00           1.09E−83
GA                        0.00E+00           8.67E−82
NLLS unconstrained        0.00E+00           8.08E−83
MTS-LS1                   0.00E+00           6.43E−84
NLLS constrained          0.00E+00           6.36E−84
Solis and Wets            0.00E+00           6.21E−84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
IPOP-CMA-ES               1.67E−04 √          5.89E−14 √
GODE                      0.00E+00 √          6.32E−83 √
DE                        0.00E+00 √          5.44E−83 √
GA                        0.00E+00 √          1.73E−81 √
NLLS unconstrained        0.00E+00 √          2.42E−82 √
MTS-LS1                   0.00E+00 √          4.97E−83 √
NLLS constrained          0.00E+00 √          4.97E−83 √
Solis and Wets            0.00E+00 √          4.97E−83 √

√ means that there are statistical differences with significance level α = 0.05; * means these are adjusted p values according to Holm's method.

Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E−01       1.00       8
NLLS unconstrained   9.73E−01       2.23       6
NLLS constrained     9.31E−01       2.82       4
GA                   7.80E−01       3.95       2
GODE                 5.24E−01       5.00       0
Self-Adaptive DE     3.21E−01       6.79      −3
MTS-LS1              3.23E−01       6.85      −3
DE                   2.97E−01       7.35      −6
Solis and Wets       0.00E+00       9.00      −8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus    Friedman p value   Wilcoxon p value
NLLS unconstrained    1.34E−12           1.90E−83
NLLS constrained      0.00E+00           1.33E−83
GA                    0.00E+00           6.22E−84
GODE                  0.00E+00           6.30E−84
Self-Adaptive DE      0.00E+00           6.28E−84
MTS-LS1               0.00E+00           6.32E−84
DE                    0.00E+00           6.29E−84
Solis and Wets        0.00E+00           4.75E−111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus    Friedman p value   Wilcoxon p value
NLLS unconstrained    1.34E−12 √         4.35E−83 √
NLLS constrained      0.00E+00 √         4.35E−83 √
GA                    0.00E+00 √         4.35E−83 √
GODE                  0.00E+00 √         4.35E−83 √
Self-Adaptive DE      0.00E+00 √         4.35E−83 √
MTS-LS1               0.00E+00 √         4.35E−83 √
DE                    0.00E+00 √         4.35E−83 √
Solis and Wets        0.00E+00 √         3.80E−110 √

√ means that there are statistical differences with significance level α = 0.05.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms. Figure 4 shows an example of curve-fit for every equation, fitted using the corresponding best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

                      Mean fitness   Ranking   nWins
Self-Adaptive DE      9.97E-01       1.69       8
DE                    9.96E-01       1.86       6
IPOP-CMA-ES           9.97E-01       2.46       4
Solis and Wets        7.32E-01       4.84       2
MTS-LS1               6.27E-01       5.43       0
NLLS unconstrained    6.20E-01       5.73      -2
NLLS constrained      3.47E-01       7.25      -4
GODE                  2.25E-01       7.68      -6
GA                    1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        3.38E-01           6.11E-10
IPOP-CMA-ES               9.00E-06           9.79E-54
Solis and Wets            0.00E+00           6.31E-84
MTS-LS1                   0.00E+00           6.26E-84
NLLS unconstrained        0.00E+00           6.31E-84
NLLS constrained          0.00E+00           6.32E-84
GODE                      0.00E+00           6.04E-84
GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        3.38E-01            6.11E-10 √
IPOP-CMA-ES               1.80E-05 √          1.96E-53 √
Solis and Wets            0.00E+00 √          4.23E-83 √
MTS-LS1                   0.00E+00 √          4.23E-83 √
NLLS unconstrained        0.00E+00 √          4.23E-83 √
NLLS constrained          0.00E+00 √          4.23E-83 √
GODE                      0.00E+00 √          4.23E-83 √
GA                        0.00E+00 √          4.21E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p-values according to Holm's method.

(2) The curve-fitting calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems to be small enough not to affect the overall fitness, but it implies that the adjusted curves model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested


[Figure 4: panels (a)-(f), each plotting Open AMPA receptors versus Time (ms): (a) 8-term Gauss series (R̄² = 0.999); (b) 4-by-4 degree polynomial rational (R̄² = 0.999); (c) 2-term exponential function (R̄² = 0.997); (d) 9th degree polynomial (R̄² = 0.836); (e) 8-term Fourier series (R̄² = 0.976); (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.


Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

                      Mean fitness   Ranking   nWins
NLLS unconstrained    8.43E-01       2.44       8
Self-Adaptive DE      8.42E-01       2.50       5
DE                    8.42E-01       2.50       5
NLLS constrained      8.02E-01       2.72       2
IPOP-CMA-ES           8.26E-01       4.95       0
GA                    0.00E+00       7.48      -5
Solis and Wets        0.00E+00       7.48      -5
MTS-LS1               0.00E+00       7.48      -5
GODE                  0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE            7.42E-01           6.52E-05
DE                          7.42E-01           6.52E-05
NLLS constrained            1.08E-01           8.91E-06
IPOP-CMA-ES                 0.00E+00           5.34E-84
GA                          0.00E+00           5.92E-84
Solis and Wets              0.00E+00           5.92E-84
MTS-LS1                     0.00E+00           5.92E-84
GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE            1.00E+00            1.30E-04 √
DE                          1.00E+00            1.30E-04 √
NLLS constrained            3.25E-01            2.67E-05 √
IPOP-CMA-ES                 0.00E+00 √          4.27E-83 √
GA                          0.00E+00 √          4.27E-83 √
Solis and Wets              0.00E+00 √          4.27E-83 √
MTS-LS1                     0.00E+00 √          4.27E-83 √
GODE                        0.00E+00 √          4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p-values according to Holm's method.

when calculating the fitness value are very close to the simulation ones (Figure 4(f)). This is due to the fact that, when evaluating the fitness value, not all data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10000-point curves per simulation. Testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve-fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11]

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

                      Mean fitness   Ranking   nWins
IPOP-CMA-ES           9.48E-01       2.44       8
Self-Adaptive DE      8.21E-01       2.86       6
DE                    7.71E-01       3.62       4
NLLS unconstrained    7.80E-01       4.74       2
NLLS constrained      6.65E-01       5.46       0
GA                    5.16E-01       5.83      -2
GODE                  4.96E-01       5.93      -4
MTS-LS1               4.53E-01       6.61      -6
Solis and Wets        2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus    Friedman p value   Wilcoxon p value
Self-Adaptive DE      4.15E-08           5.01E-05
DE                    0.00E+00           2.16E-86
NLLS unconstrained    0.00E+00           4.81E-306
NLLS constrained      0.00E+00           0.00E+00
GA                    0.00E+00           0.00E+00
GODE                  0.00E+00           0.00E+00
MTS-LS1               0.00E+00           0.00E+00
Solis and Wets        0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus    Friedman* p value   Wilcoxon* p value
Self-Adaptive DE      4.15E-08 √          5.01E-05 √
DE                    0.00E+00 √          4.31E-86 √
NLLS unconstrained    0.00E+00 √          1.44E-305 √
NLLS constrained      0.00E+00 √          0.00E+00 √
GA                    0.00E+00 √          0.00E+00 √
GODE                  0.00E+00 √          0.00E+00 √
MTS-LS1               0.00E+00 √          0.00E+00 √
Solis and Wets        0.00E+00 √          0.00E+00 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p-values according to Holm's method.

for the same reasons. In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solutions.
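As a minimal illustration of the sampling step described above (not the authors' code, and assuming uniformly spaced samples), the reduction of a 10000-point simulated curve to the 100 points used in the fitness evaluation could be written in Python as:

    # Hypothetical sketch: uniformly subsample a high-resolution simulated curve
    # (e.g., 10000 points) down to n_points before curve-fitting.
    import numpy as np

    def subsample_curve(t, y, n_points=100):
        # Pick n_points evenly spaced indices over the full-resolution curve.
        idx = np.linspace(0, len(t) - 1, n_points).astype(int)
        return t[idx], y[idx]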

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve. Among the best three, the biological relationship with


Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

                                            Mean fitness   Ranking   nWins
8-term Gauss series                         9.99E-01       1.63       4
4-by-4 degree polynomial rational function  9.99E-01       1.67       2
2-term exponential function                 9.97E-01       2.73       0
8-term Fourier series                       9.44E-01       3.99      -2
9th degree polynomial                       8.43E-01       4.97      -4

the neurotransmitter concentration (following an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

    \mathrm{AMPA_{open}}(t) = 51.749 \times e^{-2.716 t} - 53.540 \times e^{-30.885 t}    (20)
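The following short Python sketch (our illustration, not part of the original study) evaluates the fitted model of (20) and locates its peak; the coefficients are those reported above for this particular synaptic configuration:

    import numpy as np

    # Coefficients of equation (20), fitted by Self-Adaptive DE for this configuration.
    a, b, c, d = 51.749, -2.716, -53.540, -30.885

    def ampa_open(t):
        # 2-term exponential model: a*exp(b*t) + c*exp(d*t), with t in ms.
        return a * np.exp(b * t) + c * np.exp(d * t)

    t = np.linspace(0.0, 5.0, 5000)
    y = ampa_open(t)
    print(f"peak of {y.max():.1f} open receptors at t = {t[np.argmax(y)]:.3f} ms")

With these coefficients the sketch reports a peak of roughly 37 open receptors at about 0.09 ms, consistent with the curve peak shown in the Figure 4 detail.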

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit to each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve currently used state-of-the-art techniques, such as the NLLS algorithm which was used in [11], we have conducted an extensive experimentation that yields significant results. First, it has been proved that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially in the case of IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen, and thus this comparison can be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to very accurately fit a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can have an application in future neuroscience research. Firstly, as explained, we have identified the best possible candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.

[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.

[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.

[4] J. Boucher, H. Kroger, and A. Sík, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.

[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.

[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.

[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.

[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.

[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.

[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.

[11] J. Montes, E. Gomez, A. Merchán-Pérez, J. DeFelipe, and J.-M. Peña, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.

[12] A. LaTorre, S. Muelas, and J. M. Peña, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.

[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.

[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.

[16] A. LaTorre, S. Muelas, J.-M. Peña, R. Santana, A. Merchán-Pérez, and J.-R. Rodríguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.

[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Peña, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.

[18] S. Muelas, J. M. Peña, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.

[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing – Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.

[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.

[21] A. LaTorre, S. Muelas, and J. M. Peña, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.

[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.

[23] S. Muelas, A. LaTorre, and J.-M. Peña, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.

[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.

[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.

[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.

[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.

[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.


[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.

[30] Centro de Supercomputacion y Visualizacion de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.

[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.

[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.

[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.

[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.

[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre–Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.

[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.

[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.

[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.

[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.

[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.

[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.

[42] R. Storn and K. V. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.

[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.

[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.

[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.

[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.

[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.

[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.

[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.

[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.

[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.

[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.

[54] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] A. LaTorre, S. Muelas, and J.-M. Peña, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.

[57] S. Muelas, J. M. Peña, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.


[Figure 1: (a) Electron microscopy image; (b) Schematic representation with labels: Neurotransmitter vesicle, Presynaptic terminal, Synaptic cleft, Active zone, Neurotransmitter diffusion, Postsynaptic density, Synaptic receptor.]

Figure 1: Chemical synapses. (a) Electron microphotograph of the cerebral cortex where three axon terminals (asterisks) establish synapses with three dendritic elements, showing clearly visible postsynaptic densities (arrows). (b) Simplified model of a chemical synapse highlighting its most significant parts.

analyzed and easily adapted and/or modified. The results of these simulations can be studied to extract general conclusions and infer basic properties of the natural processes being studied.

In the case of chemical synapses, many aspects of their behavior during neurotransmitter release and diffusion can be simulated. Simulations, however, do not provide generalized models of these series, showing only the specific values observed each time, and in this case these are subject to the stochastic nature of the Monte Carlo simulations used to produce them. Having general empirical models of these time series, such as a set of mathematical equations, could be an important achievement in the study of chemical synaptic behavior. Finding these equations, however, is not a trivial task.

Having the raw simulation data at hand, the first step in finding these equations is to identify the type of mathematical expression we are looking for. A reasonable approach to achieve this is to study the underlying physicochemical processes that take place during synapse operation (e.g., molecular diffusion and receptor kinetics) and try to analytically infer the synaptic equations from them. This is, however, sometimes not possible (as is explained in detail in Section 2.1). When this is the case, an exploratory analysis can be attempted, testing different general mathematical models against the empirical data and trying to find the one with the best adjustment. This becomes, in essence, a complex curve-fitting problem, in which a set of general equations is fitted to experimentally obtained data [11].

This problem is, in its general form, an optimization task in which different search strategies in the field of metaheuristics have been successfully applied in the past. This type of search techniques, with Evolutionary Algorithms (EAs) as one of the most relevant exponents, have shown in the last decades their ability to deal with optimization problems of many different domains, ranging from completely theoretical mathematical functions (benchmarks) [12] to real-world problems in many different domains: biology [13–17], engineering [18–21], and logistics [22–25], to name a few. Furthermore, different metaheuristics have been previously used in related curve fitting problems [26–29], which motivates the use of this type of algorithms for our particular problem.

2. Materials and Methods

2.1. Synaptic Curves. We analyzed simulations based on simplified models of excitatory synapses where AMPA receptors were present and the neurotransmitter involved was glutamate. (The AMPA receptor is an ionotropic receptor commonly found in excitatory synapses using glutamate as neurotransmitter. It is named from its specific agonist, alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid.) We based our study on a corpus of synaptic simulations very similar to the one presented in [11]. Our corpus consisted of 500 different synaptic configurations, each simulated 500 times (to incorporate stochastic variability), resulting in a total of 250000 simulations. As in [11], geometrical parameters were taken into account to incorporate synaptic variability, and simulations were performed using the MCell software in the Magerit supercomputer [30]. Each one of the 500 synaptic configurations represented a unique synapse, morphologically different from the others. Simulations studied neurotransmitter diffusion and synaptic receptor behavior in each configuration. After the synaptic simulations were performed, two separated (but related) aspects of synaptic behavior were considered:

(i) The average variation of neurotransmitter concentration in the synaptic cleft over time, after a single vesicle release.

(ii) The average variation of the total number of synaptic receptors in the open state over time, again after a single neurotransmitter vesicle release. (Molecular


kinetics of synaptic receptors follow Markov-chain-like models, with several possible states and transition probabilities depending on environmental factors and chemical reaction rate constants. The most relevant state in the AMPA receptor is the open state. It is directly related with the electrical synaptic response [31].)

These two aspects were plotted as time-series curves for each of the 500 synaptic configurations, resulting in a total of 1000 curves (500 glutamate concentration curves and 500 open AMPA receptor curves), each of them being an average of 500 simulations of the same synaptic configuration. An example of these curves can be seen in Figure 2. The curves obtained were consistent with previous studies [32–35]. Each one of these 1000 curves represented the behavior of a specific synapse, morphologically different from the others. Each curve became a separate curve-fitting problem, addressed using the metaheuristics described in Section 2.2.

2.1.1. Glutamate Concentration Curves. In the case of the glutamate concentration (Figure 2(a)), all curves showed an initial peak (maximum concentration in the moment of vesicle liberation) followed by what very closely resembled a typical exponential decay. To understand this curve we referred to Fick's second law of diffusion (derived by the German physiologist Adolf Fick in 1855), which predicts how diffusion causes the concentration of a substance to change with time. For two or more dimensions, this law states that

    \frac{\partial \phi_A(r, t)}{\partial t} = D_A \nabla^2 \phi_A(r, t)    (1)

where φ_A(r, t) is the concentration of a given substance A (glutamate in our case) at location r and time t, and D_A is the coefficient of diffusion of substance A. This is also called the heat equation and has no analytic solution for our specific problem, due to the synaptic three-dimensional geometry. (The heat equation can only be solved analytically for a few idealized uni-dimensional problems.) It is known, however, that its solution must follow a typical exponential decay. In exponential decay processes, the quantity N of a substance decreases in the following way:

    \frac{dN}{dt} = -\frac{1}{\tau} N    (2)

where τ is the mean lifetime, or exponential time constant, of the process. We solve (2) as follows:

    \frac{dN}{dt} = -\frac{1}{\tau} N, \qquad \frac{dN}{N} = -\frac{1}{\tau} dt    (3)

We integrate both sides:

    \ln N = -\frac{1}{\tau} t + C    (4)

We now apply exp(·) to both sides:

    e^{\ln N} = e^{-t/\tau + C}, \qquad N = e^{-t/\tau} e^{C}    (5)

Since e^C is constant, we rename it to N_0 and we finally obtain a solution to (2) in the following form:

    N(t) = N_0 e^{-t/\tau}    (6)

where N_0 is the initial quantity, since at t = 0, e^{-t/τ} = 1 for all τ ≠ 0. We know that Fick's second law of diffusion (1) must follow an exponential decay (2). Therefore, its solution must be in the form of (6). The neurotransmitter diffuses through a certain volume, the synaptic cleft. As explained, the neurotransmitter concentration φ_A(r, t) in every point of this volume changes following an exponential decay (6). We propose a new function Φ_A(t) as the average concentration of neurotransmitter in this volume, and we assume that it also follows an exponential decay. We define Φ_A(t) with the following equation:

    \Phi_A(t) = C_0 e^{-t/\tau}    (7)

where C_0 is the initial average concentration at t = 0. Since the number of neurotransmitter molecules released (vesicle size) and the volume of the synaptic cleft are known (they are part of the synaptic configuration simulation parameters), C_0 can be easily calculated beforehand. Please remember that Φ_A(t) is the average concentration of A in a defined volume, as opposed to φ_A(r, t), which is the exact concentration at position r. Φ_A(t) is the value measured in our simulation experiments (shown, e.g., in Figure 2(a)).

The value of τ in (7), however, cannot be calculated analytically. From the previously mentioned link between (1) and (2), we deduce a connection between each side of both equations, concluding that

(i) ∂φ_A(r, t)/∂t relates to dN/dt (left side of both equations);

(ii) D_A ∇²φ_A(r, t) relates to -(1/τ)N (right side of both equations).

From the second item we deduce that the τ constant should be somehow calculated using D_A and some unknown aspects of the synaptic geometry, which are factored inside the ∇²φ_A(r, t) term that cannot be calculated analytically for an arbitrary 3D problem. From τ and D_A we derive a new coefficient ν, so that

    \frac{1}{\tau} = D_A \frac{1}{\nu}    (8)

The coefficient ν represents the effects of geometry, separated from the substance diffusion coefficient (D_A). For our problem, ν cannot be determined analytically and needs to be estimated using numerical and/or statistical methods. We incorporate ν into (7) and obtain

    \Phi_A(t) = C_0 e^{-D_A t / \nu}    (9)


[Figure 2: (a) Glutamate concentration curve: Glutamate concentration (mol·L⁻¹) versus Time (ms); (b) Open AMPA receptors curve: Open AMPA receptors versus Time (ms).]

Figure 2: Examples of synaptic curves after a neurotransmitter vesicle release. (a) Variation of neurotransmitter (in this case glutamate) concentration through time, inside the synaptic cleft. (b) Variation in the total number of synaptic receptors (in this case AMPA) in the open state through time.

which is the final expression of our neurotransmitter (glutamate) concentration time series model. The problem of estimating the value of ν still remains, and for this we need to return to our synaptic simulations. Using curve-fitting methods, we can find the values of ν that most accurately model the results observed during these experiments.
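As a minimal sketch of this estimation step (not the authors' implementation; the constants C0 and D_A and the generated data below are hypothetical placeholders for one simulated curve), a standard nonlinear least-squares routine of the kind used as the baseline in this work can fit the single free coefficient ν of (9):

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng()
    C0 = 1.2e-3    # hypothetical initial average concentration (mol/L) at t = 0
    D_A = 0.33     # hypothetical diffusion coefficient of glutamate

    def glutamate_model(t, nu):
        # Equation (9): Phi_A(t) = C0 * exp(-D_A * t / nu)
        return C0 * np.exp(-D_A * t / nu)

    # Placeholder "simulation" data: a noisy exponential decay sampled at 100 points.
    t = np.linspace(0.0, 5.0, 100)
    phi_avg = glutamate_model(t, 0.05) + rng.normal(0.0, 1e-5, t.size)

    nu_est, _ = curve_fit(glutamate_model, t, phi_avg, p0=[0.1])
    print("estimated nu:", nu_est[0])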

2.1.2. AMPA Open Receptors Curves. All open AMPA receptors curves showed a rapid climb to a single peak, followed by a long tail slowly descending towards 0 (as shown in Figure 2(b)). The initial section of the curve (containing the rapid ascent, peak, and descent) contains the most relevant information, that is, the amplitude of the peak and the time it takes to reach it. A general exploration of the simulation data showed that these two characteristics are highly dependent on the neurotransmitter concentration and synaptic geometry.

As we have already established, neurotransmitter concentration cannot be predicted analytically (Fick's second law cannot be solved analytically for 2 or more dimensions); therefore, open AMPA receptors curves will not be either (since they are dependent on neurotransmitter concentration). Moreover, synaptic geometry variations can be extremely diverse, making the task of inferring a mathematical equation much harder.

Therefore, we adopted the exploratory analysis approach, considering a set of selected general mathematical models and testing them against the simulation data. We based this set on a previous analysis presented in [11]. Our work differs from this study in that, in [11], this problem was tackled only by means of a single approximate technique, which we use here as a baseline to compare different global optimization heuristics. We also performed a subsequent analysis of the optimization results, taking into consideration the relevant biological implications of our findings and proposing generalized synaptic curve models. From [11], however, we selected the best five curves as a starting point:

(i) 4-by-4 degree polynomial rational function (9 coefficients):

    y = \frac{p_1 x^4 + p_2 x^3 + p_3 x^2 + p_4 x + p_5}{x^4 + q_1 x^3 + q_2 x^2 + q_3 x + q_4}    (10)

(ii) 8-term Fourier series (18 coefficients):

    y = a_0 + \sum_{i=1}^{8} \left( a_i \cos(i x w) + b_i \sin(i x w) \right)    (11)

(iii) 8-term Gauss series (25 coefficients):

    y = a_0 + \sum_{i=1}^{8} a_i e^{-((x - b_i)/c_i)^2}    (12)

(iv) 2-term exponential function (4 coefficients):

    y = a e^{b x} + c e^{d x}    (13)

(v) 9th degree polynomial (10 coefficients):

    y = p_0 + \sum_{i=1}^{9} p_i x^i    (14)

To decide which of these curves best serves our purpose, we need to test how well each of them can model the experimental data we observed during simulations. We need, therefore, to fit every curve to the data and see how they perform.
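As an illustration only (not the paper's code), the candidate models above can be expressed as Python functions of a time array x and a flat coefficient vector, which is the representation that a curve-fitting optimizer manipulates; a few of them are sketched below, with function names of our own choosing:

    import numpy as np

    def rational_4x4(x, coeffs):
        # Equation (10): 9 coefficients p1..p5, q1..q4.
        p1, p2, p3, p4, p5, q1, q2, q3, q4 = coeffs
        num = p1 * x**4 + p2 * x**3 + p3 * x**2 + p4 * x + p5
        den = x**4 + q1 * x**3 + q2 * x**2 + q3 * x + q4
        return num / den

    def exp2(x, coeffs):
        # Equation (13): y = a*exp(b*x) + c*exp(d*x), 4 coefficients.
        a, b, c, d = coeffs
        return a * np.exp(b * x) + c * np.exp(d * x)

    def gauss8(x, coeffs):
        # Equation (12): a0 plus 8 Gaussian terms, 25 coefficients in total.
        a0, terms = coeffs[0], np.reshape(coeffs[1:], (8, 3))
        return a0 + sum(a * np.exp(-((x - b) / c) ** 2) for a, b, c in terms)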


2.2. Algorithms Used in the Comparison. This section reviews the considered algorithms for the previously described curve-fitting problem, including the reference baseline algorithm (Nonlinear Least Squares) used in previous studies [11] and several metaheuristics (five Evolutionary Algorithms and two local searches) whose performance will be compared in Section 3.

2.2.1. Baseline Algorithm: Nonlinear Least Squares. The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB [38] or R [39]. Loosely speaking, this algorithm tries to approximate the model by a linear one in the first place, to further refine the parameters by successive iterations. This method was successfully used in a seminal work [11] and thus has been selected as the baseline algorithm to test the performance of several metaheuristics, which will be reviewed in the following sections.

2.2.2. Selected Metaheuristics

Genetic Algorithm. Genetic Algorithms (GAs) [40] are one of the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of their operators makes it possible to use GAs both in the continuous and the discrete domains. For our experiments, we have used a Real-Coded GA that uses the BLX-α crossover with α = 0.5 and Gaussian mutation [41]. Different population sizes and operator probabilities were tested, as described in Section 3.
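A compact sketch of the two operators named above (BLX-α crossover with α = 0.5 and Gaussian mutation) follows; it is our illustration of the standard operators, not the exact implementation used in the experiments, and pm and sigma are hypothetical settings:

    import numpy as np

    rng = np.random.default_rng()

    def blx_alpha(parent1, parent2, alpha=0.5):
        # Each offspring gene is drawn uniformly from the parents' interval,
        # extended by alpha times its width on both sides.
        lo, hi = np.minimum(parent1, parent2), np.maximum(parent1, parent2)
        width = hi - lo
        return rng.uniform(lo - alpha * width, hi + alpha * width)

    def gaussian_mutation(individual, pm=0.05, sigma=0.1):
        # With probability pm per gene, add zero-mean Gaussian noise.
        mask = rng.random(individual.shape) < pm
        return individual + mask * rng.normal(0.0, sigma, individual.shape)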

Differential Evolution. Differential Evolution (DE) [42] is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it only has three free parameters to configure) have gained DE a lot of popularity in recent years, and it is being used both in its canonical form and in more advanced flavors (as we will see below) in many different complex optimization scenarios. For our experiments, we have considered a DE algorithm with exponential crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), F, and CR.
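For concreteness, a minimal DE/rand/1 loop with exponential crossover is sketched below. This is our own simplified illustration, under the assumption that fitness() returns the adjusted R² of a coefficient vector (to be maximized); it is not the exact configuration used in the paper:

    import numpy as np

    rng = np.random.default_rng()

    def exponential_crossover(target, mutant, CR):
        # Copy a contiguous (circular) block of mutant genes, starting at a random
        # position, while successive random draws stay below CR (at least one gene).
        trial, D = target.copy(), target.size
        start = j = rng.integers(D)
        while True:
            trial[j] = mutant[j]
            j = (j + 1) % D
            if j == start or rng.random() >= CR:
                break
        return trial

    def de(fitness, bounds, NP=50, F=0.5, CR=0.9, generations=200):
        lo, hi = np.array(bounds, dtype=float).T
        D = lo.size
        pop = rng.uniform(lo, hi, (NP, D))
        fit = np.array([fitness(x) for x in pop])
        for _ in range(generations):
            for i in range(NP):
                idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
                a, b, c = pop[idx]
                mutant = np.clip(a + F * (b - c), lo, hi)        # DE/rand/1 mutation
                trial = exponential_crossover(pop[i], mutant, CR)
                f_trial = fitness(trial)
                if f_trial > fit[i]:                             # maximize adjusted R^2
                    pop[i], fit[i] = trial, f_trial
        return pop[np.argmax(fit)], fit.max()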

Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm proposed in [43] does not consider fixed F and CR parameters, but instead allows the algorithm to adjust their values with certain probability, controlled by τ_F and τ_CR, respectively.

Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. In the Generalized Opposition-Based Differential Evolution (GODE) algorithm [44], the Opposition-Based Learning concept is used, which basically means that, with some probability, the algorithm evaluates not only the solutions in the current population but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter to control the probability of applying the opposite search.

Solis and Wets Algorithm. The well-known Solis and Wets algorithm [45] has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. It has several parameters that rule the way the candidate solution is moved and how the step size is adapted. All these parameters will be subject to parameter tuning, as for the rest of the algorithms.

MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined by the MTS algorithm [46]. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past on large scale global optimization problems. In the same line as the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.

Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [47] is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages are that it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and its robustness against ill-conditioned and nonseparable problems [48]. To increase the probability of finding the global optimum, the increasing population size IPOP-CMA-ES algorithm was proposed in [49]. This algorithm uses a restart method with a successively increasing population size strategy each time the stopping criterion is satisfied. This algorithm was ranked first in the "Special Session on Real-Parameter Optimization" held at the CEC 2005 Congress and has since been used on several optimization problems. For the proposed experiments, we have considered both CMA-ES and IPOP-CMA-ES, with several values for their sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate their convergence, has also been tested with both algorithms.

2.3. Comparison Metric. To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination (R²). This is the proportion of variability in a data set that is accounted for by the statistical model. It provides a measure of how well future outcomes are likely to be predicted by the model. R² usually has a value between 0 and 1 (sometimes it can be less than 0), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.

R² is calculated as follows: given a reference data set X with N values x_i, each of which has an associated predicted value x'_i, the total sum of squares (TSS) and the residual sum of squares (RSS) are defined as

    \mathrm{TSS} = \sum_i (x_i - \bar{x})^2, \qquad \mathrm{RSS} = \sum_i (x_i - x'_i)^2    (15)


And R² is expressed as

    R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}    (16)

The coefficient of determination, however, does not take into account regression model complexity or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects, we adopted the adjusted coefficient of determination (R̄²) [50], which takes into account the complexity of the regression model and the number of observations. R̄² is calculated as follows:

    \bar{R}^2 = 1 - \left( \frac{\mathrm{RSS}}{\mathrm{TSS}} \times \frac{N - 1}{N - m} \right)    (17)

where N is the size of the sample set and m is the number of coefficients of the regression model. As in the case of R², closer to 1 means a better fit. We used R̄² as the fitness metric for all our curve-fitting calculations.
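The metric can be computed directly from its definition; the following short Python sketch (ours, not the authors' code) implements equations (15)-(17):

    import numpy as np

    def adjusted_r2(y_ref, y_pred, m):
        # m is the number of coefficients of the regression model.
        y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
        N = y_ref.size
        tss = np.sum((y_ref - y_ref.mean()) ** 2)   # total sum of squares, eq. (15)
        rss = np.sum((y_ref - y_pred) ** 2)         # residual sum of squares, eq. (15)
        return 1.0 - (rss / tss) * (N - 1) / (N - m)  # eq. (17)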

2.4. Parameter Tuning. All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices, according to the Taguchi method [51]. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring the sensitivity of the quality characteristic being investigated, in a controlled manner, to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high value of the SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall in the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as the R̄². For this category, the SN ratio estimate is defined in (18), where n denotes the total number of instances and y_1, y_2, ..., y_n the target values (the adjustment of the model in this case). Consider the following:

    \mathrm{SN} = -10 \log_{10} \left( \frac{1}{n} \sum_{t=1}^{n} \frac{1}{y_t^2} \right)    (18)
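A small Python sketch of this estimate (our illustration, assuming the standard larger-is-better form of the Taguchi SN ratio), applied to a set of adjusted-R² values obtained with one parameter configuration:

    import numpy as np

    def sn_ratio_larger_is_better(y):
        # Equation (18): SN = -10 * log10( (1/n) * sum(1 / y_t^2) ).
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    print(sn_ratio_larger_is_better([0.97, 0.99, 0.95]))  # e.g., three repetitions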

This method allows the execution of a limited number of configurations and still reports significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations will depend on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those reported by the Taguchi method as the best ones from the SN ratio point of view.

2.5. Statistical Validation. To validate the results obtained and the conclusions derived from them, it is very important to use the appropriate statistical tests for this purpose. In this paper we conducted a thorough statistical validation, which is discussed in this section.

First, the nonparametric Friedman's test [52] was used to determine if significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, then we continued with the comparison and used some post-hoc tests. In particular, we reported the p values for two nonparametric post-hoc tests: the Friedman [52–54] and Wilcoxon [55] tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should rather be considered in order to control the family-wise error rate. In our experimentation we considered Holm's procedure [54], which has been successfully used in the past in several studies [12, 56].

Apart from the p values reported by the statistical tests, we also reported the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance for each function and reported the average ranking computed through all the functions. Given the following mean performance in a benchmark of three functions for algorithms A and B, A = (0.00e+00, 1.27e+01, 3.54e-03) and B = (3.72e+01, 0.42e+00, 2.19e-07), their relative rankings would be Ranks_A = (1, 2, 2) and Ranks_B = (2, 1, 1), and thus their corresponding average rankings are R_A = 1.67 and R_B = 1.33 (a short sketch of this computation follows the list).

(ii) nWins: this is the number of other algorithms for which each algorithm is statistically better, minus the number of algorithms for which each algorithm is statistically worse, according to the Wilcoxon Signed Rank Test in a pair-wise comparison [57].
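The two measures can be computed as in the following sketch (our illustration, with hypothetical input shapes): average rankings from a problems-by-algorithms matrix of mean fitness values, and nWins from pair-wise Wilcoxon tests on per-curve fitness samples:

    import numpy as np
    from scipy.stats import rankdata, wilcoxon

    def average_rankings(mean_fitness):
        # mean_fitness: (n_problems, n_algorithms); rank 1 = best (highest fitness).
        ranks = np.vstack([rankdata(-row) for row in mean_fitness])
        return ranks.mean(axis=0)

    def n_wins(samples, alpha=0.05):
        # samples: list of per-curve fitness arrays, one entry per algorithm.
        k = len(samples)
        wins = np.zeros(k, dtype=int)
        for i in range(k):
            for j in range(k):
                if i == j:
                    continue
                if wilcoxon(samples[i], samples[j]).pvalue < alpha:
                    wins[i] += 1 if np.median(samples[i]) > np.median(samples[j]) else -1
        return wins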

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.

3. Results and Discussion

We have conducted a thorough experimentation to elucidate if the selected metaheuristics can be successfully used in the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curves problems and, in the case of the latter, the five different curves presented in Section 2.1.2.

In the first place, a parameter tuning of the seven selected metaheuristics has been carried out, whose results are presented in Section 3.1; then final runs with the best configuration of each algorithm are done, whose results are presented in Sections 3.2-3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.


3.1. Parameter Tuning of the Selected Metaheuristics. As described in Section 2.4, to conduct our experimentation we have followed a fractional design based on orthogonal matrices, according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case), yet yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of curves (10 randomly chosen curves) prior to using the selected values in the overall comparison. Table 1 contains, for each algorithm, all its relevant parameters, highlighting in bold the final selected values among all the tested ones, according to the measures reported by the Taguchi method.

Once the best configuration for each algorithm has been determined, we ran each of them on both synaptic curves problems, for all the models and curves, conducting 25 repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to the interval [-500, 500] to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem. As we stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the ν coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with little issue by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with a precision of 99%. The only three algorithms not reaching that precision get trapped a few times in low quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, both in the constrained and unconstrained versions.

Due to the nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point that suggests that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem. To provide more insight into the results obtained for the AMPA open receptors curves problem, we have decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as a large validation data set. The results show that there are three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) which systematically rank among the best algorithms and, in most of the cases, are significantly better than the baseline algorithm, NLLS.

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function. The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the

Table 1: Parameter tuning of the selected metaheuristics.

Parameter values of GA
  popSize: 100, 200, 400
  pcx: 0.1, 0.5, 0.9
  pm: 0.01, 0.05, 0.1

Parameter values of DE
  popSize: 25, 50, 100
  F: 0.1, 0.5, 0.9
  CR: 0.1, 0.5, 0.9

Parameter values of Self-Adaptive DE
  popSize: 25, 50, 100
  τ_F: 0.05, 0.1, 0.2
  τ_CR: 0.05, 0.1, 0.2

Parameter values of GODE
  popSize: 25, 50, 100
  F: 0.1, 0.5, 0.9
  CR: 0.1, 0.5, 0.9
  godeProb: 0.2, 0.4, 0.8

Parameter values of Solis and Wets
  maxSuccess: 2, 5, 10
  maxFailed: 1, 3, 5
  adjustSuccess: 2, 4, 6
  adjustFailed: 0.25, 0.5, 0.75
  delta: 0.6, 1.2, 2.4

Parameter values of MTS-LS1
  (initial) SR: 50% of the search space
  (min) SR: 1e-14
  adjustFailed: 2, 3, 5
  adjustMin: 2.5, 5, 10
  moveLeft: 0.25, 0.5, 1
  moveRight: 0.25, 0.5, 1

Parameter values of CMA-ES
  σ: 0.01, 0.1, 1
  Active CMA: no, yes
  CMA-ES variant: CMA-ES, IPOP-CMA-ES

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the glutamate concentration problem.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE       9.90E-01       4.16       3
DE                     9.90E-01       4.16       3
GA                     9.90E-01       4.16       3
Solis and Wets         9.90E-01       4.16       3
GODE                   9.90E-01       4.16       3
IPOP-CMA-ES            9.90E-01       4.16       3
MTS-LS1                9.83E-01       4.23      -4
NLLS constrained       9.53E-01       7.91      -7
NLLS unconstrained     9.53E-01       7.91      -7

As can be observed, the average attained precision is very high for most of the algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE), which reach a precision as high as 99.9%. The p values reported by the statistical validation (Table 6 for unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as the results will confirm in the following sections.

332 Results with the 8-Term Fourier Series For the 8-termFourier series results are similar in terms of the dominantalgorithms (see Table 8) In this case the Self-Adaptive DEobtained the highest ranking number of wins and averageprecision followed by IPOP-CMA-ES and the two remainingDE alternatives With this function the attained precisionhas slightly decreased (from 72 of the DE algorithm to94of the Self-AdaptiveDE) but again severalmetaheuristics(all the population-based algorithms which also includes theGA) have outperformed the reference NLLS algorithm (inboth constrained and unconstrained versions) This could bedue to the highest complexity of the problem (18 dimensionscompared to the 9 of the previous model) Also in thisproblem the performance of the two local searches (Solisand Wets and MTS-LS1) importantly degrades and this willalso be a trend for the remainder problems The statisticalvalidation whose results are presented in Tables 9 and10 found significant differences between the Self-AdaptiveDE algorithm and all the other algorithms for both tests(Friedman and Wilcoxon)

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend started in the previous two cases and followed in the remaining two functions. Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which again reaches a very high precision of 99%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm in this problem (with a precision of more than 99% and the best ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with the other problems. In this case, the improvement is probably due to the small problem size, which allows an easier component-by-component optimization. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems) a strong attractor and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance, IPOP-CMA-ES, Self-Adaptive DE, and DE, plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge to the same attractor (with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES), and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision with this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process. Section 3.4 will discuss this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, Friedman's test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.

3.3.6. Overall Comparison. In this section, we offer an overall comparison of the algorithms for the five AMPA open receptors curves problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results.

[Figure 3: glutamate concentration (mol·L−1) versus time (ms).]

Figure 3: Example of a curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, R² = 0.990.

From these data, it is clear that the IPOP-CMA-ES algorithm obtains the best results, with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also relevant is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with all the algorithms under both statistical tests.

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, provided that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case, since general purpose, commonly used algorithms (such as the NLLS used in [11]) are now known to produce suboptimal results.

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                          1.00E+00           1.00E+00
GA                          1.00E+00           1.00E+00
Solis and Wets              1.00E+00           1.00E+00
GODE                        1.00E+00           1.00E+00
IPOP-CMA-ES                 1.00E+00           1.00E+00
MTS-LS1                     6.57E-01           8.98E-03
NLLS constrained            0.00E+00           1.47E-70
NLLS unconstrained          0.00E+00           1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                          1.00E+00            1.00E+00
GA                          1.00E+00            1.00E+00
Solis and Wets              1.00E+00            1.00E+00
GODE                        1.00E+00            1.00E+00
IPOP-CMA-ES                 1.00E+00            1.00E+00
MTS-LS1                     1.00E+00            5.39E-02
NLLS constrained            0.00E+00 √          1.18E-69 √
NLLS unconstrained          0.00E+00 √          1.18E-69 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example, the glutamate initial concentration was C_0 = 4.749 × 10⁻³ mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10⁻⁶, resulting in the following equation:

\Phi_A(t) = 4.749 \times 10^{-3} \, e^{-0.33\, t / (6.577 \times 10^{-6})}    (19)

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptor curves requires further study. The optimization results (Tables 5 to 22) show that, when using the appropriate metaheuristic, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial).


Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES            9.99E-01       1.86       8
Self-Adaptive DE       9.99E-01       2.08       5
DE                     9.99E-01       2.09       5
GA                     9.35E-01       4.86       2
MTS-LS1                9.00E-01       6.04       0
GODE                   9.18E-01       6.04      -2
NLLS constrained       8.60E-01       7.04      -4
Solis and Wets         4.30E-01       7.17      -6
NLLS unconstrained     8.43E-01       7.84      -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE       2.14E-01           2.03E-12
DE                     1.90E-01           3.25E-14
GA                     0.00E+00           5.52E-84
MTS-LS1                0.00E+00           6.31E-84
GODE                   0.00E+00           5.40E-84
NLLS constrained       0.00E+00           1.99E-83
Solis and Wets         0.00E+00           1.60E-86
NLLS unconstrained     0.00E+00           5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE       3.80E-01            2.03E-12 √
DE                     3.80E-01            6.50E-14 √
GA                     0.00E+00 √          3.78E-83 √
MTS-LS1                0.00E+00 √          3.78E-83 √
GODE                   0.00E+00 √          3.78E-83 √
NLLS constrained       0.00E+00 √          5.96E-83 √
Solis and Wets         0.00E+00 √          1.28E-85 √
NLLS unconstrained     0.00E+00 √          3.78E-83 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

The five mathematical models studied are compared in Table 23, selecting for each of them the results provided by the best optimization technique. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice.

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE       9.44E-01       1.26       8
IPOP-CMA-ES            9.19E-01       1.91       6
GODE                   8.13E-01       3.46       4
DE                     7.21E-01       4.34       2
GA                     6.99E-01       4.79       0
NLLS unconstrained     6.20E-01       5.46      -2
MTS-LS1                4.16E-01       7.29      -4
NLLS constrained       3.87E-01       7.49      -6
Solis and Wets         0.00E+00       9.00      -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
IPOP-CMA-ES                 1.67E-04           5.89E-14
GODE                        0.00E+00           1.58E-83
DE                          0.00E+00           1.09E-83
GA                          0.00E+00           8.67E-82
NLLS unconstrained          0.00E+00           8.08E-83
MTS-LS1                     0.00E+00           6.43E-84
NLLS constrained            0.00E+00           6.36E-84
Solis and Wets              0.00E+00           6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
IPOP-CMA-ES                 1.67E-04 √          5.89E-14 √
GODE                        0.00E+00 √          6.32E-83 √
DE                          0.00E+00 √          5.44E-83 √
GA                          0.00E+00 √          1.73E-81 √
NLLS unconstrained          0.00E+00 √          2.42E-82 √
MTS-LS1                     0.00E+00 √          4.97E-83 √
NLLS constrained            0.00E+00 √          4.97E-83 √
Solis and Wets              0.00E+00 √          4.97E-83 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

In addition, we must consider the nature of the biological process modeled (AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion, we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms.


Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES            9.99E-01       1.00       8
NLLS unconstrained     9.73E-01       2.23       6
NLLS constrained       9.31E-01       2.82       4
GA                     7.80E-01       3.95       2
GODE                   5.24E-01       5.00       0
Self-Adaptive DE       3.21E-01       6.79      -3
MTS-LS1                3.23E-01       6.85      -3
DE                     2.97E-01       7.35      -6
Solis and Wets         0.00E+00       9.00      -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained     1.34E-12           1.90E-83
NLLS constrained       0.00E+00           1.33E-83
GA                     0.00E+00           6.22E-84
GODE                   0.00E+00           6.30E-84
Self-Adaptive DE       0.00E+00           6.28E-84
MTS-LS1                0.00E+00           6.32E-84
DE                     0.00E+00           6.29E-84
Solis and Wets         0.00E+00           4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained     1.34E-12 √         4.35E-83 √
NLLS constrained       0.00E+00 √         4.35E-83 √
GA                     0.00E+00 √         4.35E-83 √
GODE                   0.00E+00 √         4.35E-83 √
Self-Adaptive DE       0.00E+00 √         4.35E-83 √
MTS-LS1                0.00E+00 √         4.35E-83 √
DE                     0.00E+00 √         4.35E-83 √
Solis and Wets         0.00E+00 √         3.80E-110 √

√ means that there are statistical differences with significance level α = 0.05.

Figure 4 shows an example of curve fit for every equation, fitted using the best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE       9.97E-01       1.69       8
DE                     9.96E-01       1.86       6
IPOP-CMA-ES            9.97E-01       2.46       4
Solis and Wets         7.32E-01       4.84       2
MTS-LS1                6.27E-01       5.43       0
NLLS unconstrained     6.20E-01       5.73      -2
NLLS constrained       3.47E-01       7.25      -4
GODE                   2.25E-01       7.68      -6
GA                     1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                          3.38E-01           6.11E-10
IPOP-CMA-ES                 9.00E-06           9.79E-54
Solis and Wets              0.00E+00           6.31E-84
MTS-LS1                     0.00E+00           6.26E-84
NLLS unconstrained          0.00E+00           6.31E-84
NLLS constrained            0.00E+00           6.32E-84
GODE                        0.00E+00           6.04E-84
GA                          0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                          3.38E-01            6.11E-10 √
IPOP-CMA-ES                 1.80E-05 √          1.96E-53 √
Solis and Wets              0.00E+00 √          4.23E-83 √
MTS-LS1                     0.00E+00 √          4.23E-83 √
NLLS unconstrained          0.00E+00 √          4.23E-83 √
NLLS constrained            0.00E+00 √          4.23E-83 √
GODE                        0.00E+00 √          4.23E-83 √
GA                          0.00E+00 √          4.21E-83 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

(2) The curve fitted with the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems small enough not to affect the overall fitness, but it implies that the adjusted curves model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data.

[Figure 4: open AMPA receptors versus time (ms), one panel per fitted model: (a) 8-term Gauss series (R² = 0.999); (b) 4-by-4 degree polynomial rational (R² = 0.999); (c) 2-term exponential function (R² = 0.997); (d) 9th degree polynomial (R² = 0.836); (e) 8-term Fourier series (R² = 0.976); (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All panels show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.


Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

Algorithm            Mean fitness   Ranking   nWins
NLLS unconstrained     8.43E-01       2.44       8
Self-Adaptive DE       8.42E-01       2.50       5
DE                     8.42E-01       2.50       5
NLLS constrained       8.02E-01       2.72       2
IPOP-CMA-ES            8.26E-01       4.95       0
GA                     0.00E+00       7.48      -5
Solis and Wets         0.00E+00       7.48      -5
MTS-LS1                0.00E+00       7.48      -5
GODE                   0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE              7.42E-01           6.52E-05
DE                            7.42E-01           6.52E-05
NLLS constrained              1.08E-01           8.91E-06
IPOP-CMA-ES                   0.00E+00           5.34E-84
GA                            0.00E+00           5.92E-84
Solis and Wets                0.00E+00           5.92E-84
MTS-LS1                       0.00E+00           5.92E-84
GODE                          0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE              1.00E+00            1.30E-04 √
DE                            1.00E+00            1.30E-04 √
NLLS constrained              3.25E-01            2.67E-05 √
IPOP-CMA-ES                   0.00E+00 √          4.27E-83 √
GA                            0.00E+00 √          4.27E-83 √
Solis and Wets                0.00E+00 √          4.27E-83 √
MTS-LS1                       0.00E+00 √          4.27E-83 √
GODE                          0.00E+00 √          4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

The numerical results are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulated ones (Figure 4(f)). This is due to the fact that not all data points obtained during simulation are used when evaluating the fitness value. The high-resolution simulations on which this work is based produce 10000-point curves per simulation. Testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve fitting, reducing the number of points per curve to 100 (this subsampling step is sketched after this list). An identical sampling procedure was performed in [11]

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES            9.48E-01       2.44       8
Self-Adaptive DE       8.21E-01       2.86       6
DE                     7.71E-01       3.62       4
NLLS unconstrained     7.80E-01       4.74       2
NLLS constrained       6.65E-01       5.46       0
GA                     5.16E-01       5.83      -2
GODE                   4.96E-01       5.93      -4
MTS-LS1                4.53E-01       6.61      -6
Solis and Wets         2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE       4.15E-08           5.01E-05
DE                     0.00E+00           2.16E-86
NLLS unconstrained     0.00E+00           4.81E-306
NLLS constrained       0.00E+00           0.00E+00
GA                     0.00E+00           0.00E+00
GODE                   0.00E+00           0.00E+00
MTS-LS1                0.00E+00           0.00E+00
Solis and Wets         0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE       4.15E-08 √          5.01E-05 √
DE                     0.00E+00 √          4.31E-86 √
NLLS unconstrained     0.00E+00 √          1.44E-305 √
NLLS constrained       0.00E+00 √          0.00E+00 √
GA                     0.00E+00 √          0.00E+00 √
GODE                   0.00E+00 √          0.00E+00 √
MTS-LS1                0.00E+00 √          0.00E+00 √
Solis and Wets         0.00E+00 √          0.00E+00 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

for the same reasons. In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.
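The brief sketch below illustrates the subsampling step mentioned in point (4): a high-resolution simulated curve is reduced to 100 points before fitting. The evenly spaced selection and the synthetic curve are assumptions made for illustration; the original study does not specify the exact sampling scheme. It also makes clear why an overfitted oscillating model can look good on the 100 tested points yet misbehave between them, which is what the visual check in Figure 4(f) exposes.

```python
import numpy as np

def subsample_curve(t, y, n_points=100):
    """Reduce a high-resolution simulated curve (e.g., 10000 points)
    to n_points evenly spaced samples before curve fitting."""
    idx = np.linspace(0, len(t) - 1, n_points).round().astype(int)
    return t[idx], y[idx]

# Illustrative use: a 10000-point simulated AMPA curve reduced to 100 points.
t_full = np.linspace(0.0, 5.0, 10000)
y_full = 51.7 * np.exp(-2.7 * t_full) - 53.5 * np.exp(-30.9 * t_full)
t_fit, y_fit = subsample_curve(t_full, y_full)
print(t_fit.shape, y_fit.shape)   # (100,) (100,)
```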

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve.


Table 23: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

Model                                        Mean fitness   Ranking   nWins
8-term Gauss series                            9.99E-01       1.63       4
4-by-4 degree polynomial rational function     9.99E-01       1.67       2
2-term exponential function                    9.97E-01       2.73       0
8-term Fourier series                          9.44E-01       3.99      -2
9th degree polynomial                          8.43E-01       4.97      -4

Among the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = −2.716, c = −53.540, and d = −30.885, resulting in the following equation (also plotted in Figure 4(c)):

AMPA_{\mathrm{open}}(t) = 51.749\, e^{-2.716 t} - 53.540\, e^{-30.885 t}    (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
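As a small usage example, the snippet below simply evaluates equation (20) to recover the curve plotted in Figure 4(c); the time grid is illustrative and not part of the original study.

```python
import numpy as np

# Coefficients of equation (20), fitted by Self-Adaptive DE for one particular
# synaptic configuration (values taken from the example above).
a, b, c, d = 51.749, -2.716, -53.540, -30.885

def ampa_open(t):
    """2-term exponential model of the number of open AMPA receptors."""
    return a * np.exp(b * t) + c * np.exp(d * t)

t = np.linspace(0.0, 5.0, 100)            # time in ms (illustrative grid)
peak = np.argmax(ampa_open(t))
print("peak of ~%.1f open receptors at t = %.2f ms" % (ampa_open(t)[peak], t[peak]))
```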

4. Conclusions

In this paper, we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit for each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve on currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted an extensive experimentation that yields significant results. First, it has been shown that several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; this comparison can thus be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit very accurately a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can find application in future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.

[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.

[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.

[4] J. Boucher, H. Kroger, and A. Sík, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.

[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.

[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.

[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.

[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.

[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.

[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.

[11] J. Montes, E. Gomez, A. Merchán-Pérez, J. DeFelipe, and J.-M. Peña, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.

[12] A. LaTorre, S. Muelas, and J. M. Peña, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.

[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.

[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.

[16] A. LaTorre, S. Muelas, J.-M. Peña, R. Santana, A. Merchán-Pérez, and J.-R. Rodríguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.

[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Peña, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.

[18] S. Muelas, J. M. Peña, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.

[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing—Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.

[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.

[21] A. LaTorre, S. Muelas, and J. M. Peña, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.

[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.

[23] S. Muelas, A. LaTorre, and J.-M. Peña, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.

[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.

[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.

[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.

[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.

[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.

[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.

[30] Centro de Supercomputación y Visualización de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.

[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.

[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.

[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.

[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.

[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre—Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.

[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.

[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.

[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.

[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.

[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.

[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.

[42] R. Storn and K. V. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.

[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.

[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.

[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.

[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.

[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.

[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.

[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.

[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.

[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.

[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.

[54] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] A. LaTorre, S. Muelas, and J.-M. Peña, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.

[57] S. Muelas, J. M. Peña, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.




kinetics of synaptic receptors follow Markov-chain-like models, with several possible states and transition probabilities depending on environmental factors and chemical reaction rate constants. The most relevant state in the AMPA receptor is the open state, as it is directly related with the electrical synaptic response [31].)

These two aspects were plotted as time-series curves for each of the 500 synaptic configurations, resulting in a total of 1000 curves (500 glutamate concentration curves and 500 open AMPA receptor curves), each of them being an average of 500 simulations of the same synaptic configuration. An example of these curves can be seen in Figure 2. The curves obtained were consistent with previous studies [32–35]. Each one of these 1000 curves represented the behavior of a specific synapse, morphologically different from the others. Each curve became a separate curve-fitting problem, addressed using the metaheuristics described in Section 2.2.

2.1.1. Glutamate Concentration Curves. In the case of the glutamate concentration (Figure 2(a)), all curves showed an initial peak (maximum concentration in the moment of vesicle liberation) followed by what very closely resembled a typical exponential decay. To understand this curve we referred to Fick's second law of diffusion (derived by the German physiologist Adolf Fick in 1855), which predicts how diffusion causes the concentration of a substance to change with time. For two or more dimensions, this law states that

\frac{\partial \phi_A(r, t)}{\partial t} = D_A \nabla^2 \phi_A(r, t)    (1)

where ϕ_A(r, t) is the concentration of a given substance A (glutamate in our case) at location r and time t, and D_A is the coefficient of diffusion of substance A. This is also called the heat equation and has no analytic solution for our specific problem, due to the synaptic three-dimensional geometry. (The heat equation can only be solved analytically for a few idealized uni-dimensional problems.) It is known, however, that its solution must follow a typical exponential decay. In exponential decay processes, the quantity N of a substance decreases in the following way:

\frac{dN}{dt} = -\frac{1}{\tau} N    (2)

where τ is the mean lifetime or exponential time constant of the process. We solve (2) as follows:

\frac{dN}{dt} = -\frac{1}{\tau} N, \qquad \frac{dN}{N} = -\frac{1}{\tau}\, dt    (3)

We integrate both sides:

\ln N = -\frac{1}{\tau} t + C    (4)

We now apply the exponential function to both sides:

e^{\ln N} = e^{-t/\tau + C}, \qquad N = e^{-t/\tau} e^{C}    (5)

Since e^C is constant, we rename it N_0 and we finally obtain a solution to (2) in the following form:

N(t) = N_0 e^{-t/\tau}    (6)

where N_0 is the initial quantity, since at t = 0, e^{−t/τ} = 1 for all τ ≠ 0. We know that Fick's second law of diffusion (1) must follow an exponential decay (2). Therefore, its solution must be in the form of (6). The neurotransmitter diffuses through a certain volume, the synaptic cleft. As explained, the neurotransmitter concentration ϕ_A(r, t) at every point of this volume changes following an exponential decay (6). We propose a new function Φ_A(t) as the average concentration of neurotransmitter in this volume, and we assume that it also follows an exponential decay. We define Φ_A(t) with the following equation:

\Phi_A(t) = C_0 e^{-t/\tau}    (7)

where C_0 is the initial average concentration at t = 0. Since the number of neurotransmitter molecules released (vesicle size) and the volume of the synaptic cleft are known (they are part of the synaptic configuration simulation parameters), C_0 can easily be calculated beforehand. Please remember that Φ_A(t) is the average concentration of A in a defined volume, as opposed to ϕ_A(r, t), which is the exact concentration at position r. Φ_A(t) is the value measured in our simulation experiments (shown, e.g., in Figure 2(a)).

The value of τ in (7), however, cannot be analytically calculated. From the previously mentioned link between (1) and (2), we deduce a connection between each side of both equations, concluding that:

(i) ∂ϕ_A(r, t)/∂t relates to dN/dt (left side of both equations);
(ii) D_A ∇²ϕ_A(r, t) relates to −(1/τ)N (right side of both equations).

From the second item, we deduce that the τ constant should somehow be calculated using D_A and some unknown aspects of the synaptic geometry, which are factored inside the ∇²ϕ_A(r, t) term that cannot be calculated analytically for an arbitrary 3D problem. From τ and D_A we derive a new coefficient υ, so that

\frac{1}{\tau} = D_A \frac{1}{\upsilon}    (8)

The coefficient υ represents the effects of geometry, separated from the substance diffusion coefficient (D_A). For our problem, υ cannot be determined analytically and needs to be estimated using numerical and/or statistical methods. We incorporate υ in (7) and obtain

\Phi_A(t) = C_0 e^{-D_A t/\upsilon}    (9)


[Figure 2: (a) glutamate concentration (mol·L−1) versus time (ms); (b) open AMPA receptors versus time (ms).]

Figure 2: Examples of synaptic curves after a neurotransmitter vesicle release. (a) Variation of neurotransmitter (in this case glutamate) concentration through time inside the synaptic cleft. (b) Variation in the total number of synaptic receptors (in this case AMPA) in the open state through time.

which is the final expression of our neurotransmitter (glutamate) concentration time-series model. The problem of estimating the value of υ still remains, and for this we need to return to our synaptic simulations. Using curve-fitting methods, we can find the values of υ that most accurately model the results observed during these experiments.

2.1.2. AMPA Open Receptors Curves. All open AMPA receptors curves showed a rapid climb to a single peak, followed by a long tail slowly descending towards 0 (as shown in Figure 2(b)). The initial section of the curve (containing the rapid ascent, peak, and descent) contains the most relevant information, that is, the amplitude of the peak and the time it takes to reach it. A general exploration of the simulation data showed that these two characteristics are highly dependent on the neurotransmitter concentration and the synaptic geometry.

As we have already established, neurotransmitter concentration cannot be predicted analytically (Fick's second law cannot be solved analytically for 2 or more dimensions); therefore, open AMPA receptors curves cannot be either (since they depend on neurotransmitter concentration). Moreover, synaptic geometry variations can be extremely diverse, making the task of inferring a mathematical equation much harder.

Therefore, we adopted the exploratory analysis approach, considering a set of selected general mathematical models and testing them against the simulation data. We based this set on a previous analysis presented in [11]. Our work differs from this study in that, in [11], this problem was tackled only by means of a single approximate technique, which we use here as a baseline to compare different global optimization heuristics. We also performed a subsequent analysis of the optimization results, taking into consideration the relevant biological implications of our findings and proposing generalized synaptic curve models. From [11], however, we selected the best five curves as a starting point:

(i) 4-by-4 degree polynomial rational function (9 coefficients):

y = \frac{p_1 x^4 + p_2 x^3 + p_3 x^2 + p_4 x + p_5}{x^4 + q_1 x^3 + q_2 x^2 + q_3 x + q_4}    (10)

(ii) 8-term Fourier series (18 coefficients):

y = a_0 + \sum_{i=1}^{8} \left( a_i \cos(i w x) + b_i \sin(i w x) \right)    (11)

(iii) 8-term Gauss series (25 coefficients):

y = a_0 + \sum_{i=1}^{8} a_i e^{-((x - b_i)/c_i)^2}    (12)

(iv) 2-term exponential function (4 coefficients):

y = a e^{b x} + c e^{d x}    (13)

(v) 9th degree polynomial (10 coefficients)

119910 = 1199010+

9

sum

119894=1

119901119894119909119894 (14)

To decide which of these curves best serves our purpose, we need to test how well each of them can model the experimental data observed during the simulations. We therefore need to fit every curve to the data and see how they perform.
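For concreteness, the five candidate models of (10)-(14) can be written as ordinary functions of the sample points x and a coefficient vector. The sketch below uses one possible coefficient ordering; this layout is our convention for illustration, not the paper's code.

```python
import numpy as np

def rational_4_4(x, c):          # (10): 9 coefficients [p1..p5, q1..q4]
    p1, p2, p3, p4, p5, q1, q2, q3, q4 = c
    num = p1*x**4 + p2*x**3 + p3*x**2 + p4*x + p5
    den = x**4 + q1*x**3 + q2*x**2 + q3*x + q4
    return num / den

def fourier_8(x, c):             # (11): 18 coefficients [a0, w, a1, b1, ..., a8, b8]
    y = c[0] * np.ones_like(np.asarray(x, dtype=float))
    for i in range(1, 9):
        a_i, b_i = c[2*i], c[2*i + 1]
        y = y + a_i * np.cos(i * x * c[1]) + b_i * np.sin(i * x * c[1])
    return y

def gauss_8(x, c):               # (12): 25 coefficients [a0, a1, b1, c1, ..., a8, b8, c8]
    y = c[0] * np.ones_like(np.asarray(x, dtype=float))
    for i in range(8):
        a_i, b_i, c_i = c[1 + 3*i], c[2 + 3*i], c[3 + 3*i]
        y = y + a_i * np.exp(-((x - b_i) / c_i) ** 2)
    return y

def exp_2(x, c):                 # (13): 4 coefficients [a, b, c, d]
    a, b, cc, d = c
    return a * np.exp(b * x) + cc * np.exp(d * x)

def poly_9(x, c):                # (14): 10 coefficients [p0, p1, ..., p9]
    return sum(p * x**i for i, p in enumerate(c))

# Example: evaluate the 2-term exponential model on a small grid.
print(exp_2(np.linspace(0.0, 1.0, 5), [1.0, -1.0, -1.0, -5.0]))
```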


2.2. Algorithms Used in the Comparison. This section reviews the algorithms considered for the previously described curve-fitting problem, including the reference baseline algorithm (Nonlinear Least Squares) used in previous studies [11] and several metaheuristics (five Evolutionary Algorithms and two local searches), whose performance will be compared in Section 3.

2.2.1. Baseline Algorithm: Nonlinear Least Squares. The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB [38] or R [39]. Loosely speaking, this algorithm first approximates the model by a linear one and then refines the parameters by successive iterations. This method was successfully used in a seminal work [11] and has thus been selected as the baseline algorithm against which to test the performance of several metaheuristics, which are reviewed in the following sections.
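A rough stand-in for this baseline, for readers who want to reproduce the idea outside MATLAB or R, is SciPy's nonlinear least-squares curve fitting. The sketch below fits the 2-term exponential model (13) to a placeholder curve; it only approximates the NLLS setup described here (starting point, stopping criteria, and implementation details differ), and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp2(x, a, b, c, d):
    # 2-term exponential model (13).
    return a * np.exp(b * x) + c * np.exp(d * x)

# x_data, y_data: one sampled AMPA open-receptors curve (synthetic placeholder here).
rng = np.random.default_rng(1)
x_data = np.linspace(0.0, 5.0, 100)
y_data = exp2(x_data, 51.7, -2.7, -53.5, -30.9) + rng.normal(0.0, 0.2, x_data.size)

p0 = [40.0, -1.0, -40.0, -10.0]            # the starting point matters for NLLS
coeffs, _ = curve_fit(exp2, x_data, y_data, p0=p0, maxfev=20000)
print(coeffs)
```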

2.2.2. Selected Metaheuristics

Genetic Algorithm. Genetic Algorithms (GAs) [40] are among the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of their operators makes it possible to use GAs in both the continuous and the discrete domains. For our experiments, we have used a Real-Coded GA with the BLX-α crossover (α = 0.5) and the Gaussian mutation [41]. Different population sizes and operator probabilities were tested, as described in Section 3.

Differential Evolution. Differential Evolution (DE) [42] is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it only has three free parameters to configure) have gained DE a lot of popularity in recent years, and it is used both in its canonical form and in more advanced variants (as we will see below) in many different complex optimization scenarios. For our experiments, we have considered a DE algorithm with exponential crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), F, and CR.
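The following is a minimal, didactic sketch of a canonical DE of this kind (rand/1 mutation with exponential crossover and the three free parameters NP, F, and CR). It is not the implementation used in the experiments; in our setting the objective to minimize would be, for example, 1 minus the adjusted R² of a model on one curve.

```python
import numpy as np

def de_exp(fitness, dim, bounds=(-500.0, 500.0), NP=50, F=0.5, CR=0.9,
           max_evals=50_000, rng=np.random.default_rng(0)):
    # Canonical DE: rand/1 mutation, exponential crossover, greedy replacement.
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(NP, dim))
    fit = np.array([fitness(x) for x in pop])
    evals = NP
    while evals < max_evals:
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Exponential crossover: copy a contiguous block of mutant genes.
            trial = pop[i].copy()
            j = rng.integers(dim)
            copied = 0
            while True:
                trial[j] = mutant[j]
                j = (j + 1) % dim
                copied += 1
                if copied >= dim or rng.random() >= CR:
                    break
            f_trial = fitness(trial)
            evals += 1
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```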

Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm proposed in [43] does not consider fixed F and CR parameters but instead allows the algorithm to adjust their values with a certain probability, controlled by τ_F and τ_CR, respectively.

Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. The Generalized Opposition-Based Differential Evolution (GODE) algorithm [44] uses the Opposition-Based Learning concept, which basically means that, with some probability, the algorithm evaluates not only the solutions in the current population but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter to control the probability of applying the opposite search.

Solis and Wets Algorithm. The well-known Solis and Wets algorithm [45] has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. It has several parameters that rule the way the candidate solution is moved and how the step size is adapted. All these parameters are subject to parameter tuning, as for the rest of the algorithms.

MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined by the MTS algorithm [46]. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past on large-scale global optimization problems. In the same line as the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.

Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [47] is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages are that it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and its robustness against ill-conditioned and nonseparable problems [48]. To increase the probability of finding the global optimum, the increasing-population-size IPOP-CMA-ES algorithm was proposed in [49]. This algorithm uses a restart method with a successively increasing population size each time the stopping criterion is satisfied. It was ranked first in the "Special Session on Real-Parameter Optimization" held at the CEC 2005 Congress and has since been used on several optimization problems. For the proposed experiments, we have considered both CMA-ES and IPOP-CMA-ES with several values for the sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate convergence, has also been tested with both algorithms.
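A sketch of how an IPOP-style CMA-ES run could be launched on one curve-fitting instance is shown below. It assumes the third-party pycma package (import cma), which is not used or referenced by the paper, and its option and argument names may vary between versions; treat it as an illustration only.

```python
import cma  # third-party "pycma" package (pip install cma); an assumption of this sketch

def run_ipop_cmaes(fitness, dim, sigma0=1.0):
    # fitness: objective to minimize, e.g. 1 - adjusted R^2 of a model on one curve.
    x0 = [0.0] * dim
    opts = {"bounds": [-500, 500], "verbose": -9}   # same search interval as in Section 3.1
    result = cma.fmin(fitness, x0, sigma0, options=opts,
                      restarts=4, incpopsize=2)     # restarts with growing population (IPOP)
    return result[0], result[1]                     # best solution and its fitness
```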

2.3. Comparison Metric. To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination (R²). This is the proportion of variability in a data set that is accounted for by the statistical model, and it provides a measure of how well future outcomes are likely to be predicted by the model. R² usually takes a value between 0 and 1 (sometimes it can be less than 0), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.

R² is calculated as follows: given a reference data set X with N values x_i, each of which has an associated predicted value x'_i, the total sum of squares (TSS) and the residual sum of squares (RSS) are defined as

$$\mathrm{TSS} = \sum_i \left(x_i - \bar{x}\right)^2, \qquad \mathrm{RSS} = \sum_i \left(x_i - x'_i\right)^2$$ (15)


and R² is expressed as

$$R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}$$ (16)

The coefficient of determination, however, does not take into account regression model complexity or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects, we adopted the adjusted coefficient of determination (R̄²) [50], which takes into account the complexity of the regression model and the number of observations. R̄² is calculated as

$$\bar{R}^2 = 1 - \left( \frac{\mathrm{RSS}}{\mathrm{TSS}} \times \frac{N - 1}{N - m} \right)$$ (17)

where N is the size of the sample set and m is the number of coefficients of the regression model. As in the case of R², a value closer to 1 means a better fit. We used R̄² as the fitness metric for all our curve-fitting calculations.
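The fitness metric of (15)-(17) is straightforward to compute from a reference curve and its prediction; the sketch below is a direct transcription of those formulas (the function name is ours, not the paper's).

```python
import numpy as np

def adjusted_r2(x_ref, x_pred, m):
    # Adjusted coefficient of determination of (17) for a model with m coefficients.
    x_ref = np.asarray(x_ref, dtype=float)
    x_pred = np.asarray(x_pred, dtype=float)
    n = x_ref.size
    tss = np.sum((x_ref - x_ref.mean()) ** 2)   # total sum of squares, (15)
    rss = np.sum((x_ref - x_pred) ** 2)         # residual sum of squares, (15)
    return 1.0 - (rss / tss) * (n - 1) / (n - m)
```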

2.4. Parameter Tuning. All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices, according to the Taguchi method [51]. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring the sensitivity of the quality characteristic being investigated, in a controlled manner, to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high value of the SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall in the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as R̄². For this category, the SN ratio estimate is defined in (18), where n denotes the total number of instances and y_1, y_2, ..., y_n the target values (the adjustment of the model in this case):

$$\mathrm{SN} = -10 \log \left( \frac{1}{n} \sum_{t=1}^{n} \frac{1}{y_t^2} \right)$$ (18)

This method allows the execution of a limited number of configurations while still reporting significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations depends on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those reported by the Taguchi method as the best ones from the SN ratio point of view.
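As an illustration, the "bigger is better" SN estimate of (18) for one candidate parameter configuration could be computed as in the sketch below (not the authors' code); the input values stand for the R̄² adjustments obtained with that configuration.

```python
import numpy as np

def sn_bigger_is_better(y):
    # Taguchi SN ratio of (18), "bigger is better" category; y holds the adjusted
    # R^2 values obtained with one parameter configuration.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

print(sn_bigger_is_better([0.99, 0.95, 0.97]))   # a higher SN ratio is better
```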

2.5. Statistical Validation. To validate the results obtained and the conclusions derived from them, it is very important to use appropriate statistical tests. In this paper we conducted a thorough statistical validation, which is discussed in this section.

First, the nonparametric Friedman's test [52] was used to determine whether significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, then we continued with the comparison and used post hoc tests. In particular, we report the p values for two nonparametric post hoc tests: the Friedman [52-54] and Wilcoxon [55] tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should rather be considered in order to control the family-wise error rate. In our experimentation we considered Holm's procedure [54], which has been successfully used in the past in several studies [12, 56].

Apart from the p values reported by the statistical tests, we also report the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance on each function and report the average ranking computed across all the functions. Given the following mean performance in a benchmark of three functions for algorithms A and B, A = (0.00e+00, 1.27e+01, 3.54e-03) and B = (3.72e+01, 0.42e+00, 2.19e-07), their relative rankings would be Ranks_A = (1, 2, 2) and Ranks_B = (2, 1, 1), and thus their corresponding average rankings are R_A = 1.67 and R_B = 1.33 (this computation is reproduced in the short sketch below).

(ii) nWins: this is the number of other algorithms for which each algorithm is statistically better, minus the number of algorithms for which it is statistically worse, according to the Wilcoxon Signed Rank Test in a pair-wise comparison [57].

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.
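The overall-ranking computation used in item (i) can be reproduced in a few lines of NumPy/SciPy; this is a minimal sketch of that worked example (it is not the validation code used in the study).

```python
import numpy as np
from scipy.stats import rankdata

# Mean performance of algorithms A and B on three benchmark functions
# (the worked example from item (i); lower values are better).
perf = np.array([[0.00e+00, 1.27e+01, 3.54e-03],   # algorithm A
                 [3.72e+01, 0.42e+00, 2.19e-07]])  # algorithm B

ranks = np.apply_along_axis(rankdata, 0, perf)     # rank the algorithms on each function
print(ranks)                                       # [[1. 2. 2.] [2. 1. 1.]]
print(ranks.mean(axis=1))                          # approximately [1.67, 1.33]
```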

3. Results and Discussion

We have conducted a thorough experimentation to elucidate whether the selected metaheuristics can be successfully used in the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curves problems and, in the case of the latter, the five different curves presented in Section 2.1.2.

In the first place, a parameter tuning of the seven selected metaheuristics has been carried out, whose results are presented in Section 3.1; then, final runs with the best configuration of each algorithm are performed, whose results are presented in Sections 3.2 and 3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.


3.1. Parameter Tuning of the Selected Metaheuristics. As described in Section 2.4, to conduct our experimentation we have followed a fractional design based on orthogonal matrices, according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case), yet yields significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of curves (10 randomly chosen curves) prior to using the selected values in the overall comparison. Table 1 contains, for each algorithm, all the relevant parameters and the tested values; the final selected values are those reported as best by the Taguchi method.

Once the best configuration for each algorithm had been determined, we ran each of them on both synaptic curves problems, for all the models and curves, conducting 25 repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to the interval [-500, 500] to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem. As stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the υ coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with little issue by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with a precision of 99%. The only three algorithms not reaching that precision get trapped a few times in low-quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, in both the constrained and unconstrained versions.

Due to the practically nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point that suggests that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem. To provide more insight into the results obtained for the AMPA open receptors curves problem, we have decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as a large validation data set. The results show that there are three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) which systematically rank among the best algorithms and, in most cases, are significantly better than the baseline algorithm, NLLS.

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function. The results for the 4-by-4 degree polynomial rational function are reported in Table 5.

Table 1: Parameter tuning of the selected metaheuristics.

  Parameter values of GA
    popSize: 100, 200, 400
    pcx: 0.1, 0.5, 0.9
    pm: 0.01, 0.05, 0.1

  Parameter values of DE
    popSize: 25, 50, 100
    F: 0.1, 0.5, 0.9
    CR: 0.1, 0.5, 0.9

  Parameter values of Self-Adaptive DE
    popSize: 25, 50, 100
    τ_F: 0.05, 0.1, 0.2
    τ_CR: 0.05, 0.1, 0.2

  Parameter values of GODE
    popSize: 25, 50, 100
    F: 0.1, 0.5, 0.9
    CR: 0.1, 0.5, 0.9
    godeProb: 0.2, 0.4, 0.8

  Parameter values of Solis and Wets
    maxSuccess: 2, 5, 10
    maxFailed: 1, 3, 5
    adjustSuccess: 2, 4, 6
    adjustFailed: 0.25, 0.5, 0.75
    delta: 0.6, 1.2, 2.4

  Parameter values of MTS-LS1
    (initial) SR: 50% of the search space
    (min) SR: 1e-14
    adjustFailed: 2, 3, 5
    adjustMin: 2.5, 5, 10
    moveLeft: 0.25, 0.5, 1
    moveRight: 0.25, 0.5, 1

  Parameter values of CMA-ES
    σ: 0.01, 0.1, 1
    Active CMA: no, yes
    CMA-ES variant: CMA-ES, IPOP-CMA-ES

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the glutamate concentration problem.

  Algorithm            Mean fitness   Ranking   nWins
  Self-Adaptive DE     9.90E-01       4.16       3
  DE                   9.90E-01       4.16       3
  GA                   9.90E-01       4.16       3
  Solis and Wets       9.90E-01       4.16       3
  GODE                 9.90E-01       4.16       3
  IPOP-CMA-ES          9.90E-01       4.16       3
  MTS-LS1              9.83E-01       4.23      -4
  NLLS constrained     9.53E-01       7.91      -7
  NLLS unconstrained   9.53E-01       7.91      -7

As can be observed, the average attained precision is very high for most of the algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE), which reach a precision as high as 99.9%.


The p values reported by the statistical validation (Table 6 for unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE, according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as the results in the following sections will confirm.

3.3.2. Results with the 8-Term Fourier Series. For the 8-term Fourier series, results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function, the attained precision has slightly decreased (from 72% for the DE algorithm to 94% for the Self-Adaptive DE), but again several metaheuristics (all the population-based algorithms, which also include the GA) have outperformed the reference NLLS algorithm (in both constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions, compared to the 9 of the previous model). Also in this problem, the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms, for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend started in the previous two cases and followed in the remaining two functions. Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which again reaches a very high precision of 99.9%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others, with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm for this problem (with a precision of more than 99% and the highest ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with the other problems. In this case, the improvement is probably due to the small problem size, which allows an easier optimization component by component. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems) a strong attractor and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance (IPOP-CMA-ES, Self-Adaptive DE, and DE) plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge (with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES) to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision on this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process. Section 3.4 will discuss this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, Friedman's test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.

3.3.6. Overall Comparison. In this section we offer an overall comparison of the algorithms on the five AMPA open receptors curves problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results.

[Figure 3 plot: glutamate concentration (mol·L⁻¹), from 0.000 to 0.005, versus time (ms), from 0.0 to 1.0; continuous and dashed curves as described in the caption below.]

Figure 3: Example of curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, R̄² = 0.990.

From these data it is clear that the IPOP-CMA-ES algorithm obtains the best results, with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also relevant is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with respect to all the algorithms in both statistical tests.

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, given that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case.

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

  Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
  DE                        1.00E+00           1.00E+00
  GA                        1.00E+00           1.00E+00
  Solis and Wets            1.00E+00           1.00E+00
  GODE                      1.00E+00           1.00E+00
  IPOP-CMA-ES               1.00E+00           1.00E+00
  MTS-LS1                   6.57E-01           8.98E-03
  NLLS constrained          0.00E+00           1.47E-70
  NLLS unconstrained        0.00E+00           1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

  Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
  DE                        1.00E+00            1.00E+00
  GA                        1.00E+00            1.00E+00
  Solis and Wets            1.00E+00            1.00E+00
  GODE                      1.00E+00            1.00E+00
  IPOP-CMA-ES               1.00E+00            1.00E+00
  MTS-LS1                   1.00E+00            5.39E-02
  NLLS constrained          0.00E+00 √          1.18E-69 √
  NLLS unconstrained        0.00E+00 √          1.18E-69 √

  √ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

General-purpose, commonly used algorithms (such as the NLLS used in [11]) are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example, the initial glutamate concentration was C_0 = 4.749 × 10⁻³ mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10⁻⁶, resulting in the following equation:

$$\Phi_A(t) = 4.749 \times 10^{-3} \times e^{-0.33 \times t / (6.577 \times 10^{-6})}$$ (19)

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptors curves requires further study. The optimization results (Tables 5 to 22) show that, when using the appropriate metaheuristic, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial).


Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

  Algorithm            Mean fitness   Ranking   nWins
  IPOP-CMA-ES          9.99E-01       1.86       8
  Self-Adaptive DE     9.99E-01       2.08       5
  DE                   9.99E-01       2.09       5
  GA                   9.35E-01       4.86       2
  MTS-LS1              9.00E-01       6.04       0
  GODE                 9.18E-01       6.04      -2
  NLLS constrained     8.60E-01       7.04      -4
  Solis and Wets       4.30E-01       7.17      -6
  NLLS unconstrained   8.43E-01       7.84      -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  Self-Adaptive DE     2.14E-01           2.03E-12
  DE                   1.90E-01           3.25E-14
  GA                   0.00E+00           5.52E-84
  MTS-LS1              0.00E+00           6.31E-84
  GODE                 0.00E+00           5.40E-84
  NLLS constrained     0.00E+00           1.99E-83
  Solis and Wets       0.00E+00           1.60E-86
  NLLS unconstrained   0.00E+00           5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

  IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
  Self-Adaptive DE     3.80E-01            2.03E-12 √
  DE                   3.80E-01            6.50E-14 √
  GA                   0.00E+00 √          3.78E-83 √
  MTS-LS1              0.00E+00 √          3.78E-83 √
  GODE                 0.00E+00 √          3.78E-83 √
  NLLS constrained     0.00E+00 √          5.96E-83 √
  Solis and Wets       0.00E+00 √          1.28E-85 √
  NLLS unconstrained   0.00E+00 √          3.78E-83 √

  √ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

The five mathematical models studied are compared in Table 23, selecting for each of them the results provided by the best optimization technique. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice.

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

  Algorithm            Mean fitness   Ranking   nWins
  Self-Adaptive DE     9.44E-01       1.26       8
  IPOP-CMA-ES          9.19E-01       1.91       6
  GODE                 8.13E-01       3.46       4
  DE                   7.21E-01       4.34       2
  GA                   6.99E-01       4.79       0
  NLLS unconstrained   6.20E-01       5.46      -2
  MTS-LS1              4.16E-01       7.29      -4
  NLLS constrained     3.87E-01       7.49      -6
  Solis and Wets       0.00E+00       9.00      -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

  Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
  IPOP-CMA-ES               1.67E-04           5.89E-14
  GODE                      0.00E+00           1.58E-83
  DE                        0.00E+00           1.09E-83
  GA                        0.00E+00           8.67E-82
  NLLS unconstrained        0.00E+00           8.08E-83
  MTS-LS1                   0.00E+00           6.43E-84
  NLLS constrained          0.00E+00           6.36E-84
  Solis and Wets            0.00E+00           6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

  Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
  IPOP-CMA-ES               1.67E-04 √          5.89E-14 √
  GODE                      0.00E+00 √          6.32E-83 √
  DE                        0.00E+00 √          5.44E-83 √
  GA                        0.00E+00 √          1.73E-81 √
  NLLS unconstrained        0.00E+00 √          2.42E-82 √
  MTS-LS1                   0.00E+00 √          4.97E-83 √
  NLLS constrained          0.00E+00 √          4.97E-83 √
  Solis and Wets            0.00E+00 √          4.97E-83 √

  √ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

In addition, we must consider the nature of the biological process being modeled (AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms.


Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

  Algorithm            Mean fitness   Ranking   nWins
  IPOP-CMA-ES          9.99E-01       1.00       8
  NLLS unconstrained   9.73E-01       2.23       6
  NLLS constrained     9.31E-01       2.82       4
  GA                   7.80E-01       3.95       2
  GODE                 5.24E-01       5.00       0
  Self-Adaptive DE     3.21E-01       6.79      -3
  MTS-LS1              3.23E-01       6.85      -3
  DE                   2.97E-01       7.35      -6
  Solis and Wets       0.00E+00       9.00      -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  NLLS unconstrained   1.34E-12           1.90E-83
  NLLS constrained     0.00E+00           1.33E-83
  GA                   0.00E+00           6.22E-84
  GODE                 0.00E+00           6.30E-84
  Self-Adaptive DE     0.00E+00           6.28E-84
  MTS-LS1              0.00E+00           6.32E-84
  DE                   0.00E+00           6.29E-84
  Solis and Wets       0.00E+00           4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

  IPOP-CMA-ES versus   Friedman p value    Wilcoxon p value
  NLLS unconstrained   1.34E-12 √          4.35E-83 √
  NLLS constrained     0.00E+00 √          4.35E-83 √
  GA                   0.00E+00 √          4.35E-83 √
  GODE                 0.00E+00 √          4.35E-83 √
  Self-Adaptive DE     0.00E+00 √          4.35E-83 √
  MTS-LS1              0.00E+00 √          4.35E-83 √
  DE                   0.00E+00 √          4.35E-83 √
  Solis and Wets       0.00E+00 √          3.80E-110 √

  √ means that there are statistical differences with significance level α = 0.05.

Figure 4 shows an example of curve fit for every equation, fitted using the corresponding best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

  Algorithm            Mean fitness   Ranking   nWins
  Self-Adaptive DE     9.97E-01       1.69       8
  DE                   9.96E-01       1.86       6
  IPOP-CMA-ES          9.97E-01       2.46       4
  Solis and Wets       7.32E-01       4.84       2
  MTS-LS1              6.27E-01       5.43       0
  NLLS unconstrained   6.20E-01       5.73      -2
  NLLS constrained     3.47E-01       7.25      -4
  GODE                 2.25E-01       7.68      -6
  GA                   1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

  Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
  DE                        3.38E-01           6.11E-10
  IPOP-CMA-ES               9.00E-06           9.79E-54
  Solis and Wets            0.00E+00           6.31E-84
  MTS-LS1                   0.00E+00           6.26E-84
  NLLS unconstrained        0.00E+00           6.31E-84
  NLLS constrained          0.00E+00           6.32E-84
  GODE                      0.00E+00           6.04E-84
  GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

  Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
  DE                        3.38E-01            6.11E-10 √
  IPOP-CMA-ES               1.80E-05 √          1.96E-53 √
  Solis and Wets            0.00E+00 √          4.23E-83 √
  MTS-LS1                   0.00E+00 √          4.23E-83 √
  NLLS unconstrained        0.00E+00 √          4.23E-83 √
  NLLS constrained          0.00E+00 √          4.23E-83 √
  GODE                      0.00E+00 √          4.23E-83 √
  GA                        0.00E+00 √          4.21E-83 √

  √ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(2) The curve fit calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This seems to be small enough that the overall fitness is not affected, but it implies that the adjusted curves apparently model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data.


[Figure 4 panels, open AMPA receptors versus time (0-5 ms): (a) 8-term Gauss series (R̄² = 0.999), with a detail of the curve peak; (b) 4-by-4 degree polynomial rational (R̄² = 0.999); (c) 2-term exponential function (R̄² = 0.997); (d) 9th degree polynomial (R̄² = 0.836); (e) 8-term Fourier series (R̄² = 0.976); (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.


Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

  Algorithm            Mean fitness   Ranking   nWins
  NLLS unconstrained   8.43E-01       2.44       8
  Self-Adaptive DE     8.42E-01       2.50       5
  DE                   8.42E-01       2.50       5
  NLLS constrained     8.02E-01       2.72       2
  IPOP-CMA-ES          8.26E-01       4.95       0
  GA                   0.00E+00       7.48      -5
  Solis and Wets       0.00E+00       7.48      -5
  MTS-LS1              0.00E+00       7.48      -5
  GODE                 0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

  NLLS unconstrained versus   Friedman p value   Wilcoxon p value
  Self-Adaptive DE            7.42E-01           6.52E-05
  DE                          7.42E-01           6.52E-05
  NLLS constrained            1.08E-01           8.91E-06
  IPOP-CMA-ES                 0.00E+00           5.34E-84
  GA                          0.00E+00           5.92E-84
  Solis and Wets              0.00E+00           5.92E-84
  MTS-LS1                     0.00E+00           5.92E-84
  GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

  NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
  Self-Adaptive DE            1.00E+00            1.30E-04 √
  DE                          1.00E+00            1.30E-04 √
  NLLS constrained            3.25E-01            2.67E-05 √
  IPOP-CMA-ES                 0.00E+00 √          4.27E-83 √
  GA                          0.00E+00 √          4.27E-83 √
  Solis and Wets              0.00E+00 √          4.27E-83 √
  MTS-LS1                     0.00E+00 √          4.27E-83 √
  GODE                        0.00E+00 √          4.27E-83 √

  √ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

The numerical results are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulated ones (Figure 4(f)). This is due to the fact that, when evaluating the fitness value, not all the data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10,000-point curves per simulation; testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve fitting, reducing the number of points per curve to 100 (a minimal downsampling sketch is given after this list). An identical sampling procedure was performed in [11] for the same reasons.

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

  Algorithm            Mean fitness   Ranking   nWins
  IPOP-CMA-ES          9.48E-01       2.44       8
  Self-Adaptive DE     8.21E-01       2.86       6
  DE                   7.71E-01       3.62       4
  NLLS unconstrained   7.80E-01       4.74       2
  NLLS constrained     6.65E-01       5.46       0
  GA                   5.16E-01       5.83      -2
  GODE                 4.96E-01       5.93      -4
  MTS-LS1              4.53E-01       6.61      -6
  Solis and Wets       2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  Self-Adaptive DE     4.15E-08           5.01E-05
  DE                   0.00E+00           2.16E-86
  NLLS unconstrained   0.00E+00           4.81E-306
  NLLS constrained     0.00E+00           0.00E+00
  GA                   0.00E+00           0.00E+00
  GODE                 0.00E+00           0.00E+00
  MTS-LS1              0.00E+00           0.00E+00
  Solis and Wets       0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

  IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
  Self-Adaptive DE     4.15E-08 √          5.01E-05 √
  DE                   0.00E+00 √          4.31E-86 √
  NLLS unconstrained   0.00E+00 √          1.44E-305 √
  NLLS constrained     0.00E+00 √          0.00E+00 √
  GA                   0.00E+00 √          0.00E+00 √
  GODE                 0.00E+00 √          0.00E+00 √
  MTS-LS1              0.00E+00 √          0.00E+00 √
  Solis and Wets       0.00E+00 √          0.00E+00 √

  √ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.
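The sampling step mentioned in item (4) can be sketched as follows. The even spacing used here is an assumption, since the text does not specify the exact sampling scheme, and the curve data are synthetic placeholders.

```python
import numpy as np

def downsample_curve(t, y, n_points=100):
    # Reduce a high-resolution simulated curve (e.g., 10,000 samples) to
    # n_points samples before curve fitting.
    idx = np.linspace(0, len(t) - 1, n_points).round().astype(int)
    return t[idx], y[idx]

# Hypothetical 10,000-point curve sampled down to the 100 points used for fitting.
t_full = np.linspace(0.0, 5.0, 10_000)
y_full = 50.0 * (np.exp(-2.0 * t_full) - np.exp(-20.0 * t_full))
t_fit, y_fit = downsample_curve(t_full, y_full)
print(t_fit.size, y_fit.size)
```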

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve.


Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

  Model                                        Mean fitness   Ranking   nWins
  8-term Gauss series                          9.99E-01       1.63       4
  4-by-4 degree polynomial rational function   9.99E-01       1.67       2
  2-term exponential function                  9.97E-01       2.73       0
  8-term Fourier series                        9.44E-01       3.99      -2
  9th degree polynomial                        8.43E-01       4.97      -4

Among the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

$$\mathrm{AMPA}_{\mathrm{open}}(t) = 51.749 \times e^{-2.716 \times t} - 53.540 \times e^{-30.885 \times t}$$ (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
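As a small usage example, equation (20) can be evaluated to recover the two features discussed earlier (peak amplitude and time to peak) for this particular configuration; the coefficients below are those of (20) as reconstructed above, so the exact numbers should be read with that caveat in mind.

```python
import numpy as np

def ampa_open(t):
    # Equation (20), with the coefficients as reconstructed above.
    return 51.749 * np.exp(-2.716 * t) - 53.540 * np.exp(-30.885 * t)

t = np.linspace(0.0, 5.0, 10_000)                  # time in ms
y = ampa_open(t)
i = int(np.argmax(y))
print(f"peak of about {y[i]:.1f} open receptors at t = {t[i]:.3f} ms")
```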

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit for each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted an extensive experimentation that yields significant results. First, it has been proved that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; thus, this comparison can be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit very accurately a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can find application in future neuroscience research. Firstly, as explained, we have identified the best possible candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17-54, 2007.

[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277-1340, 2008.

[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359-369, 2011.

[4] J. Boucher, H. Kroger, and A. Sík, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49-65, 2010.

[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.

[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87-127, CRC Press, 2001.

[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126-3149, 2008.

[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305-309, 2005.

[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137-151, 2004.

[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.

[11] J. Montes, E. Gomez, A. Merchán-Pérez, J. DeFelipe, and J.-M. Peña, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1-14, 2013.

[12] A. LaTorre, S. Muelas, and J. M. Peña, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.

[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140-142, 2005.

[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265-5271, 2010.

[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1-8, July 2010.

[16] A. LaTorre, S. Muelas, J.-M. Peña, R. Santana, A. Merchán-Pérez, and J.-R. Rodríguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687-1694, IEEE, New Orleans, La, USA, June 2011.

[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Peña, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747-1754, Dublin, Ireland, July 2011.

[18] S. Muelas, J. M. Peña, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.

[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing - Asia Simulation Conference (ICSC '08), pp. 820-826, IEEE, Beijing, China, October 2008.

[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034-1040, June 2011.

[21] A. LaTorre, S. Muelas, and J. M. Peña, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027-1033, IEEE, New Orleans, La, USA, June 2011.

[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129-1138, 2010.

[23] S. Muelas, A. LaTorre, and J.-M. Peña, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516-5531, 2013.

[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122-133, 2014.

[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154-169, 2015.

[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56-76, 2012.

[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809-2812, Sichuan, China, May 2012.

[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491-1504, 2013.


[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.
[30] Centro de Supercomputación y Visualización de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.
[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.
[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.
[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.
[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.
[35] A. Momiyama, R. A. Silver, M. Hausser, et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre—Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.
[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.
[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.
[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.
[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.
[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.
[42] R. Storn and K. V. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.
[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.
[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.
[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.
[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.
[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.
[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.
[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.
[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.
[54] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] A. LaTorre, S. Muelas, and J.-M. Peña, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.
[57] S. Muelas, J. M. Peña, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.


[Figure 2: two panels plotting (a) glutamate concentration (mol·L^-1) and (b) the number of open AMPA receptors against time (ms).]

Figure 2: Examples of synaptic curves after a neurotransmitter vesicle release. (a) Variation of neurotransmitter (in this case, glutamate) concentration through time inside the synaptic cleft. (b) Variation in the total number of synaptic receptors (in this case, AMPA) in the open state through time.

which is the final expression of our neurotransmitter (glutamate) concentration time-series model. The problem of estimating the value of υ still remains, and for this we need to return to our synaptic simulations. Using curve-fitting methods, we can find the values of υ that most accurately model the results observed during these experiments.

2.1.2. AMPA Open Receptors Curves. All open AMPA receptors curves showed a rapid climb to a single peak, followed by a long tail slowly descending towards 0 (as shown in Figure 2(b)). The initial section of the curve (containing the rapid ascent, peak, and descent) contains the most relevant information, that is, the amplitude of the peak and the time it takes to reach it. A general exploration of the simulation data showed that these two characteristics are highly dependent on the neurotransmitter concentration and the synaptic geometry.

As we have already established, the neurotransmitter concentration cannot be predicted analytically (Fick's second law cannot be solved analytically for two or more dimensions); therefore, the open AMPA receptors curves cannot be predicted analytically either (since they depend on the neurotransmitter concentration). Moreover, synaptic geometry variations can be extremely diverse, making the task of inferring a mathematical equation much harder.

Therefore, we adopted the exploratory analysis approach, considering a set of selected general mathematical models and testing them against the simulation data. We based this set on a previous analysis presented in [11]. Our work differs from that study in that, in [11], this problem was tackled only by means of a single approximate technique, which we use here as a baseline to compare different global optimization heuristics. We also performed a subsequent analysis of the optimization results, taking into consideration the relevant biological implications of our findings and proposing generalized synaptic curve models. From [11], we selected the best five curves as a starting point:

(i) 4-by-4 degree polynomial rational function (9 coefficients):

    y = \frac{p_1 x^4 + p_2 x^3 + p_3 x^2 + p_4 x + p_5}{x^4 + q_1 x^3 + q_2 x^2 + q_3 x + q_4}    (10)

(ii) 8-term Fourier series (18 coefficients):

    y = a_0 + \sum_{i=1}^{8} \left( a_i \cos(i x w) + b_i \sin(i x w) \right)    (11)

(iii) 8-term Gauss series (25 coefficients):

    y = a_0 + \sum_{i=1}^{8} a_i e^{-((x - b_i)/c_i)^2}    (12)

(iv) 2-term exponential function (4 coefficients):

    y = a e^{b x} + c e^{d x}    (13)

(v) 9th degree polynomial (10 coefficients):

    y = p_0 + \sum_{i=1}^{9} p_i x^i    (14)

To decide which of these curves best serves our purpose, we need to test how well each of them can model the experimental data observed during the simulations. We therefore need to fit every curve to the data and see how they perform.
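To make the five candidate models concrete, the following sketch encodes them as plain Python callables over a coefficient vector. The function names and coefficient layouts are our own illustrative choices (not taken from the paper), chosen only so that any of the optimizers discussed later could evaluate them.

```python
import numpy as np

# Candidate curve models, each mapping (x, coefficient vector c) -> y.
# Coefficient layouts are illustrative assumptions, not the authors' code.

def rational_4x4(x, c):          # 9 coefficients: p1..p5, q1..q4 (equation (10))
    p, q = c[:5], c[5:]
    num = np.polyval(p, x)                            # p1*x^4 + ... + p5
    den = np.polyval(np.concatenate(([1.0], q)), x)   # x^4 + q1*x^3 + ... + q4
    return num / den

def fourier_8(x, c):             # 18 coefficients: a0, w, a1..a8, b1..b8 (equation (11))
    a0, w = c[0], c[1]
    a, b = c[2:10], c[10:18]
    i = np.arange(1, 9)[:, None]
    return a0 + np.sum(a[:, None] * np.cos(i * w * x) + b[:, None] * np.sin(i * w * x), axis=0)

def gauss_8(x, c):               # 25 coefficients: a0, a1..a8, b1..b8, c1..c8 (equation (12))
    a0 = c[0]
    a, b, s = c[1:9], c[9:17], c[17:25]
    return a0 + np.sum(a[:, None] * np.exp(-((x - b[:, None]) / s[:, None]) ** 2), axis=0)

def exp_2(x, c):                 # 4 coefficients: a, b, c, d (equation (13))
    return c[0] * np.exp(c[1] * x) + c[2] * np.exp(c[3] * x)

def poly_9(x, c):                # 10 coefficients: p0..p9 (equation (14))
    return np.polyval(c[::-1], x)
```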


2.2. Algorithms Used in the Comparison. This section reviews the algorithms considered for the previously described curve-fitting problems, including the reference baseline algorithm (Nonlinear Least Squares), used in previous studies [11], and several metaheuristics (five Evolutionary Algorithms and two local searches) whose performance will be compared in Section 3.

2.2.1. Baseline Algorithm: Nonlinear Least Squares. The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB [38] or R [39]. Loosely speaking, this algorithm first approximates the model by a linear one and then refines the parameters by successive iterations. This method was successfully used in a seminal work [11] and has thus been selected as the baseline algorithm against which to test the performance of several metaheuristics, which are reviewed in the following sections.
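As a rough illustration of this kind of baseline (not the authors' original MATLAB/R setup), the sketch below fits the 2-term exponential model with SciPy's nonlinear least-squares routine. The synthetic trace, initial guess, and bounds are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp2(t, a, b, c, d):
    # 2-term exponential model of equation (13): y = a*e^(b*t) + c*e^(d*t)
    return a * np.exp(b * t) + c * np.exp(d * t)

# Synthetic stand-in for one sampled AMPA open receptors trace (100 points).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 100)
y = exp2(t, 51.7, -2.7, -53.5, -30.9) + rng.normal(0.0, 0.2, t.size)

# Unconstrained NLLS (Levenberg-Marquardt) and bound-constrained NLLS,
# mirroring the two baseline configurations compared in this study.
p0 = [40.0, -1.0, -40.0, -20.0]
p_unconstrained, _ = curve_fit(exp2, t, y, p0=p0)
p_constrained, _ = curve_fit(exp2, t, y, p0=p0, bounds=(-500.0, 500.0))
print(p_unconstrained, p_constrained)
```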

2.2.2. Selected Metaheuristics

Genetic Algorithm. Genetic Algorithms (GAs) [40] are among the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of their operators makes it possible to use GAs both in continuous and in discrete domains. For our experiments, we have used a Real-Coded GA with the BLX-α crossover (α = 0.5) and Gaussian mutation [41]. Different population sizes and operator probabilities were tested, as described in Section 3.

Differential Evolution. Differential Evolution (DE) [42] is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it has only three free parameters to configure) have gained DE a lot of popularity in recent years, and it is being used, both in its canonical form and in more advanced variants (as we will see below), in many complex optimization scenarios. For our experiments, we have considered a DE algorithm with exponential crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), F, and CR.
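The following minimal DE/rand/1 loop with exponential crossover is included only to make the three free parameters (NP, F, CR) concrete; it is a generic textbook sketch, not the implementation used in the paper, and the bounds and evaluation budget are placeholders.

```python
import numpy as np

def de_rand1_exp(fitness, dim, bounds=(-500.0, 500.0), NP=50, F=0.5, CR=0.9,
                 max_evals=50_000, seed=0):
    """Maximize `fitness` with a canonical DE/rand/1 + exponential crossover loop."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(NP, dim))
    fit = np.array([fitness(x) for x in pop])
    evals = NP
    while evals < max_evals:
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # Exponential crossover: copy a contiguous (wrapping) block of genes.
            trial, j, L = pop[i].copy(), rng.integers(dim), 0
            while True:
                trial[j] = mutant[j]
                j, L = (j + 1) % dim, L + 1
                if L >= dim or rng.random() >= CR:
                    break
            trial = np.clip(trial, lo, hi)
            f_trial = fitness(trial)
            evals += 1
            if f_trial >= fit[i]:      # keep the trial if it is at least as good
                pop[i], fit[i] = trial, f_trial
    best = np.argmax(fit)
    return pop[best], fit[best]
```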

Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm proposed in [43] does not use fixed F and CR parameters but instead allows the algorithm to adjust their values with certain probabilities, controlled by τ_F and τ_CR, respectively.
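A compact sketch of the parameter self-adaptation rule described in [43] (often referred to as jDE) is given below; the constants F_l = 0.1 and F_u = 0.9 are the values commonly used with this scheme and are an assumption here, not taken from the paper.

```python
import numpy as np

def jde_self_adapt(F_i, CR_i, rng, tau_F=0.1, tau_CR=0.1, F_l=0.1, F_u=0.9):
    # With probability tau_F, resample this individual's F uniformly in
    # [F_l, F_l + F_u]; with probability tau_CR, resample its CR in [0, 1].
    if rng.random() < tau_F:
        F_i = F_l + rng.random() * F_u
    if rng.random() < tau_CR:
        CR_i = rng.random()
    return F_i, CR_i

# Example: rng = np.random.default_rng(0); F, CR = jde_self_adapt(0.5, 0.9, rng)
```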

Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. The Generalized Opposition-Based Differential Evolution (GODE) algorithm [44] uses the Opposition-Based Learning concept, which basically means that, with some probability, the algorithm evaluates not only the solutions in the current population but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter that controls the probability of applying the opposite search.
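For illustration, a generalized opposite population can be computed as below, reflecting each candidate inside the population's current per-dimension interval; this is a textbook formulation of generalized opposition-based learning, not code taken from [44].

```python
import numpy as np

def generalized_opposite(pop, rng):
    # Reflect every individual inside the population's current per-dimension
    # interval [min_j, max_j]: x'_j = k * (min_j + max_j) - x_j, with k ~ U(0, 1).
    # (Values falling outside the search bounds would be reset in a full implementation.)
    lo, hi = pop.min(axis=0), pop.max(axis=0)
    k = rng.random()
    return k * (lo + hi) - pop
```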

Solis and Wets Algorithm. The well-known Solis and Wets algorithm [45] has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. Several parameters govern the way the candidate solution is moved and how the step size is adapted; all of them are subject to parameter tuning, as for the rest of the algorithms.

MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined in the MTS algorithm [46]. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past on large-scale global optimization problems. In the same line as the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.

Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [47] is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages, it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and it is robust against ill-conditioned and nonseparable problems [48]. To increase the probability of finding the global optimum, the increasing-population-size IPOP-CMA-ES algorithm was proposed in [49]. This algorithm uses a restart method with a successively increasing population size each time the stopping criterion is satisfied. It was ranked first in the "Special Session on Real-Parameter Optimization" held at the CEC 2005 Congress and has since been applied to several optimization problems. For the proposed experiments, we have considered both CMA-ES and IPOP-CMA-ES with several values for their sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate convergence, has also been tested with both algorithms.
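A minimal way to run an IPOP-style restart strategy is sketched below with the third-party pycma package; the objective, bounds, initial sigma, and restart count are placeholders chosen for the example and are not the configuration used in the paper.

```python
import numpy as np
import cma  # third-party package: pip install cma

def objective(x):
    # Placeholder objective standing in for "negative adjusted R^2 of a model
    # parameterized by x" (cma minimizes, so a maximization target is negated).
    return float(np.sum((x - 1.0) ** 2))

x0 = np.zeros(4)   # e.g., an initial guess for the 4 coefficients of equation (13)
res = cma.fmin(objective, x0, 1.0,
               options={'bounds': [-500, 500], 'verbose': -9},
               restarts=5, incpopsize=2)   # IPOP: population doubles at each restart
print(res[0])      # best solution found
```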

2.3. Comparison Metric. To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination (R^2). This is the proportion of variability in a data set that is accounted for by the statistical model, and it provides a measure of how well future outcomes are likely to be predicted by the model. R^2 usually takes a value between 0 and 1 (although it can sometimes be less than 0), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.

R^2 is calculated as follows: given a reference data set X with N values x_i, each of which has an associated predicted value x'_i, the total sum of squares (TSS) and the residual sum of squares (RSS) are defined as

    TSS = \sum_i (x_i - \bar{x})^2,
    RSS = \sum_i (x_i - x'_i)^2,    (15)


and R^2 is expressed as

    R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}.    (16)

The coefficient of determination, however, does not take into account the regression model complexity or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects, we adopted the adjusted coefficient of determination (\bar{R}^2) [50], which takes into account the complexity of the regression model and the number of observations. \bar{R}^2 is calculated as follows:

    \bar{R}^2 = 1 - \left( \frac{\mathrm{RSS}}{\mathrm{TSS}} \times \frac{N - 1}{N - m} \right),    (17)

where N is the size of the sample set and m is the number of coefficients of the regression model. As in the case of R^2, a value closer to 1 means a better fit. We used \bar{R}^2 as the fitness metric for all our curve-fitting calculations.
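A direct transcription of equations (15)–(17) into code could look like the following (the function name is ours):

```python
import numpy as np

def adjusted_r2(y_ref, y_pred, m):
    """Adjusted coefficient of determination, equations (15)-(17).

    y_ref  : reference values obtained from the simulation (N samples)
    y_pred : values predicted by the fitted curve model
    m      : number of coefficients of the curve model
    """
    y_ref, y_pred = np.asarray(y_ref), np.asarray(y_pred)
    N = y_ref.size
    tss = np.sum((y_ref - y_ref.mean()) ** 2)   # total sum of squares
    rss = np.sum((y_ref - y_pred) ** 2)         # residual sum of squares
    return 1.0 - (rss / tss) * (N - 1) / (N - m)
```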

2.4. Parameter Tuning. All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices, according to the Taguchi method [51]. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring, in a controlled manner, the sensitivity of the quality characteristic under investigation to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall in the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as \bar{R}^2. For this category, the SN ratio estimate is defined in (18), where n denotes the total number of instances and y_1, y_2, ..., y_n the target values (the adjustment of the model, in this case):

    SN = -10 \log_{10} \left( \frac{1}{n} \sum_{t=1}^{n} \frac{1}{y_t^2} \right).    (18)
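As a small worked example (our own, using the standard larger-is-better form of the Taguchi SN ratio), the ratio for a set of adjusted-R^2 values can be computed as:

```python
import numpy as np

def sn_ratio_bigger_is_better(y):
    # Equation (18): SN = -10 * log10( (1/n) * sum(1 / y_t^2) )
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Example: adjusted R^2 values obtained by one parameter configuration.
print(sn_ratio_bigger_is_better([0.99, 0.97, 0.95]))  # about -0.27; higher is better
```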

This method allows the execution of a limited number of configurations while still reporting significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations depends on the number of parameters to be tuned). In Section 3.1, we report the tested values for each algorithm and highlight those reported by the Taguchi method as the best ones from the SN ratio point of view.

2.5. Statistical Validation. To validate the results obtained and the conclusions derived from them, it is very important to use the appropriate statistical tests. In this section we describe the thorough statistical validation conducted in this paper.

First, the nonparametric Friedman test [52] was used to determine whether significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, we continued the comparison with post hoc tests. In particular, we report the p values for two nonparametric post hoc tests: the Friedman [52–54] and Wilcoxon [55] tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should be considered in order to control the family-wise error rate. In our experimentation, we used Holm's procedure [54], which has been successfully applied in several previous studies [12, 56].

Apart from the p values reported by the statistical tests, we also report the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance on each function and report the average ranking computed across all the functions. For example, given the following mean performance on a benchmark of three functions for algorithms A and B, A = (0.00e+00, 1.27e+01, 3.54e-03) and B = (3.72e+01, 0.42e+00, 2.19e-07), their relative rankings would be Ranks_A = (1, 2, 2) and Ranks_B = (2, 1, 1), and thus their corresponding average rankings are R_A = 1.67 and R_B = 1.33.

(ii) nWins: the number of other algorithms for which a given algorithm is statistically better, minus the number of algorithms for which it is statistically worse, according to the Wilcoxon Signed Rank Test in a pairwise comparison [57].

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.
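The validation pipeline described above can be approximated with standard SciPy routines, as in the sketch below; the array shapes, the synthetic data, and the manual Holm correction are our own illustrative choices, not the exact tooling used by the authors.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, rankdata

# results[a, c]: adjusted R^2 of algorithm a on curve c (rows = algorithms).
results = np.random.default_rng(1).random((4, 25))

# Omnibus Friedman test comparing all algorithms over the same set of curves.
_, p_friedman = friedmanchisquare(*results)

# Average Friedman-style ranking per algorithm (rank 1 = highest fitness).
avg_rank = rankdata(-results, axis=0).mean(axis=1)

# Pairwise Wilcoxon signed-rank tests: control (row 0) vs. every other algorithm.
raw_p = np.array([wilcoxon(results[0], results[i]).pvalue
                  for i in range(1, results.shape[0])])

# Holm's step-down correction of the pairwise p values.
m = len(raw_p)
order = np.argsort(raw_p)
adjusted = np.minimum(np.maximum.accumulate(raw_p[order] * (m - np.arange(m))), 1.0)
holm_p = np.empty(m)
holm_p[order] = adjusted

print(p_friedman, avg_rank, holm_p)
```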

3. Results and Discussion

We have conducted a thorough experimentation to elucidate whether the selected metaheuristics can be successfully used in the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curves problems and, in the case of the latter, the five different curves presented in Section 2.1.2.

In the first place, a parameter tuning of the seven selected metaheuristics was carried out, whose results are presented in Section 3.1. Then, final runs with the best configuration of each algorithm were performed, whose results are presented in Sections 3.2 and 3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.


3.1. Parameter Tuning of the Selected Metaheuristics. As described in Section 2.4, to conduct our experimentation we followed a fractional design based on orthogonal matrices, according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case) while still yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of curves (10 randomly chosen curves) prior to using the selected values in the overall comparison. Table 1 contains, for each algorithm, all the relevant parameters, highlighting the final values selected among all the tested ones according to the measures reported by the Taguchi method.

Once the best configuration for each algorithm had been determined, we ran each of them on both synaptic curves problems, for all the models and curves, conducting 25 repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to the interval [-500, 500] to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem. As stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the υ coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with little trouble by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with a precision of 99%. The only three algorithms not reaching that precision get trapped a few times in low-quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, in both its constrained and unconstrained versions.

Due to the almost nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point, suggesting that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem. To provide more insight into the results obtained for the AMPA open receptors curves problem, we decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as a large validation data set. The results show that three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) systematically rank among the best algorithms and, in most cases, are significantly better than the baseline algorithm (NLLS).

Table 1: Parameter tuning of the selected metaheuristics (values tested for each parameter).

  GA:               popSize = {100, 200, 400}; pcx = {0.1, 0.5, 0.9}; pm = {0.01, 0.05, 0.1}
  DE:               popSize = {25, 50, 100}; F = {0.1, 0.5, 0.9}; CR = {0.1, 0.5, 0.9}
  Self-Adaptive DE: popSize = {25, 50, 100}; τ_F = {0.05, 0.1, 0.2}; τ_CR = {0.05, 0.1, 0.2}
  GODE:             popSize = {25, 50, 100}; F = {0.1, 0.5, 0.9}; CR = {0.1, 0.5, 0.9}; godeProb = {0.2, 0.4, 0.8}
  Solis and Wets:   maxSuccess = {2, 5, 10}; maxFailed = {1, 3, 5}; adjustSuccess = {2, 4, 6}; adjustFailed = {0.25, 0.5, 0.75}; delta = {0.6, 1.2, 2.4}
  MTS-LS1:          (initial) SR = 50% of the search space; (min) SR = 1e-14; adjustFailed = {2, 3, 5}; adjustMin = {2.5, 5, 10}; moveLeft = {0.25, 0.5, 1}; moveRight = {0.25, 0.5, 1}
  CMA-ES:           σ = {0.01, 0.1, 1}; active CMA = {no, yes}; variant = {CMA-ES, IPOP-CMA-ES}

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms on the glutamate concentration problem.

  Algorithm            Mean fitness   Ranking   nWins
  Self-Adaptive DE     9.90E-01       4.16       3
  DE                   9.90E-01       4.16       3
  GA                   9.90E-01       4.16       3
  Solis and Wets       9.90E-01       4.16       3
  GODE                 9.90E-01       4.16       3
  IPOP-CMA-ES          9.90E-01       4.16       3
  MTS-LS1              9.83E-01       4.23      -4
  NLLS constrained     9.53E-01       7.91      -7
  NLLS unconstrained   9.53E-01       7.91      -7

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function. The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE), which reach a precision as high as 99.9%. The p values reported by the statistical validation (Table 6 for the unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as the results will confirm in the following sections.

3.3.2. Results with the 8-Term Fourier Series. For the 8-term Fourier series, the results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function, the attained precision has decreased slightly (ranging from 72% for the DE algorithm to 94% for the Self-Adaptive DE), but again several metaheuristics (all the population-based algorithms, which also include the GA) have outperformed the reference NLLS algorithm (in both its constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions, compared to the 9 of the previous model). Also, in this problem the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend started in the previous two cases and followed in the remaining two functions. Table 11 shows the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which once more reaches a very high precision of 99%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm on this problem (with a precision of more than 99% and the highest ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improve their performance compared with the other problems. In this case, the improvement is probably due to the small problem size, which allows an easier optimization component by component. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems) a strong attractor, and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance (IPOP-CMA-ES, Self-Adaptive DE, and DE), plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge, with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES, to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision on this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process. Section 3.4 discusses this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, the Friedman test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.

3.3.6. Overall Comparison. In this section we offer an overall comparison of the algorithms for the five AMPA open receptors curves problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results. From these data, it is clear that the IPOP-CMA-ES algorithm obtains the best results, with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also relevant is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with respect to all the other algorithms for both statistical tests.

[Figure 3: plot of glutamate concentration (mol·L^-1) versus time (ms) for one example synaptic configuration.]

Figure 3: Example of curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, adjusted R^2 = 0.990.

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

  Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
  DE                        1.00E+00           1.00E+00
  GA                        1.00E+00           1.00E+00
  Solis and Wets            1.00E+00           1.00E+00
  GODE                      1.00E+00           1.00E+00
  IPOP-CMA-ES               1.00E+00           1.00E+00
  MTS-LS1                   6.57E-01           8.98E-03
  NLLS constrained          0.00E+00           1.47E-70
  NLLS unconstrained        0.00E+00           1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

  Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
  DE                        1.00E+00            1.00E+00
  GA                        1.00E+00            1.00E+00
  Solis and Wets            1.00E+00            1.00E+00
  GODE                      1.00E+00            1.00E+00
  IPOP-CMA-ES               1.00E+00            1.00E+00
  MTS-LS1                   1.00E+00            5.39E-02
  NLLS constrained          0.00E+00 (√)        1.18E-69 (√)
  NLLS unconstrained        0.00E+00 (√)        1.18E-69 (√)

  √: statistical differences at significance level α = 0.05. *: p values adjusted according to Holm's method.

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, given that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case, since general-purpose, commonly used algorithms (such as the NLLS used in [11]) are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example, the glutamate initial concentration was C_0 = 4.749 × 10^-3 mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10^-6, resulting in the following equation:

    Φ_A(t) = 4.749 \times 10^{-3} \times e^{-0.33 t / (6.577 \times 10^{-6})}.    (19)

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.

Table 5: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

  Algorithm            Mean fitness   Ranking   nWins
  IPOP-CMA-ES          9.99E-01       1.86       8
  Self-Adaptive DE     9.99E-01       2.08       5
  DE                   9.99E-01       2.09       5
  GA                   9.35E-01       4.86       2
  MTS-LS1              9.00E-01       6.04       0
  GODE                 9.18E-01       6.04      -2
  NLLS constrained     8.60E-01       7.04      -4
  Solis and Wets       4.30E-01       7.17      -6
  NLLS unconstrained   8.43E-01       7.84      -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  Self-Adaptive DE     2.14E-01           2.03E-12
  DE                   1.90E-01           3.25E-14
  GA                   0.00E+00           5.52E-84
  MTS-LS1              0.00E+00           6.31E-84
  GODE                 0.00E+00           5.40E-84
  NLLS constrained     0.00E+00           1.99E-83
  Solis and Wets       0.00E+00           1.60E-86
  NLLS unconstrained   0.00E+00           5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

  IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
  Self-Adaptive DE     3.80E-01            2.03E-12 (√)
  DE                   3.80E-01            6.50E-14 (√)
  GA                   0.00E+00 (√)        3.78E-83 (√)
  MTS-LS1              0.00E+00 (√)        3.78E-83 (√)
  GODE                 0.00E+00 (√)        3.78E-83 (√)
  NLLS constrained     0.00E+00 (√)        5.96E-83 (√)
  Solis and Wets       0.00E+00 (√)        1.28E-85 (√)
  NLLS unconstrained   0.00E+00 (√)        3.78E-83 (√)

  √: statistical differences at significance level α = 0.05. *: p values adjusted according to Holm's method.

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptor curves requires further study. Optimization results (Tables 5 to 22) show that, when using the appropriate metaheuristic, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial). The five mathematical models studied are compared in Table 23, selecting for each the results provided by the best optimization technique. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms on the 8-term Fourier series.

  Algorithm            Mean fitness   Ranking   nWins
  Self-Adaptive DE     9.44E-01       1.26       8
  IPOP-CMA-ES          9.19E-01       1.91       6
  GODE                 8.13E-01       3.46       4
  DE                   7.21E-01       4.34       2
  GA                   6.99E-01       4.79       0
  NLLS unconstrained   6.20E-01       5.46      -2
  MTS-LS1              4.16E-01       7.29      -4
  NLLS constrained     3.87E-01       7.49      -6
  Solis and Wets       0.00E+00       9.00      -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

  Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
  IPOP-CMA-ES               1.67E-04           5.89E-14
  GODE                      0.00E+00           1.58E-83
  DE                        0.00E+00           1.09E-83
  GA                        0.00E+00           8.67E-82
  NLLS unconstrained        0.00E+00           8.08E-83
  MTS-LS1                   0.00E+00           6.43E-84
  NLLS constrained          0.00E+00           6.36E-84
  Solis and Wets            0.00E+00           6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

  Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
  IPOP-CMA-ES               1.67E-04 (√)        5.89E-14 (√)
  GODE                      0.00E+00 (√)        6.32E-83 (√)
  DE                        0.00E+00 (√)        5.44E-83 (√)
  GA                        0.00E+00 (√)        1.73E-81 (√)
  NLLS unconstrained        0.00E+00 (√)        2.42E-82 (√)
  MTS-LS1                   0.00E+00 (√)        4.97E-83 (√)
  NLLS constrained          0.00E+00 (√)        4.97E-83 (√)
  Solis and Wets            0.00E+00 (√)        4.97E-83 (√)

  √: statistical differences at significance level α = 0.05. *: p values adjusted according to Holm's method.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice. In addition, we consider the nature of the biological process modeled (AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion, we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

Table 11: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms on the 8-term Gauss series.

  Algorithm            Mean fitness   Ranking   nWins
  IPOP-CMA-ES          9.99E-01       1.00       8
  NLLS unconstrained   9.73E-01       2.23       6
  NLLS constrained     9.31E-01       2.82       4
  GA                   7.80E-01       3.95       2
  GODE                 5.24E-01       5.00       0
  Self-Adaptive DE     3.21E-01       6.79      -3
  MTS-LS1              3.23E-01       6.85      -3
  DE                   2.97E-01       7.35      -6
  Solis and Wets       0.00E+00       9.00      -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  NLLS unconstrained   1.34E-12           1.90E-83
  NLLS constrained     0.00E+00           1.33E-83
  GA                   0.00E+00           6.22E-84
  GODE                 0.00E+00           6.30E-84
  Self-Adaptive DE     0.00E+00           6.28E-84
  MTS-LS1              0.00E+00           6.32E-84
  DE                   0.00E+00           6.29E-84
  Solis and Wets       0.00E+00           4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  NLLS unconstrained   1.34E-12 (√)       4.35E-83 (√)
  NLLS constrained     0.00E+00 (√)       4.35E-83 (√)
  GA                   0.00E+00 (√)       4.35E-83 (√)
  GODE                 0.00E+00 (√)       4.35E-83 (√)
  Self-Adaptive DE     0.00E+00 (√)       4.35E-83 (√)
  MTS-LS1              0.00E+00 (√)       4.35E-83 (√)
  DE                   0.00E+00 (√)       4.35E-83 (√)
  Solis and Wets       0.00E+00 (√)       3.80E-110 (√)

  √: statistical differences at significance level α = 0.05.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms. Figure 4 shows an example of curve-fit for every equation, fitted using the corresponding best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

(2) The curve fitted for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems to be small enough that the overall fitness is not affected, but it implies that the adjusted curves apparently model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulation ones (Figure 4(f)). This is due to the fact that, when evaluating the fitness value, not all the data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10000-point curves per simulation; testing all of these points would make the fitness calculation an extremely heavy computational task. To avoid this, the simulation data were sampled before curve-fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11], for the same reasons. In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that pass very close to most test points. The shapes of the resulting curves, however, are completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms on the 2-term exponential function.

  Algorithm            Mean fitness   Ranking   nWins
  Self-Adaptive DE     9.97E-01       1.69       8
  DE                   9.96E-01       1.86       6
  IPOP-CMA-ES          9.97E-01       2.46       4
  Solis and Wets       7.32E-01       4.84       2
  MTS-LS1              6.27E-01       5.43       0
  NLLS unconstrained   6.20E-01       5.73      -2
  NLLS constrained     3.47E-01       7.25      -4
  GODE                 2.25E-01       7.68      -6
  GA                   1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

  Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
  DE                        3.38E-01           6.11E-10
  IPOP-CMA-ES               9.00E-06           9.79E-54
  Solis and Wets            0.00E+00           6.31E-84
  MTS-LS1                   0.00E+00           6.26E-84
  NLLS unconstrained        0.00E+00           6.31E-84
  NLLS constrained          0.00E+00           6.32E-84
  GODE                      0.00E+00           6.04E-84
  GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

  Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
  DE                        3.38E-01            6.11E-10 (√)
  IPOP-CMA-ES               1.80E-05 (√)        1.96E-53 (√)
  Solis and Wets            0.00E+00 (√)        4.23E-83 (√)
  MTS-LS1                   0.00E+00 (√)        4.23E-83 (√)
  NLLS unconstrained        0.00E+00 (√)        4.23E-83 (√)
  NLLS constrained          0.00E+00 (√)        4.23E-83 (√)
  GODE                      0.00E+00 (√)        4.23E-83 (√)
  GA                        0.00E+00 (√)        4.21E-83 (√)

  √: statistical differences at significance level α = 0.05. *: p values adjusted according to Holm's method.

[Figure 4: six panels of open AMPA receptors versus time (ms), one per curve model: (a) 8-term Gauss series (adjusted R^2 = 0.999), (b) 4-by-4 degree polynomial rational function (adjusted R^2 = 0.999), (c) 2-term exponential function (adjusted R^2 = 0.997), (d) 9th degree polynomial (adjusted R^2 = 0.836), (e) 8-term Fourier series (adjusted R^2 = 0.976), (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.

Table 17: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms on the 9th degree polynomial.

  Algorithm            Mean fitness   Ranking   nWins
  NLLS unconstrained   8.43E-01       2.44       8
  Self-Adaptive DE     8.42E-01       2.50       5
  DE                   8.42E-01       2.50       5
  NLLS constrained     8.02E-01       2.72       2
  IPOP-CMA-ES          8.26E-01       4.95       0
  GA                   0.00E+00       7.48      -5
  Solis and Wets       0.00E+00       7.48      -5
  MTS-LS1              0.00E+00       7.48      -5
  GODE                 0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

  NLLS unconstrained versus   Friedman p value   Wilcoxon p value
  Self-Adaptive DE            7.42E-01           6.52E-05
  DE                          7.42E-01           6.52E-05
  NLLS constrained            1.08E-01           8.91E-06
  IPOP-CMA-ES                 0.00E+00           5.34E-84
  GA                          0.00E+00           5.92E-84
  Solis and Wets              0.00E+00           5.92E-84
  MTS-LS1                     0.00E+00           5.92E-84
  GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

  NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
  Self-Adaptive DE            1.00E+00            1.30E-04 (√)
  DE                          1.00E+00            1.30E-04 (√)
  NLLS constrained            3.25E-01            2.67E-05 (√)
  IPOP-CMA-ES                 0.00E+00 (√)        4.27E-83 (√)
  GA                          0.00E+00 (√)        4.27E-83 (√)
  Solis and Wets              0.00E+00 (√)        4.27E-83 (√)
  MTS-LS1                     0.00E+00 (√)        4.27E-83 (√)
  GODE                        0.00E+00 (√)        4.27E-83 (√)

  √: statistical differences at significance level α = 0.05. *: p values adjusted according to Holm's method.

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons with the other algorithms for all the models.

  Algorithm            Mean fitness   Ranking   nWins
  IPOP-CMA-ES          9.48E-01       2.44       8
  Self-Adaptive DE     8.21E-01       2.86       6
  DE                   7.71E-01       3.62       4
  NLLS unconstrained   7.80E-01       4.74       2
  NLLS constrained     6.65E-01       5.46       0
  GA                   5.16E-01       5.83      -2
  GODE                 4.96E-01       5.93      -4
  MTS-LS1              4.53E-01       6.61      -6
  Solis and Wets       2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

  IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
  Self-Adaptive DE     4.15E-08           5.01E-05
  DE                   0.00E+00           2.16E-86
  NLLS unconstrained   0.00E+00           4.81E-306
  NLLS constrained     0.00E+00           0.00E+00
  GA                   0.00E+00           0.00E+00
  GODE                 0.00E+00           0.00E+00
  MTS-LS1              0.00E+00           0.00E+00
  Solis and Wets       0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

  IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
  Self-Adaptive DE     4.15E-08 (√)        5.01E-05 (√)
  DE                   0.00E+00 (√)        4.31E-86 (√)
  NLLS unconstrained   0.00E+00 (√)        1.44E-305 (√)
  NLLS constrained     0.00E+00 (√)        0.00E+00 (√)
  GA                   0.00E+00 (√)        0.00E+00 (√)
  GODE                 0.00E+00 (√)        0.00E+00 (√)
  MTS-LS1              0.00E+00 (√)        0.00E+00 (√)
  Solis and Wets       0.00E+00 (√)        0.00E+00 (√)

  √: statistical differences at significance level α = 0.05. *: p values adjusted according to Holm's method.

Table 23: Mean fitness, average ranking, and number of wins (nWins) in pairwise comparisons for the best algorithm for each AMPA model.

  Model                                         Mean fitness   Ranking   nWins
  8-term Gauss series                           9.99E-01       1.63       4
  4-by-4 degree polynomial rational function    9.99E-01       1.67       2
  2-term exponential function                   9.97E-01       2.73       0
  8-term Fourier series                         9.44E-01       3.99      -2
  9th degree polynomial                         8.43E-01       4.97      -4

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve. Among the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

    AMPA_{open}(t) = 51.749 \times e^{-2.716 t} - 53.540 \times e^{-30.885 t}.    (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.

4. Conclusions

In this paper we have compared several metaheuristics inthe problem of fitting curves to match the experimentaldata coming from a synapse simulation process Each oneof the 500 synaptic configurations used represents a differentsynapse and produces two separate problem curves that needto be treated independently (glutamate concentration andAMPA open receptors) Several different functions have beenused and the selected metaheuristics have tried to find thebest fitting to each of them We have treated all curves fromthe same problem in our simulation as a single data set tryingto find the best combination of function andmetaheuristic forthe entire problem Furthermore in order to show that moreadvanced metaheuristics can improve currently used state-of-the-art techniques such as theNLLS algorithmwhichwasused in [11] we have conducted an extensive experimentationthat yields significant results First it has been proved that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; this comparison can therefore be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit very accurately a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can find application in future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled on these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.
[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.
[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.
[4] J. Boucher, H. Kroger, and A. Sik, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.
[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.
[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.
[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.
[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.
[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.
[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.
[11] J. Montes, E. Gomez, A. Merchan-Perez, J. DeFelipe, and J.-M. Pena, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.
[12] A. LaTorre, S. Muelas, and J. M. Pena, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.
[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.
[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.
[16] A. LaTorre, S. Muelas, J.-M. Pena, R. Santana, A. Merchan-Perez, and J.-R. Rodriguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.
[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Pena, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.
[18] S. Muelas, J. M. Pena, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.
[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing – Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.
[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.
[21] A. LaTorre, S. Muelas, and J. M. Pena, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.
[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.
[23] S. Muelas, A. LaTorre, and J.-M. Pena, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.
[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.
[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.
[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.
[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.
[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.
[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.
[30] Centro de Supercomputacion y Visualizacion de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.
[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.
[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.
[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.
[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.
[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre–Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.
[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.
[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.
[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.
[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.
[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.
[42] R. Storn and K. V. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.
[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.
[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.
[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.
[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.
[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.
[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.
[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.
[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.
[54] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] A. LaTorre, S. Muelas, and J.-M. Pena, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.
[57] S. Muelas, J. M. Pena, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.




2.2. Algorithms Used in the Comparison. This section reviews the algorithms considered for the previously described curve-fitting problem, including the reference baseline algorithm (Nonlinear Least Squares) used in previous studies [11] and several metaheuristics (five Evolutionary Algorithms and two local searches) whose performance will be compared in Section 3.

2.2.1. Baseline Algorithm: Nonlinear Least Squares. The Nonlinear Least Squares (NLLS) algorithm [36, 37] is frequently used in some forms of nonlinear regression and is available in multiple mathematical and statistical software packages, such as MATLAB [38] or R [39]. Loosely speaking, this algorithm first approximates the model by a linear one and then refines the parameters by successive iterations. This method was successfully used in a seminal work [11] and has thus been selected as the baseline algorithm against which to test the performance of the metaheuristics reviewed in the following sections.
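For illustration only, the minimal sketch below fits a 2-term exponential model (one of the curve models studied later) with a standard NLLS routine from SciPy. It is not the MATLAB/R code used in the original study, and the synthetic data, noise level, and starting point are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit  # standard NLLS (Levenberg-Marquardt by default)

def two_term_exp(t, a, b, c, d):
    """2-term exponential model: a*exp(b*t) + c*exp(d*t)."""
    return a * np.exp(b * t) + c * np.exp(d * t)

# Synthetic reference curve standing in for one simulated AMPA trace (assumption).
t = np.linspace(0.0, 5.0, 100)
y = two_term_exp(t, 51.749, -2.716, -53.540, -30.885)
y_noisy = y + np.random.default_rng(0).normal(0.0, 0.3, t.size)

# NLLS fit from a rough initial guess; poor guesses can stall in local optima,
# which is the kind of behaviour discussed for the baseline in the text.
p0 = [40.0, -1.0, -40.0, -10.0]
params, _ = curve_fit(two_term_exp, t, y_noisy, p0=p0, maxfev=10000)
print("fitted (a, b, c, d):", params)
```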

2.2.2. Selected Metaheuristics

Genetic Algorithm. Genetic Algorithms (GAs) [40] are among the most famous and widely used nature-inspired algorithms in black-box optimization. The generality of their operators makes it possible to use GAs in both continuous and discrete domains. For our experiments, we have used a Real-Coded GA with the BLX-α crossover (α = 0.5) and Gaussian mutation [41]. Different population sizes and operator probabilities were tested, as described in Section 3.
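A minimal sketch of these two operators is given below; the mutation step size and per-gene probability shown are illustrative values rather than the tuned settings of Table 1, while the [-500, 500] clipping follows the constrained search interval used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

def blx_alpha(parent1, parent2, alpha=0.5, low=-500.0, high=500.0):
    """BLX-alpha crossover: sample each gene uniformly from the parents'
    interval extended by alpha on both sides, then clip to the search range."""
    c_min = np.minimum(parent1, parent2)
    c_max = np.maximum(parent1, parent2)
    span = c_max - c_min
    child = rng.uniform(c_min - alpha * span, c_max + alpha * span)
    return np.clip(child, low, high)

def gaussian_mutation(individual, pm=0.05, sigma=1.0, low=-500.0, high=500.0):
    """Mutate each gene with probability pm by adding Gaussian noise."""
    mask = rng.random(individual.size) < pm
    mutated = individual + mask * rng.normal(0.0, sigma, individual.size)
    return np.clip(mutated, low, high)

# Example: parents encoding the 4 coefficients of the 2-term exponential model.
p1 = rng.uniform(-500, 500, 4)
p2 = rng.uniform(-500, 500, 4)
print(gaussian_mutation(blx_alpha(p1, p2, alpha=0.5)))
```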

Differential Evolution. Differential Evolution (DE) [42] is one of the most powerful algorithms used in continuous optimization. Its flexibility and simplicity (it has only three free parameters to configure) have earned DE a lot of popularity in recent years, and it is used both in its canonical form and in more advanced variants (as we will see below) in many different complex optimization scenarios. For our experiments, we have considered a DE algorithm with exponential crossover and, as for the GA, different values have been tried for its three free parameters: population size (NP), F, and CR.
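The sketch below illustrates DE with exponential crossover applied to a curve-fitting objective, using SciPy's built-in implementation as a stand-in for the canonical DE described here; note that SciPy's popsize argument is a per-dimension multiplier, so this is not an exact reproduction of the NP, F, and CR configuration discussed in the text.

```python
import numpy as np
from scipy.optimize import differential_evolution

def two_term_exp(t, a, b, c, d):
    return a * np.exp(b * t) + c * np.exp(d * t)

# Reference curve standing in for one sampled simulation trace (assumption).
t = np.linspace(0.0, 5.0, 100)
y_ref = two_term_exp(t, 51.749, -2.716, -53.540, -30.885)

def objective(coeffs):
    """Residual sum of squares between candidate curve and reference data."""
    return np.sum((y_ref - two_term_exp(t, *coeffs)) ** 2)

# Search constrained to [-500, 500] per coefficient, as in the experiments described.
bounds = [(-500.0, 500.0)] * 4
result = differential_evolution(objective, bounds, strategy="rand1exp",
                                popsize=25, mutation=0.5, recombination=0.9,
                                seed=1, tol=1e-10, maxiter=2000)
print(result.x, result.fun)
```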

Self-Adaptive Differential Evolution. This is one of the variants of the aforementioned canonical DE algorithm. The Self-Adaptive Differential Evolution algorithm proposed in [43] does not use fixed F and CR parameters but instead allows the algorithm to adjust their values with a certain probability, controlled by τ_F and τ_CR, respectively.
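A sketch of this self-adaptation rule, following [43], is shown below; the τ values and the [F_l, F_l + F_u] resampling range are the common defaults from that reference, whereas the values actually used here are tuned in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def jde_update(F, CR, tau_F=0.1, tau_CR=0.1, F_l=0.1, F_u=0.9):
    """Self-adaptive rule of Brest et al. [43]: with probability tau_F the
    individual's F is resampled in [F_l, F_l + F_u]; with probability tau_CR
    its CR is resampled in [0, 1]; otherwise the old values are inherited."""
    if rng.random() < tau_F:
        F = F_l + rng.random() * F_u
    if rng.random() < tau_CR:
        CR = rng.random()
    return F, CR

# Example: evolve the control parameters attached to one individual over 5 generations.
F, CR = 0.5, 0.9
for _ in range(5):
    F, CR = jde_update(F, CR)
    print(round(F, 3), round(CR, 3))
```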

Generalized Opposition-Based Differential Evolution. This is the second variant of the DE algorithm used in our study. The Generalized Opposition-Based Differential Evolution (GODE) algorithm [44] uses the Opposition-Based Learning concept, which basically means that, with some probability, the algorithm evaluates not only the solutions in the current population but also the opposite ones. Apart from the common DE parameters, this algorithm adds a new parameter to control the probability of applying the opposite search.

Solis and Wets Algorithm. The well-known Solis and Wets algorithm [45] has also been included in our study. This direct search method performs a randomized local minimization of a candidate solution with an adaptive step size. Several parameters rule the way the candidate solution is moved and how the step size is adapted. All these parameters are subject to parameter tuning, as for the rest of the algorithms.

MTS-LS1 Algorithm. MTS-LS1 is the first of the three local searches combined by the MTS algorithm [46]. This algorithm tries to optimize each dimension of the problem independently and has been successfully used in the past on large scale global optimization problems. As with the Solis and Wets algorithm, several parameters control the movements of the local search, and all of them will be adjusted.

Covariance Matrix Adaptation Evolution Strategy. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [47] is an evolution strategy that adapts the full covariance matrix of a normal search distribution. Among its advantages are that it exhibits the same performance on an objective function whether or not a linear transformation has been applied, and its robustness against ill-conditioned and nonseparable problems [48]. To increase the probability of finding the global optimum, the increasing population size IPOP-CMA-ES algorithm was proposed in [49]. This algorithm uses a restart method with a successively increasing population size each time the stopping criterion is satisfied. It was ranked first in the "Special Session on Real-Parameter Optimization" held at the CEC 2005 Congress and has since been applied to several optimization problems. For the proposed experiments, we have considered both CMA-ES and IPOP-CMA-ES with several values of the sigma parameter. The active covariance matrix adaptation mechanism, which actively decreases variances in especially unsuccessful directions to accelerate convergence, has also been tested with both algorithms.
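The following sketch shows an IPOP-style run on a curve-fitting objective using the third-party pycma package (an assumption; the paper does not state which CMA-ES implementation was used). Each restart increases the population size, mirroring the strategy of [49]; sigma0 = 1 is one of the tested values in Table 1, chosen here only for the example.

```python
import numpy as np
import cma  # third-party "pycma" package (assumption, not the paper's implementation)

def two_term_exp(t, a, b, c, d):
    return a * np.exp(b * t) + c * np.exp(d * t)

t = np.linspace(0.0, 5.0, 100)
y_ref = two_term_exp(t, 51.749, -2.716, -53.540, -30.885)

def objective(coeffs):
    """Residual sum of squares for a candidate coefficient vector."""
    return float(np.sum((y_ref - two_term_exp(t, *coeffs)) ** 2))

# IPOP-style run: each restart multiplies the population size by incpopsize.
x0 = np.zeros(4)     # initial mean of the search distribution
sigma0 = 1.0         # one of the sigma values listed in Table 1 (illustrative choice)
best_x, es = cma.fmin2(objective, x0, sigma0,
                       options={"bounds": [-500, 500], "verbose": -9},
                       restarts=5, incpopsize=2)
print(best_x, objective(best_x))
```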

2.3. Comparison Metric. To assess the accuracy of our curve-fitting solutions, we based our study on the commonly used coefficient of determination (R²). This is the proportion of variability in a data set that is accounted for by the statistical model, and it provides a measure of how well future outcomes are likely to be predicted by the model. R² usually takes a value between 0 and 1 (sometimes it can be less than 0), where 1 indicates an exact fit to the reference data and a value less than or equal to 0 indicates no fit at all.

R² is calculated as follows: given a reference data set X with N values x_i, each of which has an associated predicted value x'_i, the total sum of squares (TSS) and the residual sum of squares (RSS) are defined as

\mathrm{TSS} = \sum_i (x_i - \bar{x})^2, \qquad \mathrm{RSS} = \sum_i (x_i - x'_i)^2.    (15)


And R² is expressed as

R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}.    (16)

The coefficient of determination, however, does not take into account the complexity of the regression model or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects we adopted the adjusted coefficient of determination (R̄²) [50], which takes into account the complexity of the regression model and the number of observations. R̄² is calculated as

\bar{R}^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}} \times \frac{N - 1}{N - m},    (17)

where N is the size of the sample set and m is the number of coefficients of the regression model. As in the case of R², a value closer to 1 means a better fit. We used R̄² as the fitness metric for all our curve-fitting calculations.
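A direct transcription of (17) into code is shown below; the sample values at the end are dummy data used only to exercise the function.

```python
import numpy as np

def adjusted_r2(x_ref, x_pred, n_coeffs):
    """Adjusted coefficient of determination used as the fitness metric:
    R_adj^2 = 1 - (RSS/TSS) * (N - 1) / (N - m)."""
    x_ref = np.asarray(x_ref, dtype=float)
    x_pred = np.asarray(x_pred, dtype=float)
    n = x_ref.size
    tss = np.sum((x_ref - x_ref.mean()) ** 2)
    rss = np.sum((x_ref - x_pred) ** 2)
    return 1.0 - (rss / tss) * (n - 1) / (n - n_coeffs)

# Example with dummy data (values are illustrative only).
ref = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
print(adjusted_r2(ref, pred, n_coeffs=4))  # m = 4, e.g., the 2-term exponential model
```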

2.4. Parameter Tuning. All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices, according to the Taguchi method [51]. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring the sensitivity of the quality characteristic being investigated, in a controlled manner, to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high value of the SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall into the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as R̄². For this category, the SN ratio estimate is defined in (18), where n denotes the total number of instances and y_1, y_2, ..., y_n the target values (the adjustment of the model in this case):

\mathrm{SN} = -10 \log_{10}\left(\frac{1}{n}\sum_{t=1}^{n}\frac{1}{y_t^{2}}\right).    (18)

This method allows the execution of a limited number of configurations while still providing significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations depends on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those identified by the Taguchi method as the best ones from the SN ratio point of view.
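A small helper implementing the larger-is-better SN ratio of (18) is sketched below; the fitness values in the example are placeholders standing in for the adjusted-R² results of one parameter configuration on the tuning curves.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for the 'bigger is better' category,
    computed from the fitness (adjusted R^2) values of one configuration."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Placeholder fitness values for one configuration on a subset of tuning curves.
fitness = np.array([0.98, 0.97, 0.99, 0.95, 0.99])
print(sn_larger_is_better(fitness))
```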

2.5. Statistical Validation. To validate the results obtained and the conclusions derived from them, it is very important to use the appropriate statistical tests. In this paper we conducted a thorough statistical validation, which is discussed in this section.

First, the nonparametric Friedman test [52] was used to determine whether significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, then we continued with the comparison and used post hoc tests. In particular, we report the p values for two nonparametric post hoc tests: the Friedman [52–54] and Wilcoxon [55] tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should be considered in order to control the family-wise error rate. In our experimentation we considered Holm's procedure [54], which has been successfully used in several previous studies [12, 56].

Apart from the p values reported by the statistical tests, we also report the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance on each function and report the average ranking computed over all the functions. Given the following mean performances on a benchmark of three functions for algorithms A and B, A = (0.00e+00, 1.27e+01, 3.54e-03) and B = (3.72e+01, 0.42e+00, 2.19e-07), their relative rankings would be Ranks_A = (1, 2, 2) and Ranks_B = (2, 1, 1), and thus their corresponding average rankings are R_A = 1.67 and R_B = 1.33.

(ii) nWins: the number of other algorithms for which each algorithm is statistically better, minus the number of algorithms for which it is statistically worse, according to the Wilcoxon Signed Rank Test in a pair-wise comparison [57] (see the sketch after this list).

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.
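The sketch below illustrates how the average ranking and nWins described above can be computed; the three algorithms and six curves are dummy placeholders, and deciding the direction of a significant pairwise difference by comparing means is a simplification made for the example.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

# results[algorithm] -> per-curve fitness values (placeholder data, aligned across algorithms).
results = {
    "IPOP-CMA-ES":      np.array([0.99, 0.98, 0.995, 0.97, 0.96, 0.99]),
    "Self-Adaptive DE": np.array([0.97, 0.99, 0.960, 0.95, 0.94, 0.98]),
    "NLLS":             np.array([0.90, 0.92, 0.910, 0.89, 0.93, 0.91]),
}
names = list(results)
matrix = np.vstack([results[n] for n in names])            # algorithms x curves

# Average Friedman-style ranking (rank 1 = best fitness on that curve).
ranks = np.vstack([rankdata(-matrix[:, j]) for j in range(matrix.shape[1])]).T
avg_rank = ranks.mean(axis=1)

# nWins: pairwise Wilcoxon signed-rank tests at alpha = 0.05.
alpha = 0.05
n_wins = {n: 0 for n in names}
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i == j:
            continue
        stat, p = wilcoxon(matrix[i], matrix[j])
        if p < alpha:
            # Direction decided here by the mean, as a simplification.
            n_wins[a] += 1 if matrix[i].mean() > matrix[j].mean() else -1

for n, r in zip(names, avg_rank):
    print(f"{n:18s} avg rank = {r:.2f}  nWins = {n_wins[n]}")
```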

3. Results and Discussion

We have conducted a thorough experimentation to elucidate whether the selected metaheuristics can be successfully applied to the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curve problems and, in the case of the latter, the five different curve models presented in Section 2.1.2.

First, a parameter tuning of the seven selected metaheuristics was carried out, whose results are presented in Section 3.1. Then, final runs with the best configuration of each algorithm were performed; their results are presented in Sections 3.2 and 3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.


3.1. Parameter Tuning of the Selected Metaheuristics. As described in Section 2.4, our experimentation followed a fractional design based on orthogonal matrices according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case) while still yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of curves (10 randomly chosen curves) prior to using the selected values in the overall comparison. Table 1 lists, for each algorithm, all the relevant parameters and the values tested for each of them, among which the best configuration was selected according to the measures reported by the Taguchi method.

Once the best configuration for each algorithm had been determined, we ran each of them on both synaptic curve problems, for all the models and curves, conducting 25 repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to the interval [-500, 500] to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem. As stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the υ coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with little difficulty by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with a precision of 99%. The only three algorithms not reaching that precision get trapped a few times in low-quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, in both its constrained and unconstrained versions.

Due to the nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point that suggests that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem. To provide more insight into the results obtained for the AMPA open receptors curves problem, we decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as one large validation data set. The results show that there are three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) which systematically rank among the best algorithms and, in most cases, are significantly better than the baseline algorithm, NLLS.

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function. The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the

Table 1: Parameter tuning of the selected metaheuristics.

Parameter values of GA
  popSize: 100, 200, 400
  pcx: 0.1, 0.5, 0.9
  pm: 0.01, 0.05, 0.1

Parameter values of DE
  popSize: 25, 50, 100
  F: 0.1, 0.5, 0.9
  CR: 0.1, 0.5, 0.9

Parameter values of Self-Adaptive DE
  popSize: 25, 50, 100
  τ_F: 0.05, 0.1, 0.2
  τ_CR: 0.05, 0.1, 0.2

Parameter values of GODE
  popSize: 25, 50, 100
  F: 0.1, 0.5, 0.9
  CR: 0.1, 0.5, 0.9
  godeProb: 0.2, 0.4, 0.8

Parameter values of Solis and Wets
  maxSuccess: 2, 5, 10
  maxFailed: 1, 3, 5
  adjustSuccess: 2, 4, 6
  adjustFailed: 0.25, 0.5, 0.75
  delta: 0.6, 1.2, 2.4

Parameter values of MTS-LS1
  (initial) SR: 50% of the search space
  (min) SR: 1e-14
  adjustFailed: 2, 3, 5
  adjustMin: 2.5, 5, 10
  moveLeft: 0.25, 0.5, 1
  moveRight: 0.25, 0.5, 1

Parameter values of CMA-ES
  σ: 0.01, 0.1, 1
  Active CMA: no, yes
  CMA-ES variant: CMA-ES, IPOP-CMA-ES

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the glutamate concentration problem.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.90E-01       4.16       3
DE                   9.90E-01       4.16       3
GA                   9.90E-01       4.16       3
Solis and Wets       9.90E-01       4.16       3
GODE                 9.90E-01       4.16       3
IPOP-CMA-ES          9.90E-01       4.16       3
MTS-LS1              9.83E-01       4.23      -4
NLLS constrained     9.53E-01       7.91      -7
NLLS unconstrained   9.53E-01       7.91      -7

algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE), which reach a precision as high as 99.9%. The p values reported by the statistical validation (Table 6 for unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE, according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as the results in the following sections will confirm.

3.3.2. Results with the 8-Term Fourier Series. For the 8-term Fourier series, the results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function the attained precision decreases slightly (from 72% for the DE algorithm to 94% for the Self-Adaptive DE), but again several metaheuristics (all the population-based algorithms, which also include the GA) outperformed the reference NLLS algorithm (in both constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions, compared to the 9 of the previous model). Also in this problem, the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend started in the previous two cases and followed in the remaining two functions. Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which once more reaches a very high precision of 99%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm on this problem (with a precision of more than 99% and the highest ranking and number of wins), as shown in Table 14. As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with the other problems. In this case, the improvement is probably due to the small problem size, which allows an easier optimization component by component. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems) a strong attractor and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance (IPOP-CMA-ES, Self-Adaptive DE, and DE) plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge, with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES, to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision on this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process; Section 3.4 discusses this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, the Friedman test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.

3.3.6. Overall Comparison. In this section we offer an overall comparison of the algorithms on the five AMPA open receptors curve problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results. From these data, it is clear that the IPOP-CMA-ES algorithm obtains the best

Figure 3: Example of a curve-fit solution for the glutamate concentration problem (glutamate concentration in mol·L-1 versus time in ms). The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, R̄² = 0.990.

results, with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also relevant is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which shows significant differences with respect to all the algorithms under both statistical tests.

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, provided that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case, since general-purpose, commonly used algorithms (such as the NLLS used in [11])

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        1.00E+00           1.00E+00
GA                        1.00E+00           1.00E+00
Solis and Wets            1.00E+00           1.00E+00
GODE                      1.00E+00           1.00E+00
IPOP-CMA-ES               1.00E+00           1.00E+00
MTS-LS1                   6.57E-01           8.98E-03
NLLS constrained          0.00E+00           1.47E-70
NLLS unconstrained        0.00E+00           1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        1.00E+00            1.00E+00
GA                        1.00E+00            1.00E+00
Solis and Wets            1.00E+00            1.00E+00
GODE                      1.00E+00            1.00E+00
IPOP-CMA-ES               1.00E+00            1.00E+00
MTS-LS1                   1.00E+00            5.39E-02
NLLS constrained          0.00E+00 √          1.18E-69 √
NLLS unconstrained        0.00E+00 √          1.18E-69 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example, the glutamate initial concentration was C_0 = 4.749 × 10^{-3} mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10^{-6}, resulting in the following equation:

\Phi_A(t) = 4.749 \times 10^{-3} \times e^{-0.33 \times t / (6.577 \times 10^{-6})}.    (19)

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.

3.4.2. AMPA Open Receptors Curves. The case of the AMPA open receptors curves requires further study. The optimization results (Tables 5 to 22) show that, when the appropriate metaheuristic is used, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial). The five


Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E-01       1.86       8
Self-Adaptive DE     9.99E-01       2.08       5
DE                   9.99E-01       2.09       5
GA                   9.35E-01       4.86       2
MTS-LS1              9.00E-01       6.04       0
GODE                 9.18E-01       6.04      -2
NLLS constrained     8.60E-01       7.04      -4
Solis and Wets       4.30E-01       7.17      -6
NLLS unconstrained   8.43E-01       7.84      -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE     2.14E-01           2.03E-12
DE                   1.90E-01           3.25E-14
GA                   0.00E+00           5.52E-84
MTS-LS1              0.00E+00           6.31E-84
GODE                 0.00E+00           5.40E-84
NLLS constrained     0.00E+00           1.99E-83
Solis and Wets       0.00E+00           1.60E-86
NLLS unconstrained   0.00E+00           5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE     3.80E-01            2.03E-12 √
DE                   3.80E-01            6.50E-14 √
GA                   0.00E+00 √          3.78E-83 √
MTS-LS1              0.00E+00 √          3.78E-83 √
GODE                 0.00E+00 √          3.78E-83 √
NLLS constrained     0.00E+00 √          5.96E-83 √
Solis and Wets       0.00E+00 √          1.28E-85 √
NLLS unconstrained   0.00E+00 √          3.78E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

mathematical models studied are compared in Table 23, selecting for each the results provided by the best optimization technique. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice. In addition, we consider the nature of the biological process modeled

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.44E-01       1.26       8
IPOP-CMA-ES          9.19E-01       1.91       6
GODE                 8.13E-01       3.46       4
DE                   7.21E-01       4.34       2
GA                   6.99E-01       4.79       0
NLLS unconstrained   6.20E-01       5.46      -2
MTS-LS1              4.16E-01       7.29      -4
NLLS constrained     3.87E-01       7.49      -6
Solis and Wets       0.00E+00       9.00      -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
IPOP-CMA-ES               1.67E-04           5.89E-14
GODE                      0.00E+00           1.58E-83
DE                        0.00E+00           1.09E-83
GA                        0.00E+00           8.67E-82
NLLS unconstrained        0.00E+00           8.08E-83
MTS-LS1                   0.00E+00           6.43E-84
NLLS constrained          0.00E+00           6.36E-84
Solis and Wets            0.00E+00           6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
IPOP-CMA-ES               1.67E-04 √          5.89E-14 √
GODE                      0.00E+00 √          6.32E-83 √
DE                        0.00E+00 √          5.44E-83 √
GA                        0.00E+00 √          1.73E-81 √
NLLS unconstrained        0.00E+00 √          2.42E-82 √
MTS-LS1                   0.00E+00 √          4.97E-83 √
NLLS constrained          0.00E+00 √          4.97E-83 √
Solis and Wets            0.00E+00 √          4.97E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion, we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms. Figure 4 shows an example of curve fit for every equation, fitted using the corresponding


Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E-01       1.00       8
NLLS unconstrained   9.73E-01       2.23       6
NLLS constrained     9.31E-01       2.82       4
GA                   7.80E-01       3.95       2
GODE                 5.24E-01       5.00       0
Self-Adaptive DE     3.21E-01       6.79      -3
MTS-LS1              3.23E-01       6.85      -3
DE                   2.97E-01       7.35      -6
Solis and Wets       0.00E+00       9.00      -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained   1.34E-12           1.90E-83
NLLS constrained     0.00E+00           1.33E-83
GA                   0.00E+00           6.22E-84
GODE                 0.00E+00           6.30E-84
Self-Adaptive DE     0.00E+00           6.28E-84
MTS-LS1              0.00E+00           6.32E-84
DE                   0.00E+00           6.29E-84
Solis and Wets       0.00E+00           4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained   1.34E-12 √         4.35E-83 √
NLLS constrained     0.00E+00 √         4.35E-83 √
GA                   0.00E+00 √         4.35E-83 √
GODE                 0.00E+00 √         4.35E-83 √
Self-Adaptive DE     0.00E+00 √         4.35E-83 √
MTS-LS1              0.00E+00 √         4.35E-83 √
DE                   0.00E+00 √         4.35E-83 √
Solis and Wets       0.00E+00 √         3.80E-110 √

√ means that there are statistical differences with significance level α = 0.05.

best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.97E-01       1.69       8
DE                   9.96E-01       1.86       6
IPOP-CMA-ES          9.97E-01       2.46       4
Solis and Wets       7.32E-01       4.84       2
MTS-LS1              6.27E-01       5.43       0
NLLS unconstrained   6.20E-01       5.73      -2
NLLS constrained     3.47E-01       7.25      -4
GODE                 2.25E-01       7.68      -6
GA                   1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        3.38E-01           6.11E-10
IPOP-CMA-ES               9.00E-06           9.79E-54
Solis and Wets            0.00E+00           6.31E-84
MTS-LS1                   0.00E+00           6.26E-84
NLLS unconstrained        0.00E+00           6.31E-84
NLLS constrained          0.00E+00           6.32E-84
GODE                      0.00E+00           6.04E-84
GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        3.38E-01            6.11E-10 √
IPOP-CMA-ES               1.80E-05 √          1.96E-53 √
Solis and Wets            0.00E+00 √          4.23E-83 √
MTS-LS1                   0.00E+00 √          4.23E-83 √
NLLS unconstrained        0.00E+00 √          4.23E-83 √
NLLS constrained          0.00E+00 √          4.23E-83 √
GODE                      0.00E+00 √          4.23E-83 √
GA                        0.00E+00 √          4.21E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(2) The curve fit calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems to be small enough not to affect the overall fitness, but it implies that the adjusted curves model the synaptic phenomenon in a less realistic way than other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested


Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem (open AMPA receptors versus time in ms). All panels show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments; dashed lines show reference data obtained from simulation. (a) 8-term Gauss series (R̄² = 0.999), with a detailed view of the curve peak. (b) 4-by-4 degree polynomial rational function (R̄² = 0.999). (c) 2-term exponential function (R̄² = 0.997). (d) 9th degree polynomial (R̄² = 0.836). (e) 8-term Fourier series (R̄² = 0.976), showing an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) 8-term Fourier series (tested points only), showing the sampled points tested in the fitness evaluation to better illustrate the overfitting problem.


Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

Algorithm            Mean fitness   Ranking   nWins
NLLS unconstrained   8.43E-01       2.44       8
Self-Adaptive DE     8.42E-01       2.50       5
DE                   8.42E-01       2.50       5
NLLS constrained     8.02E-01       2.72       2
IPOP-CMA-ES          8.26E-01       4.95       0
GA                   0.00E+00       7.48      -5
Solis and Wets       0.00E+00       7.48      -5
MTS-LS1              0.00E+00       7.48      -5
GODE                 0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE            7.42E-01           6.52E-05
DE                          7.42E-01           6.52E-05
NLLS constrained            1.08E-01           8.91E-06
IPOP-CMA-ES                 0.00E+00           5.34E-84
GA                          0.00E+00           5.92E-84
Solis and Wets              0.00E+00           5.92E-84
MTS-LS1                     0.00E+00           5.92E-84
GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE            1.00E+00            1.30E-04 √
DE                          1.00E+00            1.30E-04 √
NLLS constrained            3.25E-01            2.67E-05 √
IPOP-CMA-ES                 0.00E+00 √          4.27E-83 √
GA                          0.00E+00 √          4.27E-83 √
Solis and Wets              0.00E+00 √          4.27E-83 √
MTS-LS1                     0.00E+00 √          4.27E-83 √
GODE                        0.00E+00 √          4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

when calculating the fitness value are very close tothe simulation ones (Figure 4(f)) This is due to thefact that when evaluating the fitness value not all datapoints obtained during simulation are usedThe high-resolution simulations on which this work is basedproduce 10000-point curves per simulation Testingall of these points would make fitness calculation anextremely heavy computational task To avoid thissimulation data were sampled before curve-fittingreducing the number of points per curve to 100 Anidentical sampling procedure was performed in [11]

Table 20 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms for allthe models

Mean fitness Ranking nWinsIPOP-CMA-ES 948119864 minus 01 244 8Self-Adaptive DE 821119864 minus 01 286 6DE 771119864 minus 01 362 4NLLS unconstrained 780119864 minus 01 474 2NLLS constrained 665119864 minus 01 546 0GA 516119864 minus 01 583 minus2GODE 496119864 minus 01 593 minus4MTS-LS1 453119864 minus 01 661 minus6Solis and Wets 232119864 minus 01 750 minus8

Table 21 Raw119901 values (IPOP-CMA-ES is the control algorithm) forall the models

IPOP-CMA-ES versus Friedman 119901 value Wilcoxon 119901 valueSelf-Adaptive DE 415119864 minus 08 501119864 minus 05

DE 000119864 + 00 216119864 minus 86

NLLS unconstrained 000119864 + 00 481119864 minus 306

NLLS constrained 000119864 + 00 000119864 + 00

GA 000119864 + 00 000119864 + 00

GODE 000119864 + 00 000119864 + 00

MTS-LS1 000119864 + 00 000119864 + 00

Solis and Wets 000119864 + 00 000119864 + 00

Table 22 Corrected 119901 values according to Holmrsquos method (IPOP-CMA-ES is the control algorithm) for all the models

IPOP-CMA-ES versus Friedmanlowast119901 value Wilcoxonlowast119901 valueSelf-Adaptive DE 415119864 minus 08

radic501119864 minus 05

radic

DE 000119864 + 00radic

431119864 minus 86radic

NLLS unconstrained 000119864 + 00radic

144119864 minus 305radic

NLLS constrained 000119864 + 00radic

000119864 + 00radic

GA 000119864 + 00radic

000119864 + 00radic

GODE 000119864 + 00radic

000119864 + 00radic

MTS-LS1 000119864 + 00radic

000119864 + 00radic

Solis and Wets 000119864 + 00radic

000119864 + 00radic

radicmeans that there are statistical differences with significance level 120572 = 005lowastmeans these are adjusted 119901-values according to Holmrsquos method

In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve.


Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

Model                                        Mean fitness   Ranking   nWins
8-term Gauss series                          9.99E-01       1.63       4
4-by-4 degree polynomial rational function   9.99E-01       1.67       2
2-term exponential function                  9.97E-01       2.73       0
8-term Fourier series                        9.44E-01       3.99      -2
9th degree polynomial                        8.43E-01       4.97      -4

Between the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss series and the 2-term exponential function over the 4-by-4 polynomial rational function. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

\[ \mathrm{AMPA}_{\mathrm{open}}(t) = 51.749 \times e^{-2.716\,t} - 53.540 \times e^{-30.885\,t}. \tag{20} \]

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
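To make this per-configuration fitting step concrete, the following is a minimal sketch of fitting the 2-term exponential model a·e^(b·t) + c·e^(d·t) with SciPy's nonlinear least squares. It is only an illustration: the synthetic curve and the starting point p0 are our own assumptions, and the original study used the metaheuristics compared above rather than this routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def ampa_open(t, a, b, c, d):
    """2-term exponential model for the AMPA open receptors curve."""
    return a * np.exp(b * t) + c * np.exp(d * t)

# Synthetic reference curve shaped like the solution in (20), plus noise (illustrative only)
t = np.linspace(0.0, 5.0, 100)
y = 51.749 * np.exp(-2.716 * t) - 53.540 * np.exp(-30.885 * t)
y += np.random.default_rng(1).normal(0.0, 0.2, t.size)

# Fit the four free coefficients a, b, c, d from a rough initial guess
p0 = [50.0, -1.0, -50.0, -20.0]
coeffs, _ = curve_fit(ampa_open, t, y, p0=p0, maxfev=20000)
print(dict(zip("abcd", coeffs)))
```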

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit to each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve on currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted an extensive experimentation that yields significant results. First, it has been shown that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; thus, this comparison can be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit very accurately a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications for future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17-54, 2007.
[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277-1340, 2008.
[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359-369, 2011.
[4] J. Boucher, H. Kroger, and A. Sik, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49-65, 2010.
[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.
[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87-127, CRC Press, 2001.
[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126-3149, 2008.
[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305-309, 2005.
[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137-151, 2004.
[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.
[11] J. Montes, E. Gomez, A. Merchan-Perez, J. DeFelipe, and J.-M. Pena, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1-14, 2013.
[12] A. LaTorre, S. Muelas, and J. M. Pena, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.
[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140-142, 2005.
[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265-5271, 2010.
[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1-8, July 2010.
[16] A. LaTorre, S. Muelas, J.-M. Pena, R. Santana, A. Merchan-Perez, and J.-R. Rodriguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687-1694, IEEE, New Orleans, La, USA, June 2011.
[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Pena, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747-1754, Dublin, Ireland, July 2011.
[18] S. Muelas, J. M. Pena, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.
[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing - Asia Simulation Conference (ICSC '08), pp. 820-826, IEEE, Beijing, China, October 2008.
[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034-1040, June 2011.
[21] A. LaTorre, S. Muelas, and J. M. Pena, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027-1033, IEEE, New Orleans, La, USA, June 2011.
[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129-1138, 2010.
[23] S. Muelas, A. LaTorre, and J.-M. Pena, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516-5531, 2013.
[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122-133, 2014.
[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154-169, 2015.
[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56-76, 2012.
[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809-2812, Sichuan, China, May 2012.
[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491-1504, 2013.


[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451-458, Springer, Berlin, Germany, 2013.
[30] Centro de Supercomputacion y Visualizacion de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.
[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615-663, 1993.
[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333-2348, 2002.
[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158-3170, 1998.
[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584-4596, 2008.
[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre–Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75-92, 2003.
[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.
[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.
[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.
[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.
[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43-63, 2000.
[42] R. Storn and K. V. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646-657, 2006.
[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090-1095, Pisa, Italy, December 2009.
[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19-30, 1981.
[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052-3059, IEEE, Hong Kong, June 2008.
[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312-317, May 1996.
[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159-195, 2001.
[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769-1776, IEEE Press, New York, NY, USA, September 2005.
[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.
[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.
[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.
[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.
[54] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617-644, 2009.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.
[56] A. LaTorre, S. Muelas, and J.-M. Pena, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295-309, 2013.
[57] S. Muelas, J. M. Pena, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199-206, Springer, Salamanca, Spain, 2007.


And R^2 is expressed as

\[ R^2 = 1 - \frac{\mathrm{RSS}}{\mathrm{TSS}}. \tag{16} \]

The coefficient of determination, however, does not take into account regression model complexity or the number of observations. This is particularly important in our case, since we want to compare the performance of very different curve models, for example, the 2-term exponential function, which has 4 coefficients, against the 8-term Gauss series, which has 25 coefficients. To incorporate these aspects we adopted the adjusted coefficient of determination (adjusted R^2) [50], which takes into account the complexity of the regression model and the number of observations. The adjusted R^2 is calculated as follows:

\[ \bar{R}^2 = 1 - \left( \frac{\mathrm{RSS}}{\mathrm{TSS}} \times \frac{N-1}{N-m} \right), \tag{17} \]

where N is the size of the sample set and m is the number of coefficients of the regression model. As in the case of R^2, a value closer to 1 means a better fit. We used the adjusted R^2 as the fitness metric for all our curve-fitting calculations.
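As an illustration, the adjusted R^2 fitness of (17) can be computed as in the following sketch; the function and array names are our own and this is not the authors' original implementation.

```python
import numpy as np

def adjusted_r2(y_obs, y_pred, n_coeffs):
    """Adjusted coefficient of determination used as fitness, as in (17).

    y_obs    -- observed (simulated) curve points
    y_pred   -- values predicted by the candidate curve model
    n_coeffs -- number of free coefficients m of the model
    """
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rss = np.sum((y_obs - y_pred) ** 2)        # residual sum of squares
    tss = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
    n = y_obs.size
    return 1.0 - (rss / tss) * (n - 1) / (n - n_coeffs)

# Example: a perfect prediction yields exactly 1.0
y = np.array([0.1, 0.4, 0.9, 1.6])
print(adjusted_r2(y, y, n_coeffs=4))
```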

2.4. Parameter Tuning. All the experimentation reported in this paper has followed a fractional design based on orthogonal matrices according to the Taguchi method [51]. In this method, the concept of signal-to-noise ratio (SN ratio) is introduced for measuring the sensitivity of the quality characteristic being investigated, in a controlled manner, to those external influencing factors (noise factors) not under control. The aim of the experiment is to determine the highest possible SN ratio for the results, since a high value of the SN ratio implies that the signal is much higher than the random effects of the noise factors. From the quality point of view, there are three possible categories of quality characteristics: (i) smaller is better, (ii) nominal is best, and (iii) bigger is better. The obtained results fall in the "bigger is better" category, since the objective is to maximize the adjustment of the model to the observed values, measured as the adjusted R^2. For this category, the SN ratio estimate is defined in (18), where n denotes the total number of instances and y_1, y_2, ..., y_n the target values (the adjustment of the model in this case):

\[ \mathrm{SN} = -10 \log_{10} \left( \frac{1}{n} \sum_{t=1}^{n} \frac{1}{y_t^2} \right). \tag{18} \]

This method allows the execution of a limited number of configurations and still reports significant information on the best combination of parameter values. In particular, a maximum of 27 different configurations were tested for each algorithm on the whole set of models and a subset of the curves (the exact number of configurations depends on the number of parameters to be tuned). In Section 3.1 we report the tested values for each algorithm and highlight those reported by the Taguchi method as the best ones from the SN ratio point of view.
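A minimal sketch of this larger-is-better SN ratio estimate follows, assuming the standard Taguchi form of (18); variable names are illustrative.

```python
import numpy as np

def sn_ratio_larger_is_better(y):
    """Taguchi SN ratio for the 'bigger is better' category, as in (18).

    y -- quality-characteristic values (here, the adjusted R^2 of each run)
    """
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Configurations whose fits are consistently close to 1 score higher (closer to 0 dB)
print(sn_ratio_larger_is_better([0.99, 0.98, 0.99]))
print(sn_ratio_larger_is_better([0.70, 0.95, 0.60]))  # clearly lower (more negative)
```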

2.5. Statistical Validation. To validate the results obtained and the conclusions derived from them, it is very important to use the appropriate statistical tests for this purpose. In this paper we conducted a thorough statistical validation that is discussed in this section.

First, the nonparametric Friedman's test [52] was used to determine whether significant differences exist among the performance of the algorithms involved in the experimentation, by comparing their median values. If such differences existed, then we continued with the comparison and used some post hoc tests. In particular, we reported the p values for two nonparametric post hoc tests: the Friedman [52-54] and Wilcoxon [55] tests. Finally, as multiple algorithms are involved in the comparison, adjusted p values should rather be considered in order to control the familywise error rate. In our experimentation we considered Holm's procedure [54], which has been successfully used in the past in several studies [12, 56].

Apart from the p values reported by the statistical tests, we also reported the following values for each algorithm:

(i) Overall ranking according to the Friedman test: we computed the relative ranking of each algorithm according to its mean performance for each function and reported the average ranking computed through all the functions. Given the following mean performance in a benchmark of three functions for algorithms A and B, A = (0.00e+00, 1.27e+01, 3.54e-03) and B = (3.72e+01, 0.42e+00, 2.19e-07), their relative rankings would be Ranks_A = (1, 2, 2) and Ranks_B = (2, 1, 1), and thus their corresponding average rankings are R_A = 1.67 and R_B = 1.33 (a sketch of this computation is given at the end of this section).

(ii) nWins: this is the number of other algorithms for which each algorithm is statistically better, minus the number of algorithms for which it is statistically worse, according to the Wilcoxon signed-rank test in a pair-wise comparison [57].

This meticulous validation procedure allowed us to provide solid conclusions based on the results obtained in the experimentation.
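For illustration, this validation pipeline (Friedman test, average rankings, pairwise Wilcoxon tests against a control algorithm, and Holm's correction) can be sketched with SciPy as follows. The data shape, the random fitness matrix, and the algorithm list are our own assumptions, not the authors' actual result files.

```python
import numpy as np
from scipy import stats

# fitness[i, j]: adjusted R^2 of algorithm j on curve i (illustrative, random data)
rng = np.random.default_rng(0)
fitness = rng.uniform(0.5, 1.0, size=(100, 4))
names = ["IPOP-CMA-ES", "Self-Adaptive DE", "DE", "NLLS"]

# 1) Friedman test: are there any differences among the algorithms?
stat, p_friedman = stats.friedmanchisquare(*fitness.T)

# 2) Average Friedman rankings (rank 1 = best, i.e., highest fitness, per curve)
ranks = np.apply_along_axis(lambda row: stats.rankdata(-row), 1, fitness)
avg_rank = ranks.mean(axis=0)

# 3) Pairwise Wilcoxon signed-rank tests against a control algorithm, then Holm correction
control = 0
raw_p = np.array([stats.wilcoxon(fitness[:, control], fitness[:, j]).pvalue
                  for j in range(len(names)) if j != control])
order = np.argsort(raw_p)          # ascending raw p values
holm_p = np.empty_like(raw_p)
m, running_max = raw_p.size, 0.0
for k, idx in enumerate(order):
    running_max = max(running_max, (m - k) * raw_p[idx])
    holm_p[idx] = min(1.0, running_max)

print(p_friedman, avg_rank, holm_p)
```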

3. Results and Discussion

We have conducted a thorough experimentation to elucidate whether the selected metaheuristics can be successfully used in the curve-fitting problem. For this purpose, we have taken into account both the glutamate concentration and the AMPA open receptors curves problems and, in the case of the latter, the five different curves presented in Section 2.1.2.

In the first place, a parameter tuning of the seven selected metaheuristics has been carried out, whose results are presented in Section 3.1; then, final runs with the best configuration of each algorithm are performed, whose results are presented in Sections 3.2-3.3, and their significance according to the aforementioned statistical validation procedure is discussed. Finally, we provide a biological interpretation of the results and discuss the convenience of using one particular curve model instead of the others.


3.1. Parameter Tuning of the Selected Metaheuristics. As described in Section 2.4, to conduct our experimentation we have followed a fractional design based on orthogonal matrices according to the Taguchi method. This method allows the execution of a limited number of configurations (a maximum of 27 in our case), yet yielding significant results. Furthermore, to avoid any overfitting to the experimental data set, the parameter tuning of the algorithms was done on a subset of curves (10 randomly chosen curves) prior to using the selected values in the overall comparison. Table 1 contains, for each algorithm, all its relevant parameters, highlighting in bold the final selected values among all the tested ones, according to the measures reported by the Taguchi method.

Once the best configuration for each algorithm has been determined, we ran each of them on both synaptic curves problems, for all the models and curves, conducting 25 repetitions per curve. In the case of the selected metaheuristics, we constrained the search space to the interval [-500, 500] to allow faster convergence. On the other hand, to avoid any bias in the results, we ran the baseline algorithm both constrained to the same interval and unconstrained.

3.2. Glutamate Concentration Curves Problem. As we stated before, in the 500 glutamate concentration curves the only parameter that needs to be optimized is the υ coefficient of the proposed exponential decay function. Thus, it is a simple optimization problem that should be solved with few issues by most algorithms, as shown in Table 2. In this table we can observe that most algorithms are able to fit the curve with a precision of 99%. The only three algorithms not reaching that precision get trapped a few times in low-quality local optima, which prevents them from reaching the same accuracy. This is especially the case of the NLLS algorithm, in both the constrained and unconstrained versions.

Due to the nonexistent differences in the performance of most of the algorithms, the statistical validation did not reveal any significant differences, except for the two versions of the baseline algorithm. This first result is a good starting point and suggests that more advanced metaheuristics can deal with these problems more effectively than the NLLS algorithm used in previous studies [11].

3.3. AMPA Open Receptors Curves Problem. To provide more insight into the results obtained for the AMPA open receptors curves problem, we have decided to report the results and conduct the statistical validation on each of the curve approximation models individually, and then to carry out an overall comparison considering all the models together as a large validation data set. The results show that there are three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE) which systematically rank among the best algorithms and in most of the cases are significantly better than the baseline algorithm, NLLS.

3.3.1. Results with the 4-by-4 Degree Polynomial Rational Function. The results for the 4-by-4 degree polynomial rational function are reported in Table 5. As can be observed, the average attained precision is very high for most of the

Table 1: Parameter tuning of the selected metaheuristics.

Parameter values of GA
    popSize: 100, 200, 400
    pcx: 0.1, 0.5, 0.9
    pm: 0.01, 0.05, 0.1

Parameter values of DE
    popSize: 25, 50, 100
    F: 0.1, 0.5, 0.9
    CR: 0.1, 0.5, 0.9

Parameter values of Self-Adaptive DE
    popSize: 25, 50, 100
    τ_F: 0.05, 0.1, 0.2
    τ_CR: 0.05, 0.1, 0.2

Parameter values of GODE
    popSize: 25, 50, 100
    F: 0.1, 0.5, 0.9
    CR: 0.1, 0.5, 0.9
    godeProb: 0.2, 0.4, 0.8

Parameter values of Solis and Wets
    maxSuccess: 2, 5, 10
    maxFailed: 1, 3, 5
    adjustSuccess: 2, 4, 6
    adjustFailed: 0.25, 0.5, 0.75
    delta: 0.6, 1.2, 2.4

Parameter values of MTS-LS1
    (initial) SR: 50% of the search space
    (min) SR: 1e-14
    adjustFailed: 2, 3, 5
    adjustMin: 2.5, 5, 10
    moveLeft: 0.25, 0.5, 1
    moveRight: 0.25, 0.5, 1

Parameter values of CMA-ES
    σ: 0.01, 0.1, 1
    Active CMA: no, yes
    CMA-ES variant: CMA-ES, IPOP-CMA-ES

Table 2: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the glutamate concentration problem.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.90E-01       4.16       3
DE                   9.90E-01       4.16       3
GA                   9.90E-01       4.16       3
Solis and Wets       9.90E-01       4.16       3
GODE                 9.90E-01       4.16       3
IPOP-CMA-ES          9.90E-01       4.16       3
MTS-LS1              9.83E-01       4.23      -4
NLLS constrained     9.53E-01       7.91      -7
NLLS unconstrained   9.53E-01       7.91      -7

algorithms (with the exception of the Solis and Wets algorithm, which alternates good and poor runs, thus obtaining a low


average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE, and DE), which reach a precision as high as 99.9%. The p values reported by the statistical validation (Table 6 for unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE, according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as the results in the following sections will confirm.

3.3.2. Results with the 8-Term Fourier Series. For the 8-term Fourier series, results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins, and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function the attained precision has slightly decreased (from 72% for the DE algorithm to 94% for the Self-Adaptive DE), but again several metaheuristics (all the population-based algorithms, which also includes the GA) have outperformed the reference NLLS algorithm (in both constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions, compared to the 9 of the previous model). Also in this problem, the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend started in the previous two cases, a trend that resumes in the remaining two functions. Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which again reaches a very high precision of 99%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much bigger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm on this problem (with a precision of more than 99% and the highest ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with the other problems. In this case, the improvement is probably due to the small problem size, which allows an easier optimization component by component. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as we can classify the studied algorithms into two groups: those that converge to (what seems to be) a strong attractor and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance (IPOP-CMA-ES, Self-Adaptive DE, and DE) plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm, and the two local searches. The algorithms in the first group normally converge, with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES, to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision with this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process; Section 3.4 discusses this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, Friedman's test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE, and NLLS constrained.

3.3.6. Overall Comparison. In this section we offer an overall comparison of the algorithms for the five AMPA open receptors curves problems. To make this comparison, we considered the results on the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results. From these data, it is clear that the IPOP-CMA-ES algorithm obtains the best

[Figure 3: plot of glutamate concentration (mol·L-1) versus time (ms).]

Figure 3: Example of curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, adjusted R^2 = 0.990.

results, with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also relevant is the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with respect to all the algorithms under both statistical tests.

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, given that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case, since general-purpose, commonly used algorithms (such as the NLLS used in [11])

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        1.00E+00           1.00E+00
GA                        1.00E+00           1.00E+00
Solis and Wets            1.00E+00           1.00E+00
GODE                      1.00E+00           1.00E+00
IPOP-CMA-ES               1.00E+00           1.00E+00
MTS-LS1                   6.57E-01           8.98E-03
NLLS constrained          0.00E+00           1.47E-70
NLLS unconstrained        0.00E+00           1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        1.00E+00            1.00E+00
GA                        1.00E+00            1.00E+00
Solis and Wets            1.00E+00            1.00E+00
GODE                      1.00E+00            1.00E+00
IPOP-CMA-ES               1.00E+00            1.00E+00
MTS-LS1                   1.00E+00            5.39E-02
NLLS constrained          0.00E+00 ✓          1.18E-69 ✓
NLLS unconstrained        0.00E+00 ✓          1.18E-69 ✓

✓ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example, the glutamate initial concentration was C_0 = 4.749 × 10^-3 mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10^-6, resulting in the following equation:

\[ \Phi_A(t) = 4.749 \times 10^{-3} \times e^{-0.33\, t / (6.577 \times 10^{-6})}. \tag{19} \]

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.
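As an illustration of this single-coefficient fit, the sketch below maximizes the adjusted R^2 of the exponential decay form suggested by (19), Φ_A(t) = C_0·exp(-D_A·t/υ), with only υ free. C_0 and D_A are taken from the example above, but the υ value used to generate the synthetic curve (0.05) and the noise level are arbitrary illustrative choices, not values from the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

C0, D_A = 4.749e-3, 0.33   # initial concentration (mol/L) and diffusion coefficient (um^2/ms)

def glutamate(t, upsilon):
    """Exponential decay model with a single free coefficient upsilon."""
    return C0 * np.exp(-D_A * t / upsilon)

def neg_adjusted_r2(upsilon, t, y, m=1):
    """Negative adjusted R^2 of the model, to be minimized."""
    pred = glutamate(t, upsilon)
    rss = np.sum((y - pred) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    n = y.size
    return -(1.0 - (rss / tss) * (n - 1) / (n - m))

# Illustrative sampled curve (in the study this comes from the biochemical simulation)
t = np.linspace(0.0, 1.0, 100)
y = glutamate(t, 0.05) + np.random.default_rng(2).normal(0.0, 5e-5, t.size)

res = minimize_scalar(neg_adjusted_r2, bounds=(1e-3, 1.0), args=(t, y), method="bounded")
print("fitted upsilon:", res.x)
```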

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptors curves requires further study. Optimization results (Tables 5 to 22) show that, when using the appropriate metaheuristic, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial). The five


Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E-01       1.86       8
Self-Adaptive DE     9.99E-01       2.08       5
DE                   9.99E-01       2.09       5
GA                   9.35E-01       4.86       2
MTS-LS1              9.00E-01       6.04       0
GODE                 9.18E-01       6.04      -2
NLLS constrained     8.60E-01       7.04      -4
Solis and Wets       4.30E-01       7.17      -6
NLLS unconstrained   8.43E-01       7.84      -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE     2.14E-01           2.03E-12
DE                   1.90E-01           3.25E-14
GA                   0.00E+00           5.52E-84
MTS-LS1              0.00E+00           6.31E-84
GODE                 0.00E+00           5.40E-84
NLLS constrained     0.00E+00           1.99E-83
Solis and Wets       0.00E+00           1.60E-86
NLLS unconstrained   0.00E+00           5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE     3.80E-01            2.03E-12 ✓
DE                   3.80E-01            6.50E-14 ✓
GA                   0.00E+00 ✓          3.78E-83 ✓
MTS-LS1              0.00E+00 ✓          3.78E-83 ✓
GODE                 0.00E+00 ✓          3.78E-83 ✓
NLLS constrained     0.00E+00 ✓          5.96E-83 ✓
Solis and Wets       0.00E+00 ✓          1.28E-85 ✓
NLLS unconstrained   0.00E+00 ✓          3.78E-83 ✓

✓ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

mathematical models studied are compared in Table 23, selecting for each of them the results provided by the best optimization technique. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice. In addition, we consider the nature of the biological process modeled

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.44E-01       1.26       8
IPOP-CMA-ES          9.19E-01       1.91       6
GODE                 8.13E-01       3.46       4
DE                   7.21E-01       4.34       2
GA                   6.99E-01       4.79       0
NLLS unconstrained   6.20E-01       5.46      -2
MTS-LS1              4.16E-01       7.29      -4
NLLS constrained     3.87E-01       7.49      -6
Solis and Wets       0.00E+00       9.00      -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
IPOP-CMA-ES               1.67E-04           5.89E-14
GODE                      0.00E+00           1.58E-83
DE                        0.00E+00           1.09E-83
GA                        0.00E+00           8.67E-82
NLLS unconstrained        0.00E+00           8.08E-83
MTS-LS1                   0.00E+00           6.43E-84
NLLS constrained          0.00E+00           6.36E-84
Solis and Wets            0.00E+00           6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
IPOP-CMA-ES               1.67E-04 ✓          5.89E-14 ✓
GODE                      0.00E+00 ✓          6.32E-83 ✓
DE                        0.00E+00 ✓          5.44E-83 ✓
GA                        0.00E+00 ✓          1.73E-81 ✓
NLLS unconstrained        0.00E+00 ✓          2.42E-82 ✓
MTS-LS1                   0.00E+00 ✓          4.97E-83 ✓
NLLS constrained          0.00E+00 ✓          4.97E-83 ✓
Solis and Wets            0.00E+00 ✓          4.97E-83 ✓

✓ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms. Figure 4 shows an example of curve fit for every equation, fitted using the corresponding


Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E-01       1.00       8
NLLS unconstrained   9.73E-01       2.23       6
NLLS constrained     9.31E-01       2.82       4
GA                   7.80E-01       3.95       2
GODE                 5.24E-01       5.00       0
Self-Adaptive DE     3.21E-01       6.79      -3
MTS-LS1              3.23E-01       6.85      -3
DE                   2.97E-01       7.35      -6
Solis and Wets       0.00E+00       9.00      -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained   1.34E-12           1.90E-83
NLLS constrained     0.00E+00           1.33E-83
GA                   0.00E+00           6.22E-84
GODE                 0.00E+00           6.30E-84
Self-Adaptive DE     0.00E+00           6.28E-84
MTS-LS1              0.00E+00           6.32E-84
DE                   0.00E+00           6.29E-84
Solis and Wets       0.00E+00           4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained   1.34E-12 ✓         4.35E-83 ✓
NLLS constrained     0.00E+00 ✓         4.35E-83 ✓
GA                   0.00E+00 ✓         4.35E-83 ✓
GODE                 0.00E+00 ✓         4.35E-83 ✓
Self-Adaptive DE     0.00E+00 ✓         4.35E-83 ✓
MTS-LS1              0.00E+00 ✓         4.35E-83 ✓
DE                   0.00E+00 ✓         4.35E-83 ✓
Solis and Wets       0.00E+00 ✓         3.80E-110 ✓

✓ means that there are statistical differences with significance level α = 0.05.

best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.97E-01       1.69       8
DE                   9.96E-01       1.86       6
IPOP-CMA-ES          9.97E-01       2.46       4
Solis and Wets       7.32E-01       4.84       2
MTS-LS1              6.27E-01       5.43       0
NLLS unconstrained   6.20E-01       5.73      -2
NLLS constrained     3.47E-01       7.25      -4
GODE                 2.25E-01       7.68      -6
GA                   1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        3.38E-01           6.11E-10
IPOP-CMA-ES               9.00E-06           9.79E-54
Solis and Wets            0.00E+00           6.31E-84
MTS-LS1                   0.00E+00           6.26E-84
NLLS unconstrained        0.00E+00           6.31E-84
NLLS constrained          0.00E+00           6.32E-84
GODE                      0.00E+00           6.04E-84
GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        3.38E-01            6.11E-10 ✓
IPOP-CMA-ES               1.80E-05 ✓          1.96E-53 ✓
Solis and Wets            0.00E+00 ✓          4.23E-83 ✓
MTS-LS1                   0.00E+00 ✓          4.23E-83 ✓
NLLS unconstrained        0.00E+00 ✓          4.23E-83 ✓
NLLS constrained          0.00E+00 ✓          4.23E-83 ✓
GODE                      0.00E+00 ✓          4.23E-83 ✓
GA                        0.00E+00 ✓          4.21E-83 ✓

✓ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(2) The curve fit calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems to be small enough that the overall fitness is not affected, but it implies that the adjusted curves apparently model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested

[Figure 4: six panels plotting open AMPA receptors versus time (ms): (a) 8-term Gauss series (adjusted R^2 = 0.999), (b) 4-by-4 deg. polynomial rational (adjusted R^2 = 0.999), (c) 2-term exp. function (adjusted R^2 = 0.997), (d) 9th degree polynomial (adjusted R^2 = 0.836), (e) 8-term Fourier series (adjusted R^2 = 0.976), (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All panels show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.


Table 17 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms on the9th degree polynomial

Mean fitness Ranking nWinsNLLS unconstrained 843119864 minus 01 244 8Self-Adaptive DE 842119864 minus 01 250 5DE 842119864 minus 01 250 5NLLS constrained 802119864 minus 01 272 2IPOP-CMA-ES 826119864 minus 01 495 0GA 000119864 + 00 748 minus5Solis and Wets 000119864 + 00 748 minus5MTS-LS1 000119864 + 00 748 minus5GODE 000119864 + 00 748 minus5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus    Friedman p value    Wilcoxon p value
Self-Adaptive DE             7.42E-01            6.52E-05
DE                           7.42E-01            6.52E-05
NLLS constrained             1.08E-01            8.91E-06
IPOP-CMA-ES                  0.00E+00            5.34E-84
GA                           0.00E+00            5.92E-84
Solis and Wets               0.00E+00            5.92E-84
MTS-LS1                      0.00E+00            5.92E-84
GODE                         0.00E+00            5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus    Friedman* p value    Wilcoxon* p value
Self-Adaptive DE             1.00E+00             1.30E-04 √
DE                           1.00E+00             1.30E-04 √
NLLS constrained             3.25E-01             2.67E-05 √
IPOP-CMA-ES                  0.00E+00 √           4.27E-83 √
GA                           0.00E+00 √           4.27E-83 √
Solis and Wets               0.00E+00 √           4.27E-83 √
MTS-LS1                      0.00E+00 √           4.27E-83 √
GODE                         0.00E+00 √           4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

The numerical results of the 8-term Fourier series are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulation ones (Figure 4(f)). This is due to the fact that, when evaluating the fitness value, not all data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10000-point curves per simulation, and testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve-fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11] for the same reasons.
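As a rough illustration of that sampling step, the sketch below subsamples a hypothetical 10000-point curve down to 100 uniformly spaced points before fitting; the double-exponential stand-in signal and its constants are invented for the example and are not simulation data.

```python
import numpy as np

# Hypothetical stand-in for one high-resolution simulated curve:
# 10000 samples of an AMPA-like double exponential over 5 ms.
t_full = np.linspace(0.0, 5.0, 10000)
y_full = 40.0 * (np.exp(-2.7 * t_full) - np.exp(-30.0 * t_full))

# Uniform subsampling to 100 points before curve fitting, so that every
# fitness evaluation compares the candidate curve against 100 points
# instead of 10000.
idx = np.linspace(0, t_full.size - 1, num=100).astype(int)
t_sampled, y_sampled = t_full[idx], y_full[idx]
print(t_sampled.shape, y_sampled.shape)  # (100,) (100,)
```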

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

                      Mean fitness    Ranking    nWins
IPOP-CMA-ES           9.48E-01        2.44       8
Self-Adaptive DE      8.21E-01        2.86       6
DE                    7.71E-01        3.62       4
NLLS unconstrained    7.80E-01        4.74       2
NLLS constrained      6.65E-01        5.46       0
GA                    5.16E-01        5.83       -2
GODE                  4.96E-01        5.93       -4
MTS-LS1               4.53E-01        6.61       -6
Solis and Wets        2.32E-01        7.50       -8
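Table 20 condenses three summary statistics per algorithm: mean fitness, average ranking, and pairwise wins (nWins). The sketch below shows one possible way of computing such a summary from a matrix of fitness values; it assumes that nWins counts significant pairwise Wilcoxon wins minus losses and uses toy data, so it is an illustration rather than the exact procedure used in the paper.

```python
import numpy as np
from itertools import combinations
from scipy.stats import rankdata, wilcoxon

# fitness[i, j]: fitness of algorithm i on run j (rows = algorithms, columns = runs).
# Toy data; in the study each row would hold 2500 values (500 curves x 25 repetitions).
rng = np.random.default_rng(0)
fitness = rng.random((3, 50))
names = ["IPOP-CMA-ES", "Self-Adaptive DE", "DE"]

mean_fitness = fitness.mean(axis=1)
# Average ranking: rank the algorithms on every run (1 = best) and average per algorithm.
ranks = np.apply_along_axis(lambda col: rankdata(-col), 0, fitness)
avg_rank = ranks.mean(axis=1)

# Assumed nWins definition: significant pairwise wins minus losses (Wilcoxon, alpha = 0.05),
# with the "winner" of a significant comparison decided here, for simplicity, by mean fitness.
n_wins = np.zeros(len(names), dtype=int)
for i, j in combinations(range(len(names)), 2):
    p = wilcoxon(fitness[i], fitness[j]).pvalue
    if p < 0.05:
        better, worse = (i, j) if mean_fitness[i] > mean_fitness[j] else (j, i)
        n_wins[better] += 1
        n_wins[worse] -= 1

for k, name in enumerate(names):
    print(f"{name}: mean={mean_fitness[k]:.3f} rank={avg_rank[k]:.2f} nWins={n_wins[k]}")
```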

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus    Friedman p value    Wilcoxon p value
Self-Adaptive DE      4.15E-08            5.01E-05
DE                    0.00E+00            2.16E-86
NLLS unconstrained    0.00E+00            4.81E-306
NLLS constrained      0.00E+00            0.00E+00
GA                    0.00E+00            0.00E+00
GODE                  0.00E+00            0.00E+00
MTS-LS1               0.00E+00            0.00E+00
Solis and Wets        0.00E+00            0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus    Friedman* p value    Wilcoxon* p value
Self-Adaptive DE      4.15E-08 √           5.01E-05 √
DE                    0.00E+00 √           4.31E-86 √
NLLS unconstrained    0.00E+00 √           1.44E-305 √
NLLS constrained      0.00E+00 √           0.00E+00 √
GA                    0.00E+00 √           0.00E+00 √
GODE                  0.00E+00 √           0.00E+00 √
MTS-LS1               0.00E+00 √           0.00E+00 √
Solis and Wets        0.00E+00 √           0.00E+00 √

√ means that there are statistical differences with significance level α = 0.05.
* means these are adjusted p values according to Holm's method.

In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.
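A pragmatic way to expose this kind of false positive, sketched below under the assumption that the full 10000-point curves are still available, is to score a fitted model both on the 100 sampled points used by the fitness function and on the complete curve: an overfitted oscillating solution keeps a high score on the former but degrades sharply on the latter. This check is not part of the original methodology; it only illustrates the diagnosis discussed above.

```python
import numpy as np

def r_squared(y_ref, y_fit):
    """Coefficient of determination between reference data and a fitted curve."""
    ss_res = np.sum((y_ref - y_fit) ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def check_overfitting(model, t_full, y_full, n_sampled=100):
    """model is any callable t -> fitted values (e.g., a fitted 8-term Fourier series)."""
    idx = np.linspace(0, t_full.size - 1, n_sampled).astype(int)
    r2_sampled = r_squared(y_full[idx], model(t_full[idx]))  # points seen by the fitness
    r2_full = r_squared(y_full, model(t_full))               # the whole simulated curve
    return r2_sampled, r2_full  # a large gap flags an overfitted, oscillating solution

# Usage: r2_sampled, r2_full = check_overfitting(fitted_model, t_full, y_full)
```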

Taking all this into consideration, we can now try to select the most appropriate equation for the AMPA open receptors curve.


Table 23: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

                                              Mean fitness    Ranking    nWins
8-term Gauss series                           9.99E-01        1.63       4
4-by-4 degree polynomial rational function    9.99E-01        1.67       2
2-term exponential function                   9.97E-01        2.73       0
8-term Fourier series                         9.44E-01        3.99       -2
9th degree polynomial                         8.43E-01        4.97       -4

Among the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss series and the 2-term exponential function over the 4-by-4 polynomial rational function. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

AMPA_open(t) = 51.749 × e^(-2.716 t) - 53.540 × e^(-30.885 t).    (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
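As a quick numerical check of (20) (a sketch using the coefficients reported above, whose decimal reconstruction is assumed from the published curve), the fitted function can be evaluated and its peak located:

```python
import numpy as np

# Two-term exponential fit reported in (20) for one synaptic configuration.
a, b, c, d = 51.749, -2.716, -53.540, -30.885

def ampa_open(t):
    """Fitted number of open AMPA receptors at time t (ms)."""
    return a * np.exp(b * t) + c * np.exp(d * t)

t = np.linspace(0.0, 5.0, 1000)
y = ampa_open(t)
print(f"peak of {y.max():.1f} open receptors at t = {t[y.argmax()]:.3f} ms")
# Expected: a peak of roughly 37 receptors shortly after release, consistent with Figure 4(c).
```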

4. Conclusions

In this paper we have compared several metaheuristics in the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit to each of them. We have treated all the curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve on currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted extensive experimentation that yields significant results. First, it has been shown that several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches, which obtain excellent results in other problems, are not suitable for this task; this implies that the curve-fitting algorithm must be chosen carefully, and thus this comparison can be of great value for other researchers. Finally, the differing performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the strengths of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit, very accurately, a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications for future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied; future studies of synaptic behavior can be modeled on these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size, and our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.
[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.
[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.
[4] J. Boucher, H. Kroger, and A. Sik, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.
[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.
[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.
[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.
[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.
[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.
[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.
[11] J. Montes, E. Gomez, A. Merchan-Perez, J. DeFelipe, and J.-M. Pena, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.
[12] A. LaTorre, S. Muelas, and J. M. Pena, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.
[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.
[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.
[16] A. LaTorre, S. Muelas, J.-M. Pena, R. Santana, A. Merchan-Perez, and J.-R. Rodriguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.
[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Pena, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.
[18] S. Muelas, J. M. Pena, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.
[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing – Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.
[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.
[21] A. LaTorre, S. Muelas, and J. M. Pena, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.
[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.
[23] S. Muelas, A. LaTorre, and J.-M. Pena, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.
[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.
[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.
[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.
[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.
[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.
[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.
[30] Centro de Supercomputacion y Visualizacion de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es/.
[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.
[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.
[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.
[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.
[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre–Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.
[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.
[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.
[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.
[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.
[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.
[42] R. Storn and K. V. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.
[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.
[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.
[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.
[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.
[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.
[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.
[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.
[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.
[54] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] A. LaTorre, S. Muelas, and J.-M. Pena, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.
[57] S. Muelas, J. M. Pena, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.



Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Abstract and Applied Analysis 15

Acknowledgments

This work was financed by the Spanish Ministry of Economy(NAVAN project) and supported by the Cajal Blue BrainProject The authors thankfully acknowledge the computerresources technical expertise and assistance provided bythe Supercomputing and Visualization Center of Madrid(CeSViMa) A LaTorre gratefully acknowledges the supportof the SpanishMinistry of Science and Innovation (MICINN)for its funding throughout the Juan de la Cierva Program

References

[1] K Fuxe A DahlstromM Hoistad et al ldquoFrom the Golgi-Cajalmapping to the transmitter-based characterization of the neu-ronal networks leading to two modes of brain communicationwiring and volume transmissionrdquo Brain Research Reviews vol55 no 1 pp 17ndash54 2007

[2] E Sykova and C Nicholson ldquoDiffusion in brain extracellularspacerdquo Physiological Reviews vol 88 no 4 pp 1277ndash1340 2008

[3] D A Rusakov L P Savtchenko K Zheng and J M HenleyldquoShaping the synaptic signal molecular mobility inside andoutside the cleftrdquoTrends in Neurosciences vol 34 no 7 pp 359ndash369 2011

[4] J Boucher H Kroger and A Sık ldquoRealistic modelling ofreceptor activation in hippocampal excitatory synapses anal-ysis of multivesicular release release location temperature andsynaptic cross-talkrdquo Brain Structure amp Function vol 215 no 1pp 49ndash65 2010

[5] M Renner Y Domanov F Sandrin I Izeddin P Bassereau andA Triller ldquoLateral diffusion on tubular membranes quantifica-tion of measurements biasrdquo PLoS ONE vol 6 no 9 Article IDe25731 2011

[6] J R Stiles and T M Bartol ldquoMonte carlo methods for simu-lating realistic synaptic microphysiology using MCellrdquo in Com-putational Neuroscience RealisticModeling for Experimentalistspp 87ndash127 CRC Press 2001

[7] R A Kerr T M Bartol B Kaminsky et al ldquoFast MonteCarlo simulation methods for biological reaction-diffusionsystems in solution and on surfacesrdquo SIAM Journal on ScientificComputing vol 30 no 6 pp 3126ndash3149 2008

[8] S J Plimpton and A Slepoy ldquoMicrobial cell modeling via react-ing diffusive particlesrdquo Journal of Physics Conference Series vol16 no 1 pp 305ndash309 2005

[9] S S Andrews and D Bray ldquoStochastic simulation of chemicalreactions with spatial resolution and single molecule detailrdquoPhysical Biology vol 1 no 3-4 pp 137ndash151 2004

[10] S S Andrews N J Addy R Brent and A P Arkin ldquoDetailedsimulations of cell biology with smoldyn 21rdquo PLoS Computa-tional Biology vol 6 no 3 2010

[11] J Montes E Gomez A Merchan-Perez J DeFelipe and J-M Pena ldquoA machine learning method for the prediction ofreceptor activation in the simulation of synapsesrdquo PLoS ONEvol 8 no 7 Article ID e68888 pp 1ndash14 2013

[12] A LaTorre S Muelas and J M Pena ldquoA comprehensive com-parison of large scale global optimizersrdquo Information Sciences2014

[13] C Sequeira F Sanchez-Quesada M Sancho I Hidalgo and TOrtiz ldquoA genetic algorithm approach for localization of deepsources in MEGrdquo Physica Scripta vol T118 pp 140ndash142 2005

[14] E Cuevas D Zaldivar and M Perez-Cisneros ldquoA novelmulti-threshold segmentation approach based on differentialevolution optimizationrdquo Expert Systems with Applications vol37 no 7 pp 5265ndash5271 2010

[15] A Ochoa M Tejera andM Soto ldquoA fitness function model fordetecting ellipses with estimation of distribution algorithmsrdquo inProceedings of the 6th IEEE Congress on Evolutionary Computa-tion (CEC rsquo10) pp 1ndash8 July 2010

[16] A LaTorre S Muelas J-M Pena R Santana A Merchan-Perez and J-R Rodrıguez ldquoA differential evolution algorithmfor the detection of synaptic vesiclesrdquo in Proceedings of the IEEECongress of Evolutionary Computation (CEC rsquo11) pp 1687ndash1694IEEE New Orleans La USA June 2011

[17] R Santana S Muelas A LaTorre and J M Pena ldquoA directoptimization approach to the P300 spellerrdquo in Proceedings of the13th Annual Genetic and Evolutionary Computation Conference(GECCO rsquo11) pp 1747ndash1754 Dublin Ireland July 2011

[18] S Muelas J M Pena V Robles K Muzhetskaya A LaTorreand P de Miguel ldquoOptimizing the design of composite panelsusing an improved genetic algorithmrdquo in Proceedings of theInternational Conference on Engineering Optimization (EngOptrsquo08) J Herskovits A Canelas H Cortes and M ArozteguiEds COPPEUFRJ June 2008

[19] WWang Z Xiang and X Xu ldquoSelf-adaptive differential evolu-tion and its application to job-shop schedulingrdquo in Proceedingsof the 7th International Conference on System Simulation andScientific ComputingmdashAsia Simulation Conference (ICSC rsquo08)pp 820ndash826 IEEE Beijing China October 2008

[20] S M Elsayed R A Sarker and D L Essam ldquoGA with a newmulti-parent crossover for solving IEEE-CEC2011 competitionproblemsrdquo in Proceedings of the IEEE Congress of EvolutionaryComputation (CEC rsquo11) pp 1034ndash1040 June 2011

[21] A LaTorre S Muelas and J M Pena ldquoBenchmarking a hybridDERHC algorithm on real world problemsrdquo in Proceedings ofthe IEEE Congress on Evolutionary Computation (CEC rsquo11) pp1027ndash1033 IEEE New Orleans La USA June 2011

[22] S N Parragh K F Doerner and R F Hartl ldquoVariableneighborhood search for the dial-a-ride problemrdquo Computersamp Operations Research vol 37 no 6 pp 1129ndash1138 2010

[23] S Muelas A Latorre and J-M Pena ldquoA variable neighborhoodsearch algorithm for the optimization of a dial-a-ride problemin a large cityrdquo Expert Systems with Applications vol 40 no 14pp 5516ndash5531 2013

[24] J C Dias P Machado D C Silva and P H Abreu ldquoAn invertedant colony optimization approach to trafficrdquo Engineering Appli-cations of Artificial Intelligence vol 36 pp 122ndash133 2014

[25] S Zhang C Lee H Chan K Choy and Z Wu ldquoSwarmintelligence applied in green logistics a literature reviewrdquoEngineeringApplications of Artificial Intelligence vol 37 pp 154ndash169 2015

[26] A Galvez A Iglesias and J Puig-Pey ldquoIterative two-stepgenetic-algorithm-based method for efficient polynomial B-spline surface reconstructionrdquo Information Sciences vol 182 pp56ndash76 2012

[27] J Yang F Liu X Tao and X Wang ldquoThe application of evolu-tionary algorithm in B-spline curve fittingrdquo in Proceedings of the9th International Conference on Fuzzy Systems and KnowledgeDiscovery (FSKD rsquo12) pp 2809ndash2812 Sichuan ChinaMay 2012

[28] A Galvez and A Iglesias ldquoA new iterative mutually coupledhybrid GA-PSO approach for curve fitting in manufacturingrdquoApplied Soft Computing Journal vol 13 no 3 pp 1491ndash15042013

16 Abstract and Applied Analysis

[29] L Zhao J Jiang C Song L Bao and J Gao ldquoParameter opti-mization for Bezier curve fitting based on genetic algorithmrdquoin Advances in Swarm Intelligence vol 7928 of Lecture Notes inComputer Science pp 451ndash458 Springer Berlin Germany 2013

[30] Centro de Supercomputacion y Visualizacion de Madrid(CeSViMa) 2014 httpwwwcesvimaupmes

[31] P Jonas G Major and B Sakmann ldquoQuantal components ofunitary EPSCs at the mossy fibre synapse on CA3 pyramidalcells of rat hippocampusrdquo The Journal of Physiology vol 472no 1 pp 615ndash663 1993

[32] KM Franks TM Bartol Jr andT J Sejnowski ldquoAmonte carlomodel reveals independent signaling at central glutamatergicsynapsesrdquo Biophysical Journal vol 83 no 5 pp 2333ndash23482002

[33] D A Rusakov and D M Kullmann ldquoExtrasynaptic gluta-mate diffusion in the hippocampus ultrastructural constraintsuptake and receptor activationrdquo The Journal of Neurosciencevol 18 no 9 pp 3158ndash3170 1998

[34] K Zheng A Scimemi and D A Rusakov ldquoReceptor actions ofsynaptically released glutamate the role of transporters on thescale from nanometers to micronsrdquo Biophysical Journal vol 95no 10 pp 4584ndash4596 2008

[35] A Momiyama R A Silver M Hausser et al ldquoThe densityof AMPA receptors activated by a transmitter quantum at theclimbing fibremdashPurkinje cell synapse in immature ratsrdquo TheJournal of Physiology vol 549 no 1 pp 75ndash92 2003

[36] D M Bates and D G Watts Nonlinear Regression Analysisand Its Applications Wiley Series in Probability and Math-ematical Statistics Applied Probability and Statistics Wiley-Interscience New York NY USA 1988

[37] D M Bates and J M Chambers ldquoNonlinear modelsrdquo inStatistical Models S J M Chambers and T J Hastie EdsWadsworth amp Brookscole 1992

[38] MathWorks MATLAB and Statistics Toolbox Release 2013aMathWorks 2014 httpwwwmathworkscomsupportsysreqsv-r2013a

[39] R Core Team R A Language and Environment for StatisticalComputing R Foundation for Statistical Computing 2011

[40] J H Holland Adaptation in Natural and Artificial Systems TheUniversity of Michigan Press Ann Arbor Mich USA 1975

[41] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash63 2000

[42] R Storn and K V Price ldquoDifferential evolutionmdasha simpleand efficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997

[43] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006

[44] HWang ZWu S Rahnamayan and L Kang ldquoA scalability testfor accelerated de using generalized opposition-based learningrdquoin Proceedings of the 9th International Conference on IntelligentSystemsDesign andApplications (ISDA rsquo09) pp 1090ndash1095 PisaItaly December 2009

[45] F J Solis and R J B Wets ldquoMinimization by random searchtechniquesrdquo Mathematics of Operations Research vol 6 no 1pp 19ndash30 1981

[46] L-Y Tseng and C Chen ldquoMultiple trajectory search for largescale global optimizationrdquo in Proceedings of the IEEE Congresson Evolutionary Computation (CEC rsquo08) pp 3052ndash3059 IEEEHong Kong June 2008

[47] N Hansen and A Ostermeier ldquoAdapting arbitrary normalmutation distributions in evolution strategies the covariancematrix adaptationrdquo in Proceedings of the IEEE InternationalConference on Evolutionary Computation (ICEC rsquo96) pp 312ndash317 May 1996

[48] N Hansen and A Ostermeier ldquoCompletely derandomized self-adaptation in evolution strategiesrdquo Evolutionary Computationvol 9 no 2 pp 159ndash195 2001

[49] A Auger and N Hansen ldquoA restart CMA evolution strategywith increasing population sizerdquo in Proceedings of the IEEECongress on Evolutionary Computation (CEC rsquo05) pp 1769ndash1776 IEEE Press New York NY USA September 2005

[50] H Theil Economic Forecasts and Policy Contributions toEconomicAnalysis Series North-Holland NewYork NYUSA1961 httpbooksgoogleesbooksid=FPVdAAAAIAAJ

[51] G Taguchi S Chowdhury and Y Wu Taguchirsquos QualityEngineering Handbook John Wiley amp Sons 2005

[52] J H Zar Biostatistical Analysis Pearson New InternationalEdition Prentice-Hall 2013

[53] WWDanielAppliedNonparametric Statistics DuxburyThom-son Learning 1990

[54] S Garcıa D Molina M Lozano and F Herrera ldquoA study onthe use of non-parametric tests for analyzing the evolutionaryalgorithmsrsquo behaviour a case study on the CECrsquo2005 SpecialSession on Real Parameter Optimizationrdquo Journal of Heuristicsvol 15 no 6 pp 617ndash644 2009

[55] F Wilcoxon ldquoIndividual comparisons by ranking methodsrdquoBiometrics Bulletin vol 1 no 6 pp 80ndash83 1945

[56] A LaTorre S Muelas and J-M Pena ldquoEvaluating the multipleoffspring sampling framework on complex continuous opti-mization functionsrdquoMemetic Computing vol 5 no 4 pp 295ndash309 2013

[57] S Muelas J M Pena V Robles A LaTorre and P de MiguelldquoMachine learningmethods to analyze migration parameters inparallel genetic algorithmsrdquo in Proceedings of the InternationalWorkshop on Hybrid Artificial Intelligence Systems 2007 ECorchado J M Corchado and A Abraham Eds pp 199ndash206Springer Salamanca Spain 2007



average fitness), especially in the case of the top three algorithms (IPOP-CMA-ES, Self-Adaptive DE and DE), which reach a precision as high as 99.9%. The p values reported by the statistical validation (Table 6 for unadjusted p values and Table 7 for the adjusted ones) reveal significant differences between the best performing algorithm (IPOP-CMA-ES) and all the other algorithms except Self-Adaptive DE and DE according to the Friedman test (the Wilcoxon test does reveal significant differences). The prevalence of these three algorithms will be a trend for most of the problems, as will be confirmed by the results in the following sections.
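The statistical validation used throughout this section (a Friedman test over all algorithms, pairwise Wilcoxon signed-rank tests against the control algorithm, and Holm's correction of the resulting p values) can be sketched as follows. This is an illustrative reconstruction in Python with SciPy and statsmodels, not the authors' code; the `fitness_by_algo` data layout and the synthetic example data are assumptions.

```python
# Sketch of the validation: Friedman test over all algorithms, pairwise
# Wilcoxon signed-rank tests against a control algorithm, Holm correction.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def validate(fitness_by_algo, control="IPOP-CMA-ES", alpha=0.05):
    """fitness_by_algo: dict mapping algorithm name -> 1-D array of fitness
    values, one entry per synaptic configuration (same order for all)."""
    names = list(fitness_by_algo)
    # Friedman test over all algorithms (samples paired by configuration).
    friedman_p = friedmanchisquare(*fitness_by_algo.values()).pvalue
    # Pairwise Wilcoxon signed-rank tests: control vs. every other algorithm.
    others = [n for n in names if n != control]
    raw_p = [wilcoxon(fitness_by_algo[control], fitness_by_algo[n]).pvalue
             for n in others]
    # Holm's step-down correction of the pairwise p values.
    reject, adj_p, _, _ = multipletests(raw_p, alpha=alpha, method="holm")
    return friedman_p, dict(zip(others, zip(raw_p, adj_p, reject)))

# Example with synthetic data (500 configurations, as in the paper's setup):
rng = np.random.default_rng(0)
demo = {name: rng.uniform(0.4, 1.0, 500)
        for name in ["IPOP-CMA-ES", "Self-Adaptive DE", "DE", "NLLS"]}
print(validate(demo))
```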

3.3.2. Results with the 8-Term Fourier Series. For the 8-term Fourier series, results are similar in terms of the dominant algorithms (see Table 8). In this case, the Self-Adaptive DE obtained the highest ranking, number of wins and average precision, followed by IPOP-CMA-ES and the two remaining DE alternatives. With this function the attained precision has slightly decreased (from 72% for the DE algorithm to 94% for the Self-Adaptive DE), but again several metaheuristics (all the population-based algorithms, which also include the GA) have outperformed the reference NLLS algorithm (in both constrained and unconstrained versions). This could be due to the higher complexity of the problem (18 dimensions, compared to the 9 of the previous model). Also on this problem, the performance of the two local searches (Solis and Wets and MTS-LS1) degrades considerably, and this will also be a trend for the remaining problems. The statistical validation, whose results are presented in Tables 9 and 10, found significant differences between the Self-Adaptive DE algorithm and all the other algorithms for both tests (Friedman and Wilcoxon).

3.3.3. Results with the 8-Term Gauss Series. The results for this model break the trend observed in the previous two cases (and resumed in the remaining two functions). Table 11 depicts the results for the 8-term Gauss series. As can be seen, the best algorithm is again IPOP-CMA-ES, which again reaches a very high precision of 99%. However, Self-Adaptive DE and DE perform poorly this time, in spite of the problem not being much larger (25 parameters against 18). This means that it is probably the nature of the components of the Gauss series that makes the problem harder to solve for DE-based algorithms (it is a summation of squared exponential terms with two correlated coefficients, whose interdependencies might be better captured by the adaptation of the covariance matrix in IPOP-CMA-ES). Surprisingly, the NLLS approaches obtain very good results with this model (93% and 97% precision for the constrained and unconstrained versions, resp.). It is also interesting to note that the Solis and Wets algorithm was unable to converge to any good solution in any run. Finally, the statistical validation, whose detailed information is given in Tables 12 and 13, reports significant differences between the IPOP-CMA-ES algorithm and all the others with both statistical tests.

3.3.4. Results with the 2-Term Exponential Function. In the case of the 2-term exponential function, both the Self-Adaptive DE and the DE algorithms obtain very good results, the former being the best algorithm on this problem (with a precision of more than 99% and the highest ranking and number of wins). As in previous cases (with the exception of the 8-term Gauss series), the NLLS algorithm performs poorly in comparison with most other techniques. Another interesting aspect of this model is that both local searches (MTS-LS1 and Solis and Wets) greatly improved their performance compared with the other problems. In this case the improvement is probably due to the small problem size, which allows an easier optimization component by component. Finally, the statistical validation (Tables 15 and 16) reports significant differences between the reference algorithm (Self-Adaptive DE) and all the other algorithms, except for the DE algorithm in the Friedman test.

3.3.5. Results with the 9th Degree Polynomial. The last function used in the experimentation, the 9th degree polynomial, is an interesting case, as the studied algorithms can be classified into two groups: those that converge to (what seems to be) a strong attractor and those that are unable to find any good solution. In the first group we have the three aforementioned algorithms that usually exhibit the best performance (IPOP-CMA-ES, Self-Adaptive DE and DE) plus both NLLS configurations. Consequently, the second group is made up of the GA, the GODE algorithm and the two local searches. The algorithms in the first group normally converge (with some unsuccessful runs in the case of NLLS constrained and IPOP-CMA-ES) to the same attractor, and thus their average precision and ranking are very similar (Table 17). Small variations in the fitness introduce some differences in the number of wins, although they are not very important. It should be remarked that, in contrast to what happens with the other four models, none of the selected algorithms is able to reach a high precision on this problem (a highest value of 84%, compared to values in the range of 94% to 99%). This probably means that a 9th degree polynomial is not the best curve to fit the experimental data, which makes sense if we consider the exponential decay processes taking part in the biological process; Section 3.4 will discuss this issue in detail. Although the differences in fitness among the best algorithms are very small, the statistical validation is able to find significant differences between the unconstrained configuration of the NLLS algorithm and all the other algorithms with the Wilcoxon test (see Tables 18 and 19 for details). However, Friedman's test is unable to find significant differences between NLLS unconstrained and Self-Adaptive DE, DE and NLLS constrained.

3.3.6. Overall Comparison. In this section we offer an overall comparison of the algorithms on the five AMPA open receptors curve problems. To make this comparison, we considered the results of the five problems altogether, thus comparing the 2500 fitness values at the same time. Table 20 summarizes the results. From these data it is clear that the IPOP-CMA-ES algorithm obtains the best

Figure 3: Example of curve-fit solution for the glutamate concentration problem (glutamate concentration in mol·L⁻¹ versus time in ms). The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, R² = 0.990.

results, with an average precision of 94%, the best overall ranking and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. It is also relevant to note the poor performance of the two considered local searches, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with respect to all the other algorithms under both statistical tests.
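The nWins column of the ranking tables is not defined explicitly in this excerpt. A common convention, consistent with the reported values (each table's nWins column sums to zero), is to count, for each algorithm, the pairwise comparisons it wins with statistical significance minus those it loses. The sketch below assumes that convention, and using mean fitness to decide the direction of a significant difference is also an assumption.

```python
# Hypothetical reconstruction of the nWins column: significant pairwise wins
# minus significant pairwise losses (non-significant comparisons count as ties).
import numpy as np
from scipy.stats import wilcoxon

def n_wins(fitness_by_algo, alpha=0.05):
    names = list(fitness_by_algo)
    score = {n: 0 for n in names}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            x, y = fitness_by_algo[a], fitness_by_algo[b]
            if wilcoxon(x, y).pvalue < alpha:          # significant difference
                better, worse = (a, b) if np.mean(x) > np.mean(y) else (b, a)
                score[better] += 1
                score[worse] -= 1
    return score
```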

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3 and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation, therefore, can be efficiently used to interpret and predict the glutamate diffusion process, provided that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also an important aspect in this case, since general-purpose, commonly used algorithms (such as the NLLS used in [11])

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        1.00E+00           1.00E+00
GA                        1.00E+00           1.00E+00
Solis and Wets            1.00E+00           1.00E+00
GODE                      1.00E+00           1.00E+00
IPOP-CMA-ES               1.00E+00           1.00E+00
MTS-LS1                   6.57E-01           8.98E-03
NLLS constrained          0.00E+00           1.47E-70
NLLS unconstrained        0.00E+00           1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        1.00E+00            1.00E+00
GA                        1.00E+00            1.00E+00
Solis and Wets            1.00E+00            1.00E+00
GODE                      1.00E+00            1.00E+00
IPOP-CMA-ES               1.00E+00            1.00E+00
MTS-LS1                   1.00E+00            5.39E-02
NLLS constrained          0.00E+00 √          1.18E-69 √
NLLS unconstrained        0.00E+00 √          1.18E-69 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) In this example the glutamate initial concentration was C₀ = 4.749 × 10⁻³ mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10⁻⁶, resulting in the following equation:

    \Phi_A(t) = 4.749 \times 10^{-3} \, e^{-0.33\, t / (6.577 \times 10^{-6})}.   (19)

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.
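As an illustration of this curve-fitting task, the following sketch fits the single free coefficient υ of the decay model Φ_A(t) = C₀·e^(−D_A·t/υ) to one sampled concentration curve. It is not the authors' implementation: SciPy's differential evolution, the search bounds, and the synthetic reference data are assumptions made for the example.

```python
# Illustrative sketch of fitting v in Phi_A(t) = C0 * exp(-D_A * t / v)
# to one sampled simulation curve (not the authors' code).
import numpy as np
from scipy.optimize import differential_evolution

C0, D_A = 4.749e-3, 0.33     # initial concentration (mol/L), diffusion coeff. (um^2/ms)

def glutamate_model(t, v):
    return C0 * np.exp(-D_A * t / v)

def fit_v(t, concentration):
    """Return the v coefficient minimizing the squared error to the data."""
    def sse(params):
        return np.sum((glutamate_model(t, params[0]) - concentration) ** 2)
    result = differential_evolution(sse, bounds=[(1e-8, 1e-3)], seed=1)
    return result.x[0]

# Synthetic usage example: recover v from a noiseless reference curve.
t = np.linspace(0.0, 1.0, 100)                 # time in ms
reference = glutamate_model(t, 6.577e-6)       # value reported in the text
print(fit_v(t, reference))                     # approximately 6.577e-6
```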

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptors curves requires further study. Optimization results (Tables 5 to 22) show that, when the appropriate metaheuristic is used, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial). The five


Table 5: Mean fitness, average ranking and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E-01       1.86       8
Self-Adaptive DE     9.99E-01       2.08       5
DE                   9.99E-01       2.09       5
GA                   9.35E-01       4.86       2
MTS-LS1              9.00E-01       6.04       0
GODE                 9.18E-01       6.04      -2
NLLS constrained     8.60E-01       7.04      -4
Solis and Wets       4.30E-01       7.17      -6
NLLS unconstrained   8.43E-01       7.84      -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE     2.14E-01           2.03E-12
DE                   1.90E-01           3.25E-14
GA                   0.00E+00           5.52E-84
MTS-LS1              0.00E+00           6.31E-84
GODE                 0.00E+00           5.40E-84
NLLS constrained     0.00E+00           1.99E-83
Solis and Wets       0.00E+00           1.60E-86
NLLS unconstrained   0.00E+00           5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE     3.80E-01            2.03E-12 √
DE                   3.80E-01            6.50E-14 √
GA                   0.00E+00 √          3.78E-83 √
MTS-LS1              0.00E+00 √          3.78E-83 √
GODE                 0.00E+00 √          3.78E-83 √
NLLS constrained     0.00E+00 √          5.96E-83 √
Solis and Wets       0.00E+00 √          1.28E-85 √
NLLS unconstrained   0.00E+00 √          3.78E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

mathematical models studied are compared in Table 23, selecting for each of them the results provided by the best optimization technique in that case. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice. In addition, we consider the nature of the biological process modeled

Table 8: Mean fitness, average ranking and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.44E-01       1.26       8
IPOP-CMA-ES          9.19E-01       1.91       6
GODE                 8.13E-01       3.46       4
DE                   7.21E-01       4.34       2
GA                   6.99E-01       4.79       0
NLLS unconstrained   6.20E-01       5.46      -2
MTS-LS1              4.16E-01       7.29      -4
NLLS constrained     3.87E-01       7.49      -6
Solis and Wets       0.00E+00       9.00      -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
IPOP-CMA-ES               1.67E-04           5.89E-14
GODE                      0.00E+00           1.58E-83
DE                        0.00E+00           1.09E-83
GA                        0.00E+00           8.67E-82
NLLS unconstrained        0.00E+00           8.08E-83
MTS-LS1                   0.00E+00           6.43E-84
NLLS constrained          0.00E+00           6.36E-84
Solis and Wets            0.00E+00           6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
IPOP-CMA-ES               1.67E-04 √          5.89E-14 √
GODE                      0.00E+00 √          6.32E-83 √
DE                        0.00E+00 √          5.44E-83 √
GA                        0.00E+00 √          1.73E-81 √
NLLS unconstrained        0.00E+00 √          2.42E-82 √
MTS-LS1                   0.00E+00 √          4.97E-83 √
NLLS constrained          0.00E+00 √          4.97E-83 √
Solis and Wets            0.00E+00 √          4.97E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms. Figure 4 shows an example of curve fit for every equation, fitted using the corresponding


Table 11: Mean fitness, average ranking and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.99E-01       1.00       8
NLLS unconstrained   9.73E-01       2.23       6
NLLS constrained     9.31E-01       2.82       4
GA                   7.80E-01       3.95       2
GODE                 5.24E-01       5.00       0
Self-Adaptive DE     3.21E-01       6.79      -3
MTS-LS1              3.23E-01       6.85      -3
DE                   2.97E-01       7.35      -6
Solis and Wets       0.00E+00       9.00      -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
NLLS unconstrained   1.34E-12           1.90E-83
NLLS constrained     0.00E+00           1.33E-83
GA                   0.00E+00           6.22E-84
GODE                 0.00E+00           6.30E-84
Self-Adaptive DE     0.00E+00           6.28E-84
MTS-LS1              0.00E+00           6.32E-84
DE                   0.00E+00           6.29E-84
Solis and Wets       0.00E+00           4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
NLLS unconstrained   1.34E-12 √          4.35E-83 √
NLLS constrained     0.00E+00 √          4.35E-83 √
GA                   0.00E+00 √          4.35E-83 √
GODE                 0.00E+00 √          4.35E-83 √
Self-Adaptive DE     0.00E+00 √          4.35E-83 √
MTS-LS1              0.00E+00 √          4.35E-83 √
DE                   0.00E+00 √          4.35E-83 √
Solis and Wets       0.00E+00 √          3.80E-110 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.97E-01       1.69       8
DE                   9.96E-01       1.86       6
IPOP-CMA-ES          9.97E-01       2.46       4
Solis and Wets       7.32E-01       4.84       2
MTS-LS1              6.27E-01       5.43       0
NLLS unconstrained   6.20E-01       5.73      -2
NLLS constrained     3.47E-01       7.25      -4
GODE                 2.25E-01       7.68      -6
GA                   1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        3.38E-01           6.11E-10
IPOP-CMA-ES               9.00E-06           9.79E-54
Solis and Wets            0.00E+00           6.31E-84
MTS-LS1                   0.00E+00           6.26E-84
NLLS unconstrained        0.00E+00           6.31E-84
NLLS constrained          0.00E+00           6.32E-84
GODE                      0.00E+00           6.04E-84
GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        3.38E-01            6.11E-10 √
IPOP-CMA-ES               1.80E-05 √          1.96E-53 √
Solis and Wets            0.00E+00 √          4.23E-83 √
MTS-LS1                   0.00E+00 √          4.23E-83 √
NLLS unconstrained        0.00E+00 √          4.23E-83 √
NLLS constrained          0.00E+00 √          4.23E-83 √
GODE                      0.00E+00 √          4.23E-83 √
GA                        0.00E+00 √          4.21E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(2) The curve fit calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This seems to be small enough that the overall fitness is not affected, but it implies that the adjusted curves apparently model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested


Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem (open AMPA receptors versus time in ms). All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) 8-term Gauss series (R² = 0.999), with a detailed view of the curve peak. (b) 4-by-4 degree polynomial rational (R² = 0.999). (c) 2-term exponential function (R² = 0.997). (d) 9th degree polynomial (R² = 0.836). (e) 8-term Fourier series (R² = 0.976), an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) 8-term Fourier series (tested points only), showing the sampled points used in the fitness evaluation to better illustrate the overfitting problem.


Table 17: Mean fitness, average ranking and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

Algorithm            Mean fitness   Ranking   nWins
NLLS unconstrained   8.43E-01       2.44       8
Self-Adaptive DE     8.42E-01       2.50       5
DE                   8.42E-01       2.50       5
NLLS constrained     8.02E-01       2.72       2
IPOP-CMA-ES          8.26E-01       4.95       0
GA                   0.00E+00       7.48      -5
Solis and Wets       0.00E+00       7.48      -5
MTS-LS1              0.00E+00       7.48      -5
GODE                 0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE            7.42E-01           6.52E-05
DE                          7.42E-01           6.52E-05
NLLS constrained            1.08E-01           8.91E-06
IPOP-CMA-ES                 0.00E+00           5.34E-84
GA                          0.00E+00           5.92E-84
Solis and Wets              0.00E+00           5.92E-84
MTS-LS1                     0.00E+00           5.92E-84
GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE            1.00E+00            1.30E-04 √
DE                          1.00E+00            1.30E-04 √
NLLS constrained            3.25E-01            2.67E-05 √
IPOP-CMA-ES                 0.00E+00 √          4.27E-83 √
GA                          0.00E+00 √          4.27E-83 √
Solis and Wets              0.00E+00 √          4.27E-83 √
MTS-LS1                     0.00E+00 √          4.27E-83 √
GODE                        0.00E+00 √          4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

when calculating the fitness value are very close to the simulation ones (Figure 4(f)). This is due to the fact that, when evaluating the fitness value, not all data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10000-point curves per simulation. Testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11]

Table 20: Mean fitness, average ranking and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.48E-01       2.44       8
Self-Adaptive DE     8.21E-01       2.86       6
DE                   7.71E-01       3.62       4
NLLS unconstrained   7.80E-01       4.74       2
NLLS constrained     6.65E-01       5.46       0
GA                   5.16E-01       5.83      -2
GODE                 4.96E-01       5.93      -4
MTS-LS1              4.53E-01       6.61      -6
Solis and Wets       2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE     4.15E-08           5.01E-05
DE                   0.00E+00           2.16E-86
NLLS unconstrained   0.00E+00           4.81E-306
NLLS constrained     0.00E+00           0.00E+00
GA                   0.00E+00           0.00E+00
GODE                 0.00E+00           0.00E+00
MTS-LS1              0.00E+00           0.00E+00
Solis and Wets       0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE     4.15E-08 √          5.01E-05 √
DE                   0.00E+00 √          4.31E-86 √
NLLS unconstrained   0.00E+00 √          1.44E-305 √
NLLS constrained     0.00E+00 √          0.00E+00 √
GA                   0.00E+00 √          0.00E+00 √
GODE                 0.00E+00 √          0.00E+00 √
MTS-LS1              0.00E+00 √          0.00E+00 √
Solis and Wets       0.00E+00 √          0.00E+00 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

for the same reasons. In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution (the sampling step and an overfitting check of this kind are sketched below).
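The sampling step described in point (4) above, and a simple check that exposes the kind of between-sample oscillation observed for the Fourier fits, could look as follows. The helper names, the fitted-model callable, and the use of an RMSE gap as the overfitting indicator are illustrative assumptions, not the authors' procedure.

```python
# Sketch of subsampling the 10000-point simulation curves to 100 points before
# fitting, plus a dense-grid check that flags oscillation between samples.
import numpy as np

def subsample(t, y, n_points=100):
    """Uniformly subsample a simulated curve (t, y) down to n_points samples."""
    idx = np.linspace(0, len(t) - 1, n_points).astype(int)
    return t[idx], y[idx]

def overfitting_gap(model, t_full, y_full, n_points=100):
    """Compare the fit error on the sampled points with the error on the
    full-resolution curve; a large gap suggests oscillation between samples."""
    t_s, y_s = subsample(t_full, y_full, n_points)
    err_sampled = np.sqrt(np.mean((model(t_s) - y_s) ** 2))
    err_full = np.sqrt(np.mean((model(t_full) - y_full) ** 2))
    return err_full - err_sampled

# Usage with a hypothetical fitted model `fourier_fit` (a callable returning
# open-receptor counts for an array of times in ms):
#   gap = overfitting_gap(fourier_fit, t_full, y_full)
# A gap far above zero indicates behavior like that shown in Figure 4(e).
```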

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve. Among the best three, the biological relationship with


Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

Model                                        Mean fitness   Ranking   nWins
8-term Gauss series                          9.99E-01       1.63       4
4-by-4 degree polynomial rational function   9.99E-01       1.67       2
2-term exponential function                  9.97E-01       2.73       0
8-term Fourier series                        9.44E-01       3.99      -2
9th degree polynomial                        8.43E-01       4.97      -4

the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540 and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

    \mathrm{AMPA}_{\mathrm{open}}(t) = 51.749\, e^{-2.716\, t} - 53.540\, e^{-30.885\, t}.   (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c and d) was obtained.
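For reference, a minimal sketch of evaluating the fitted 2-term exponential model of equation (20) against a sampled reference curve is given below. The coefficient values are those reported above (as reconstructed here), and treating R² as the goodness-of-fit measure is an assumption of this example.

```python
# Evaluate the 2-term exponential fit of equation (20) against reference data.
import numpy as np

def ampa_open(t, a=51.749, b=-2.716, c=-53.540, d=-30.885):
    """2-term exponential model a*exp(b*t) + c*exp(d*t), with t in ms."""
    return a * np.exp(b * t) + c * np.exp(d * t)

def r_squared(y_ref, y_fit):
    """Coefficient of determination, used here as an assumed fitness measure."""
    ss_res = np.sum((y_ref - y_fit) ** 2)
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)
    return 1.0 - ss_res / ss_tot

# Usage with a sampled reference curve (t_ref, y_ref) from one configuration:
#   print(r_squared(y_ref, ampa_open(t_ref)))
```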

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to match experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit to each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve on currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted extensive experimentation that yields significant results. First, it has been shown that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; this comparison can therefore be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit, very accurately, a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation.

These results have several interesting implications that can find application in future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled on these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size; our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.
[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.
[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.
[4] J. Boucher, H. Kroger, and A. Sík, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.
[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.
[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.
[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.
[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.
[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.
[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.

[11] J. Montes, E. Gomez, A. Merchan-Perez, J. DeFelipe, and J.-M. Pena, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.
[12] A. LaTorre, S. Muelas, and J. M. Pena, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.
[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.
[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.
[16] A. LaTorre, S. Muelas, J.-M. Pena, R. Santana, A. Merchan-Perez, and J.-R. Rodríguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.
[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Pena, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.
[18] S. Muelas, J. M. Pena, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.
[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing – Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.
[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.

[21] A. LaTorre, S. Muelas, and J. M. Pena, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.
[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.
[23] S. Muelas, A. Latorre, and J.-M. Pena, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.
[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.
[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.
[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.
[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.
[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.

16 Abstract and Applied Analysis

[29] L Zhao J Jiang C Song L Bao and J Gao ldquoParameter opti-mization for Bezier curve fitting based on genetic algorithmrdquoin Advances in Swarm Intelligence vol 7928 of Lecture Notes inComputer Science pp 451ndash458 Springer Berlin Germany 2013

[30] Centro de Supercomputacion y Visualizacion de Madrid(CeSViMa) 2014 httpwwwcesvimaupmes

[31] P Jonas G Major and B Sakmann ldquoQuantal components ofunitary EPSCs at the mossy fibre synapse on CA3 pyramidalcells of rat hippocampusrdquo The Journal of Physiology vol 472no 1 pp 615ndash663 1993

[32] KM Franks TM Bartol Jr andT J Sejnowski ldquoAmonte carlomodel reveals independent signaling at central glutamatergicsynapsesrdquo Biophysical Journal vol 83 no 5 pp 2333ndash23482002

[33] D A Rusakov and D M Kullmann ldquoExtrasynaptic gluta-mate diffusion in the hippocampus ultrastructural constraintsuptake and receptor activationrdquo The Journal of Neurosciencevol 18 no 9 pp 3158ndash3170 1998

[34] K Zheng A Scimemi and D A Rusakov ldquoReceptor actions ofsynaptically released glutamate the role of transporters on thescale from nanometers to micronsrdquo Biophysical Journal vol 95no 10 pp 4584ndash4596 2008

[35] A Momiyama R A Silver M Hausser et al ldquoThe densityof AMPA receptors activated by a transmitter quantum at theclimbing fibremdashPurkinje cell synapse in immature ratsrdquo TheJournal of Physiology vol 549 no 1 pp 75ndash92 2003

[36] D M Bates and D G Watts Nonlinear Regression Analysisand Its Applications Wiley Series in Probability and Math-ematical Statistics Applied Probability and Statistics Wiley-Interscience New York NY USA 1988

[37] D M Bates and J M Chambers ldquoNonlinear modelsrdquo inStatistical Models S J M Chambers and T J Hastie EdsWadsworth amp Brookscole 1992

[38] MathWorks MATLAB and Statistics Toolbox Release 2013aMathWorks 2014 httpwwwmathworkscomsupportsysreqsv-r2013a

[39] R Core Team R A Language and Environment for StatisticalComputing R Foundation for Statistical Computing 2011

[40] J H Holland Adaptation in Natural and Artificial Systems TheUniversity of Michigan Press Ann Arbor Mich USA 1975

[41] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash63 2000

[42] R Storn and K V Price ldquoDifferential evolutionmdasha simpleand efficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997

[43] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006

[44] HWang ZWu S Rahnamayan and L Kang ldquoA scalability testfor accelerated de using generalized opposition-based learningrdquoin Proceedings of the 9th International Conference on IntelligentSystemsDesign andApplications (ISDA rsquo09) pp 1090ndash1095 PisaItaly December 2009

[45] F J Solis and R J B Wets ldquoMinimization by random searchtechniquesrdquo Mathematics of Operations Research vol 6 no 1pp 19ndash30 1981

[46] L-Y Tseng and C Chen ldquoMultiple trajectory search for largescale global optimizationrdquo in Proceedings of the IEEE Congresson Evolutionary Computation (CEC rsquo08) pp 3052ndash3059 IEEEHong Kong June 2008

[47] N Hansen and A Ostermeier ldquoAdapting arbitrary normalmutation distributions in evolution strategies the covariancematrix adaptationrdquo in Proceedings of the IEEE InternationalConference on Evolutionary Computation (ICEC rsquo96) pp 312ndash317 May 1996

[48] N Hansen and A Ostermeier ldquoCompletely derandomized self-adaptation in evolution strategiesrdquo Evolutionary Computationvol 9 no 2 pp 159ndash195 2001

[49] A Auger and N Hansen ldquoA restart CMA evolution strategywith increasing population sizerdquo in Proceedings of the IEEECongress on Evolutionary Computation (CEC rsquo05) pp 1769ndash1776 IEEE Press New York NY USA September 2005

[50] H Theil Economic Forecasts and Policy Contributions toEconomicAnalysis Series North-Holland NewYork NYUSA1961 httpbooksgoogleesbooksid=FPVdAAAAIAAJ

[51] G Taguchi S Chowdhury and Y Wu Taguchirsquos QualityEngineering Handbook John Wiley amp Sons 2005

[52] J H Zar Biostatistical Analysis Pearson New InternationalEdition Prentice-Hall 2013

[53] WWDanielAppliedNonparametric Statistics DuxburyThom-son Learning 1990

[54] S Garcıa D Molina M Lozano and F Herrera ldquoA study onthe use of non-parametric tests for analyzing the evolutionaryalgorithmsrsquo behaviour a case study on the CECrsquo2005 SpecialSession on Real Parameter Optimizationrdquo Journal of Heuristicsvol 15 no 6 pp 617ndash644 2009

[55] F Wilcoxon ldquoIndividual comparisons by ranking methodsrdquoBiometrics Bulletin vol 1 no 6 pp 80ndash83 1945

[56] A LaTorre S Muelas and J-M Pena ldquoEvaluating the multipleoffspring sampling framework on complex continuous opti-mization functionsrdquoMemetic Computing vol 5 no 4 pp 295ndash309 2013

[57] S Muelas J M Pena V Robles A LaTorre and P de MiguelldquoMachine learningmethods to analyze migration parameters inparallel genetic algorithmsrdquo in Proceedings of the InternationalWorkshop on Hybrid Artificial Intelligence Systems 2007 ECorchado J M Corchado and A Abraham Eds pp 199ndash206Springer Salamanca Spain 2007


[Figure 3: glutamate concentration (mol·L−1) versus time (ms).]

Figure 3: Example of curve-fit solution for the glutamate concentration problem. The continuous line shows a solution obtained by fitting the proposed exponential decay model with the Self-Adaptive DE algorithm. The dashed line shows the reference data obtained from biochemical simulations. For this example, R² = 0.990.

results with an average precision of 94%, the best overall ranking, and the highest number of wins, being the best algorithm in all the pairwise comparisons. Following IPOP-CMA-ES, two DE variants (Self-Adaptive DE and classic DE) take places two and three in the ranking. It is important to note that three of the considered metaheuristics are able to outperform the NLLS algorithm, which was the basis of the previous work in [11]. Also relevant is the poor performance of the two local searches considered, which seem to be inadequate for this type of problem. Finally, the same statistical validation as for the individual problems has been followed, and the results reported in Tables 21 and 22 confirm the superior performance of the IPOP-CMA-ES algorithm, which obtains significant differences with all the algorithms under both statistical tests.
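For readers who want to reproduce this kind of validation, the following is a minimal sketch (not the authors' actual scripts) of a global Friedman test followed by pairwise Wilcoxon signed-rank tests against a control algorithm, with Holm's correction applied to the pairwise p values. The data layout (a runs-by-algorithms array named `fitness`) and the variable names are illustrative assumptions.

```python
# Minimal sketch of the statistical validation described above: a global Friedman test
# over all algorithms, then pairwise Wilcoxon signed-rank tests against a control
# algorithm, with Holm's correction applied to the pairwise p values.
# `fitness` is assumed to be a (runs x algorithms) array; `names` labels its columns.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def validate(fitness, names, control=0, alpha=0.05):
    columns = [fitness[:, j] for j in range(fitness.shape[1])]
    _, p_friedman = friedmanchisquare(*columns)          # global ranking test
    others = [j for j in range(fitness.shape[1]) if j != control]
    raw = [wilcoxon(fitness[:, control], fitness[:, j]).pvalue for j in others]
    reject, corrected, _, _ = multipletests(raw, alpha=alpha, method="holm")
    print(f"Friedman p value: {p_friedman:.2e}")
    for j, p, p_adj, sig in zip(others, raw, corrected, reject):
        print(f"{names[control]} vs {names[j]}: raw p = {p:.2e}, "
              f"Holm-adjusted p = {p_adj:.2e}, significant = {sig}")
```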

3.4. Biological Interpretation of the Results. To understand the biological implications of this study, we have to consider each of the two curve-fitting problems independently (glutamate concentration and AMPA open receptors).

3.4.1. Glutamate Concentration Curves. In the case of the glutamate concentration curves, the results shown in Tables 2, 3, and 4 illustrate two important aspects. Firstly, the high fitness values obtained with many optimization techniques (most of them around 99%) prove that the glutamate diffusion equation proposed in Section 2.1 (equation (9)) is, as expected, a very accurate model of this phenomenon. This equation can therefore be efficiently used to interpret and predict the glutamate diffusion process, given that adequate means for calculating the υ coefficient are used. Secondly, selecting the appropriate curve-fitting technique is also important in this case, since general-purpose, commonly used algorithms (such as the NLLS used in [11]) are now known to produce suboptimal results. Figure 3 shows an example of a glutamate concentration curve solution. (The synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters please refer to [11].) In this example the initial glutamate concentration was C₀ = 4.749 × 10⁻³ mol/L and its coefficient of diffusion D_A = 0.33 μm²/ms. The curve-fitting solution produced υ = 6.577 × 10⁻⁶, resulting in the following equation:

Φ_A(t) = 4.749 × 10⁻³ × e^(−0.33×t/(6.577×10⁻⁶)).  (19)

Table 3: Raw p values (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus | Friedman p value | Wilcoxon p value
DE                      | 1.00E+00 | 1.00E+00
GA                      | 1.00E+00 | 1.00E+00
Solis and Wets          | 1.00E+00 | 1.00E+00
GODE                    | 1.00E+00 | 1.00E+00
IPOP-CMA-ES             | 1.00E+00 | 1.00E+00
MTS-LS1                 | 6.57E-01 | 8.98E-03
NLLS constrained        | 0.00E+00 | 1.47E-70
NLLS unconstrained      | 0.00E+00 | 1.47E-70

Table 4: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the glutamate concentration problem.

Self-Adaptive DE versus | Friedman* p value | Wilcoxon* p value
DE                      | 1.00E+00   | 1.00E+00
GA                      | 1.00E+00   | 1.00E+00
Solis and Wets          | 1.00E+00   | 1.00E+00
GODE                    | 1.00E+00   | 1.00E+00
IPOP-CMA-ES             | 1.00E+00   | 1.00E+00
MTS-LS1                 | 1.00E+00   | 5.39E-02
NLLS constrained        | 0.00E+00 √ | 1.18E-69 √
NLLS unconstrained      | 0.00E+00 √ | 1.18E-69 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

It is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different glutamate concentration curve, from which a different curve-fitting solution (a different υ value) was obtained.
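As a concrete illustration, the decay solution of equation (19) can be evaluated and scored against a sampled simulation curve with an R²-style goodness-of-fit measure such as the one reported in the figure captions. The sketch below assumes the reconstructed form Φ_A(t) = C₀·e^(−D_A·t/υ); the reference arrays `t_ref` and `c_ref` are hypothetical placeholders, not part of the original paper.

```python
# Sketch of evaluating the fitted glutamate-decay solution of equation (19) against a
# sampled reference curve, assuming the reconstructed form Phi_A(t) = C0*exp(-D_A*t/upsilon).
# The reference arrays mentioned below are hypothetical placeholders for one sampled curve.
import numpy as np

C0 = 4.749e-3       # initial glutamate concentration (mol/L) for this configuration
D_A = 0.33          # glutamate diffusion coefficient (um^2/ms)
upsilon = 6.577e-6  # coefficient obtained by the curve-fitting step

def phi_A(t):
    """Glutamate concentration at time t (ms) for this synaptic configuration."""
    return C0 * np.exp(-D_A * t / upsilon)

def r_squared(y_ref, y_model):
    """Coefficient of determination between reference samples and model values."""
    ss_res = np.sum((y_ref - y_model) ** 2)
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical usage with a sampled curve (t_ref, c_ref) of 100 points:
# print(r_squared(c_ref, phi_A(t_ref)))
```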

3.4.2. AMPA Open Receptors Curves. The case of the open AMPA receptor curves requires further study. Optimization results (Tables 5 to 22) show that, when using the appropriate metaheuristic, most of the mathematical equations studied can be fitted to the experimental data with excellent numerical results (all curves can be fitted with more than 90% accuracy, with the exception of the 9th degree polynomial).
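To make the fitting task itself concrete, the sketch below shows how a population-based metaheuristic can search the coefficient space of a candidate equation by minimizing 1 − R² against a sampled curve. It is a plain, textbook DE/rand/1/bin loop for illustration only, not the Self-Adaptive DE, GODE, or IPOP-CMA-ES implementations actually benchmarked; `model`, `bounds`, `t`, and `y_ref` are placeholders supplied by the caller.

```python
# Generic DE/rand/1/bin sketch: fit the free coefficients of `model` to (t, y_ref)
# by minimizing 1 - R^2. Illustrative only; not the benchmarked algorithm variants.
import numpy as np

def de_fit(model, t, y_ref, bounds, pop_size=40, max_gens=500, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T          # bounds: one (low, high) pair per coefficient
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    def cost(p):
        res = y_ref - model(t, p)
        return np.sum(res ** 2) / np.sum((y_ref - y_ref.mean()) ** 2)   # equals 1 - R^2

    fit = np.array([cost(p) for p in pop])
    for _ in range(max_gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals different from i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one coordinate taken from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], 1.0 - fit[best]             # fitted coefficients and their R^2
```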

Table 5: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 4-by-4 degree polynomial rational function.

Algorithm          | Mean fitness | Ranking | nWins
IPOP-CMA-ES        | 9.99E-01 | 1.86 | 8
Self-Adaptive DE   | 9.99E-01 | 2.08 | 5
DE                 | 9.99E-01 | 2.09 | 5
GA                 | 9.35E-01 | 4.86 | 2
MTS-LS1            | 9.00E-01 | 6.04 | 0
GODE               | 9.18E-01 | 6.04 | -2
NLLS constrained   | 8.60E-01 | 7.04 | -4
Solis and Wets     | 4.30E-01 | 7.17 | -6
NLLS unconstrained | 8.43E-01 | 7.84 | -8

Table 6: Raw p values (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus | Friedman p value | Wilcoxon p value
Self-Adaptive DE   | 2.14E-01 | 2.03E-12
DE                 | 1.90E-01 | 3.25E-14
GA                 | 0.00E+00 | 5.52E-84
MTS-LS1            | 0.00E+00 | 6.31E-84
GODE               | 0.00E+00 | 5.40E-84
NLLS constrained   | 0.00E+00 | 1.99E-83
Solis and Wets     | 0.00E+00 | 1.60E-86
NLLS unconstrained | 0.00E+00 | 5.91E-84

Table 7: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomial rational function.

IPOP-CMA-ES versus | Friedman* p value | Wilcoxon* p value
Self-Adaptive DE   | 3.80E-01   | 2.03E-12 √
DE                 | 3.80E-01   | 6.50E-14 √
GA                 | 0.00E+00 √ | 3.78E-83 √
MTS-LS1            | 0.00E+00 √ | 3.78E-83 √
GODE               | 0.00E+00 √ | 3.78E-83 √
NLLS constrained   | 0.00E+00 √ | 5.96E-83 √
Solis and Wets     | 0.00E+00 √ | 1.28E-85 √
NLLS unconstrained | 0.00E+00 √ | 3.78E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

The five mathematical models studied are compared in Table 23, selecting for each the results provided by the best optimization technique in each case. As explained, most curve equations produce very accurate results (only the 9th degree polynomial falls clearly behind), although the number of wins for equations with similar average fitness may differ due to small variations in individual executions. As a consequence, the question of which model is best suited in this case requires deeper analysis.

From a strictly numerical point of view, the 8-term Gauss series fitted with the IPOP-CMA-ES algorithm produces the best results, so it is the first logical choice.

Table 8: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Fourier series.

Algorithm          | Mean fitness | Ranking | nWins
Self-Adaptive DE   | 9.44E-01 | 1.26 | 8
IPOP-CMA-ES        | 9.19E-01 | 1.91 | 6
GODE               | 8.13E-01 | 3.46 | 4
DE                 | 7.21E-01 | 4.34 | 2
GA                 | 6.99E-01 | 4.79 | 0
NLLS unconstrained | 6.20E-01 | 5.46 | -2
MTS-LS1            | 4.16E-01 | 7.29 | -4
NLLS constrained   | 3.87E-01 | 7.49 | -6
Solis and Wets     | 0.00E+00 | 9.00 | -8

Table 9: Raw p values (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus | Friedman p value | Wilcoxon p value
IPOP-CMA-ES        | 1.67E-04 | 5.89E-14
GODE               | 0.00E+00 | 1.58E-83
DE                 | 0.00E+00 | 1.09E-83
GA                 | 0.00E+00 | 8.67E-82
NLLS unconstrained | 0.00E+00 | 8.08E-83
MTS-LS1            | 0.00E+00 | 6.43E-84
NLLS constrained   | 0.00E+00 | 6.36E-84
Solis and Wets     | 0.00E+00 | 6.21E-84

Table 10: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series.

Self-Adaptive DE versus | Friedman* p value | Wilcoxon* p value
IPOP-CMA-ES        | 1.67E-04 √ | 5.89E-14 √
GODE               | 0.00E+00 √ | 6.32E-83 √
DE                 | 0.00E+00 √ | 5.44E-83 √
GA                 | 0.00E+00 √ | 1.73E-81 √
NLLS unconstrained | 0.00E+00 √ | 2.42E-82 √
MTS-LS1            | 0.00E+00 √ | 4.97E-83 √
NLLS constrained   | 0.00E+00 √ | 4.97E-83 √
Solis and Wets     | 0.00E+00 √ | 4.97E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

In addition, we consider the nature of the biological process modeled (AMPA synaptic receptor activation after glutamate release). Although it cannot be modeled analytically (as explained in Section 2.1), we know that a key parameter of this process is the neurotransmitter concentration. From Fick's second law of diffusion we know that this concentration must follow an exponential decay, so it is logical to assume that some sort of exponential term has to be present in the AMPA open receptors curve. This favors the 8-term Gauss series and the 2-term exponential function as the most biologically feasible equations, since both contain exponential components.

To continue our analysis, we performed an exploratory graphical study of the curve solutions provided by the different optimization algorithms.

Table 11: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 8-term Gauss series.

Algorithm          | Mean fitness | Ranking | nWins
IPOP-CMA-ES        | 9.99E-01 | 1.00 | 8
NLLS unconstrained | 9.73E-01 | 2.23 | 6
NLLS constrained   | 9.31E-01 | 2.82 | 4
GA                 | 7.80E-01 | 3.95 | 2
GODE               | 5.24E-01 | 5.00 | 0
Self-Adaptive DE   | 3.21E-01 | 6.79 | -3
MTS-LS1            | 3.23E-01 | 6.85 | -3
DE                 | 2.97E-01 | 7.35 | -6
Solis and Wets     | 0.00E+00 | 9.00 | -8

Table 12: Raw p values (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus | Friedman p value | Wilcoxon p value
NLLS unconstrained | 1.34E-12 | 1.90E-83
NLLS constrained   | 0.00E+00 | 1.33E-83
GA                 | 0.00E+00 | 6.22E-84
GODE               | 0.00E+00 | 6.30E-84
Self-Adaptive DE   | 0.00E+00 | 6.28E-84
MTS-LS1            | 0.00E+00 | 6.32E-84
DE                 | 0.00E+00 | 6.29E-84
Solis and Wets     | 0.00E+00 | 4.75E-111

Table 13: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series.

IPOP-CMA-ES versus | Friedman p value | Wilcoxon p value
NLLS unconstrained | 1.34E-12 √ | 4.35E-83 √
NLLS constrained   | 0.00E+00 √ | 4.35E-83 √
GA                 | 0.00E+00 √ | 4.35E-83 √
GODE               | 0.00E+00 √ | 4.35E-83 √
Self-Adaptive DE   | 0.00E+00 √ | 4.35E-83 √
MTS-LS1            | 0.00E+00 √ | 4.35E-83 √
DE                 | 0.00E+00 √ | 4.35E-83 √
Solis and Wets     | 0.00E+00 √ | 3.80E-110 √

√ means that there are statistical differences with significance level α = 0.05.

Figure 4 shows an example of curve fit for every equation, fitted using the corresponding best algorithm for each case according to our experiments. (As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

Algorithm          | Mean fitness | Ranking | nWins
Self-Adaptive DE   | 9.97E-01 | 1.69 | 8
DE                 | 9.96E-01 | 1.86 | 6
IPOP-CMA-ES        | 9.97E-01 | 2.46 | 4
Solis and Wets     | 7.32E-01 | 4.84 | 2
MTS-LS1            | 6.27E-01 | 5.43 | 0
NLLS unconstrained | 6.20E-01 | 5.73 | -2
NLLS constrained   | 3.47E-01 | 7.25 | -4
GODE               | 2.25E-01 | 7.68 | -6
GA                 | 1.65E-01 | 8.06 | -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus | Friedman p value | Wilcoxon p value
DE                 | 3.38E-01 | 6.11E-10
IPOP-CMA-ES        | 9.00E-06 | 9.79E-54
Solis and Wets     | 0.00E+00 | 6.31E-84
MTS-LS1            | 0.00E+00 | 6.26E-84
NLLS unconstrained | 0.00E+00 | 6.31E-84
NLLS constrained   | 0.00E+00 | 6.32E-84
GODE               | 0.00E+00 | 6.04E-84
GA                 | 0.00E+00 | 5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus | Friedman* p value | Wilcoxon* p value
DE                 | 3.38E-01   | 6.11E-10 √
IPOP-CMA-ES        | 1.80E-05 √ | 1.96E-53 √
Solis and Wets     | 0.00E+00 √ | 4.23E-83 √
MTS-LS1            | 0.00E+00 √ | 4.23E-83 √
NLLS unconstrained | 0.00E+00 √ | 4.23E-83 √
NLLS constrained   | 0.00E+00 √ | 4.23E-83 √
GODE               | 0.00E+00 √ | 4.23E-83 √
GA                 | 0.00E+00 √ | 4.21E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(2) The curve fitted with the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation is small enough not to affect the overall fitness, but it implies that the adjusted curves model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulated ones (Figure 4(f)).

[Figure 4 panels, each plotting open AMPA receptors against time (ms): (a) 8-term Gauss series (R² = 0.999); (b) 4-by-4 degree polynomial rational (R² = 0.999); (c) 2-term exponential function (R² = 0.997); (d) 9th degree polynomial (R² = 0.836); (e) 8-term Fourier series (R² = 0.976); (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.

Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

Algorithm          | Mean fitness | Ranking | nWins
NLLS unconstrained | 8.43E-01 | 2.44 | 8
Self-Adaptive DE   | 8.42E-01 | 2.50 | 5
DE                 | 8.42E-01 | 2.50 | 5
NLLS constrained   | 8.02E-01 | 2.72 | 2
IPOP-CMA-ES        | 8.26E-01 | 4.95 | 0
GA                 | 0.00E+00 | 7.48 | -5
Solis and Wets     | 0.00E+00 | 7.48 | -5
MTS-LS1            | 0.00E+00 | 7.48 | -5
GODE               | 0.00E+00 | 7.48 | -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus | Friedman p value | Wilcoxon p value
Self-Adaptive DE   | 7.42E-01 | 6.52E-05
DE                 | 7.42E-01 | 6.52E-05
NLLS constrained   | 1.08E-01 | 8.91E-06
IPOP-CMA-ES        | 0.00E+00 | 5.34E-84
GA                 | 0.00E+00 | 5.92E-84
Solis and Wets     | 0.00E+00 | 5.92E-84
MTS-LS1            | 0.00E+00 | 5.92E-84
GODE               | 0.00E+00 | 5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus | Friedman* p value | Wilcoxon* p value
Self-Adaptive DE   | 1.00E+00   | 1.30E-04 √
DE                 | 1.00E+00   | 1.30E-04 √
NLLS constrained   | 3.25E-01   | 2.67E-05 √
IPOP-CMA-ES        | 0.00E+00 √ | 4.27E-83 √
GA                 | 0.00E+00 √ | 4.27E-83 √
Solis and Wets     | 0.00E+00 √ | 4.27E-83 √
MTS-LS1            | 0.00E+00 √ | 4.27E-83 √
GODE               | 0.00E+00 √ | 4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

This is due to the fact that not all data points obtained during simulation are used when evaluating the fitness value. The high-resolution simulations on which this work is based produce 10000-point curves per simulation. Testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve-fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11] for the same reasons.
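The sampling step just described, and the kind of overfitting it can hide, can be sketched as follows: the 10000-point simulated curve is reduced to 100 evenly spaced samples for fitness evaluation, and re-scoring a fitted model on the full curve is one simple way to expose a solution that oscillates between the sampled points. Function and array names are illustrative assumptions, not the authors' code.

```python
# Sketch of the sampling step described above, plus a simple overfitting check:
# score a fitted model both on the sampled points (as done during fitting) and on
# the full simulated curve. A large gap between the two scores suggests an
# overfitted, oscillating solution such as the 8-term Fourier case.
import numpy as np

def downsample(t_full, y_full, n_points=100):
    """Keep n_points evenly spaced samples of a high-resolution simulated curve."""
    idx = np.linspace(0, len(t_full) - 1, n_points).round().astype(int)
    return t_full[idx], y_full[idx]

def r_squared(y_ref, y_model):
    ss_res = np.sum((y_ref - y_model) ** 2)
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)
    return 1.0 - ss_res / ss_tot

def overfitting_gap(model, t_full, y_full, n_points=100):
    """Difference between the sampled-points score and the full-curve score."""
    t_s, y_s = downsample(t_full, y_full, n_points)
    return r_squared(y_s, model(t_s)) - r_squared(y_full, model(t_full))
```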

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

Algorithm          | Mean fitness | Ranking | nWins
IPOP-CMA-ES        | 9.48E-01 | 2.44 | 8
Self-Adaptive DE   | 8.21E-01 | 2.86 | 6
DE                 | 7.71E-01 | 3.62 | 4
NLLS unconstrained | 7.80E-01 | 4.74 | 2
NLLS constrained   | 6.65E-01 | 5.46 | 0
GA                 | 5.16E-01 | 5.83 | -2
GODE               | 4.96E-01 | 5.93 | -4
MTS-LS1            | 4.53E-01 | 6.61 | -6
Solis and Wets     | 2.32E-01 | 7.50 | -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus | Friedman p value | Wilcoxon p value
Self-Adaptive DE   | 4.15E-08 | 5.01E-05
DE                 | 0.00E+00 | 2.16E-86
NLLS unconstrained | 0.00E+00 | 4.81E-306
NLLS constrained   | 0.00E+00 | 0.00E+00
GA                 | 0.00E+00 | 0.00E+00
GODE               | 0.00E+00 | 0.00E+00
MTS-LS1            | 0.00E+00 | 0.00E+00
Solis and Wets     | 0.00E+00 | 0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus | Friedman* p value | Wilcoxon* p value
Self-Adaptive DE   | 4.15E-08 √ | 5.01E-05 √
DE                 | 0.00E+00 √ | 4.31E-86 √
NLLS unconstrained | 0.00E+00 √ | 1.44E-305 √
NLLS constrained   | 0.00E+00 √ | 0.00E+00 √
GA                 | 0.00E+00 √ | 0.00E+00 √
GODE               | 0.00E+00 √ | 0.00E+00 √
MTS-LS1            | 0.00E+00 √ | 0.00E+00 √
Solis and Wets     | 0.00E+00 √ | 0.00E+00 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that pass very close to most test points. The shapes of the resulting curves, however, are completely different from the real phenomenon observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve.

Table 23: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

Model                                      | Mean fitness | Ranking | nWins
8-term Gauss series                        | 9.99E-01 | 1.63 | 4
4-by-4 degree polynomial rational function | 9.99E-01 | 1.67 | 2
2-term exponential function                | 9.97E-01 | 2.73 | 0
8-term Fourier series                      | 9.44E-01 | 3.99 | -2
9th degree polynomial                      | 8.43E-01 | 4.97 | -4

Between the best three, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss series and the 2-term exponential function over the 4-by-4 polynomial rational. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = −2.716, c = −53.540, and d = −30.885, resulting in the following equation (also plotted in Figure 4(c)):

AMPA_open(t) = 51.749 × e^(−2.716×t) − 53.540 × e^(−30.885×t).  (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
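For completeness, the particular solution in equation (20) can be evaluated directly. The short sketch below reproduces the curve of Figure 4(c) on a 0–5 ms grid and locates its peak numerically, using only the coefficients quoted above; the grid resolution is an illustrative choice.

```python
# Sketch of evaluating the 2-term exponential solution of equation (20) for this
# particular synaptic configuration and locating its peak numerically.
import numpy as np

a, b, c, d = 51.749, -2.716, -53.540, -30.885   # coefficients from equation (20)

def ampa_open(t):
    """Number of open AMPA receptors at time t (ms), 2-term exponential model."""
    return a * np.exp(b * t) + c * np.exp(d * t)

t = np.linspace(0.0, 5.0, 5001)                 # 0-5 ms grid, matching Figure 4
y = ampa_open(t)
peak = np.argmax(y)
print(f"peak of {y[peak]:.1f} open receptors at t = {t[peak]:.3f} ms")
```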

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit for each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted an extensive experimentation that yields significant results. First, it has been proved that several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; this comparison can thus be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit, very accurately, a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications for future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled on these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.

[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.

[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.

[4] J. Boucher, H. Kroger, and A. Sík, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.

[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.

[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.

[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.

[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.

[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.

[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.

[11] J. Montes, E. Gomez, A. Merchán-Pérez, J. DeFelipe, and J.-M. Peña, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.

[12] A. LaTorre, S. Muelas, and J. M. Peña, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.

[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.

[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.

[16] A. LaTorre, S. Muelas, J.-M. Peña, R. Santana, A. Merchán-Pérez, and J.-R. Rodríguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.

[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Peña, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.

[18] S. Muelas, J. M. Peña, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.

[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing – Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.

[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.

[21] A. LaTorre, S. Muelas, and J. M. Peña, "Benchmarking a hybrid DERHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.

[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.

[23] S. Muelas, A. LaTorre, and J.-M. Peña, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.

[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.

[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.

[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.

[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.

[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.

[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.

[30] Centro de Supercomputación y Visualización de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.

[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.

[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.

[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.

[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.

[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre–Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.

[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.

[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.

[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.

[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.

[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.

[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.

[42] R. Storn and K. V. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.

[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.

[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.

[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.

[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.

[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.

[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.

[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.

[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.

[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.

[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.

[54] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] A. LaTorre, S. Muelas, and J.-M. Peña, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.

[57] S. Muelas, J. M. Peña, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.


Page 10: Research Article Comparative Study of …downloads.hindawi.com/journals/aaa/2015/708131.pdfis constant, we rename it to 0 and we n ally obtain a solutionto in the following form: (

10 Abstract and Applied Analysis

Table 5Meanfitness average ranking andnumber ofwins (nWins)in pair-wise comparisons with the other algorithms on the 4-by-4degree polynomial rational function

Mean fitness Ranking nWinsIPOP-CMA-ES 999119864 minus 01 186 8Self-Adaptive DE 999119864 minus 01 208 5DE 999119864 minus 01 209 5GA 935119864 minus 01 486 2MTS-LS1 900119864 minus 01 604 0GODE 918119864 minus 01 604 minus2NLLS constrained 860119864 minus 01 704 minus4Solis and Wets 430119864 minus 01 717 minus6NLLS unconstrained 843119864 minus 01 784 minus8

Table 6 Raw 119901 values (IPOP-CMA-ES is the control algorithm) forthe 4-by-4 degree polynomial rational function

IPOP-CMA-ES versus Friedman 119901 value Wilcoxon 119901 valueSelf-Adaptive DE 214119864 minus 01 203119864 minus 12

DE 190119864 minus 01 325119864 minus 14

GA 000119864 + 00 552119864 minus 84

MTS-LS1 000119864 + 00 631119864 minus 84

GODE 000119864 + 00 540119864 minus 84

NLLS constrained 000119864 + 00 199119864 minus 83

Solis and Wets 000119864 + 00 160119864 minus 86

NLLS unconstrained 000119864 + 00 591119864 minus 84

Table 7 Corrected 119901 values according to Holmrsquos method (IPOP-CMA-ES is the control algorithm) for the 4-by-4 degree polynomialrational function

IPOP-CMA-ES versus Friedmanlowast119901 value Wilcoxonlowast119901 valueSelf-Adaptive DE 380119864 minus 01 203119864 minus 12

radic

DE 380119864 minus 01 650119864 minus 14radic

GA 000119864 + 00radic

378119864 minus 83radic

MTS-LS1 000119864 + 00radic

378119864 minus 83radic

GODE 000119864 + 00radic

378119864 minus 83radic

NLLS constrained 000119864 + 00radic

596119864 minus 83radic

Solis and Wets 000119864 + 00radic

128119864 minus 85radic

NLLS unconstrained 000119864 + 00radic

378119864 minus 83radic

radicmeans that there are statistical differences with significance level 120572 = 005lowastmeans these are adjusted 119901-values according to Holmrsquos method

mathematical models studied are compared in Table 23selecting for them the results provided by the best opti-mization technique in each case As explained most curveequations produce very accurate results (only the 9th degreepolynomial falls clearly behind) although the number ofwinsfor equations with similar average fitness may differ due tosmall variations on individual executions As a consequenceof this the question of which model is best suited in this caserequires deeper analysis

From a strictly numerical point of view the 8-term Gaussseries fitted with the IPOP-CMA-ES algorithm produces thebest results so it is the first logical choice In additionwe consider the nature of the biological process modeled

Table 8Meanfitness average ranking andnumber ofwins (nWins)in pair-wise comparisons with the other algorithms on the 8-termFourier series

Mean fitness Ranking nWinsSelf-Adaptive DE 944119864 minus 01 126 8IPOP-CMA-ES 919119864 minus 01 191 6GODE 813119864 minus 01 346 4DE 721119864 minus 01 434 2GA 699119864 minus 01 479 0NLLS unconstrained 620119864 minus 01 546 minus2MTS-LS1 416119864 minus 01 729 minus4NLLS constrained 387119864 minus 01 749 minus6Solis and Wets 000119864 + 00 900 minus8

Table 9 Raw 119901 values (Self-Adaptive DE is the control algorithm)for the 8-term Fourier series

Self-Adaptive DE versus Friedman 119901 value Wilcoxon 119901 valueIPOP-CMA-ES 167119864 minus 04 589119864 minus 14

GODE 000119864 + 00 158119864 minus 83

DE 000119864 + 00 109119864 minus 83

GA 000119864 + 00 867119864 minus 82

NLLS unconstrained 000119864 + 00 808119864 minus 83

MTS-LS1 000119864 + 00 643119864 minus 84

NLLS constrained 000119864 + 00 636119864 minus 84

Solis and Wets 000119864 + 00 621119864 minus 84

Table 10 Corrected 119901 values according to Holmrsquos method (Self-Adaptive DE is the control algorithm) for the 8-term Fourier series

Self-Adaptive DE versus Friedmanlowast119901 value Wilcoxonlowast119901 valueIPOP-CMA-ES 167119864 minus 04

radic589119864 minus 14

radic

GODE 000119864 + 00radic

632119864 minus 83radic

DE 000119864 + 00radic

544119864 minus 83radic

GA 000119864 + 00radic

173119864 minus 81radic

NLLS unconstrained 000119864 + 00radic

242119864 minus 82radic

MTS-LS1 000119864 + 00radic

497119864 minus 83radic

NLLS constrained 000119864 + 00radic

497119864 minus 83radic

Solis and Wets 000119864 + 00radic

497119864 minus 83radic

radicmeans that there are statistical differences with significance level 120572 = 005lowastmeans these are adjusted 119901-values according to Holmrsquos method

(AMPA synaptic receptor activation after glutamate release)Although it cannot be modeled analytically (as explained inSection 21) we know that a key parameter of this processis neurotransmitter concentration From Fickrsquos second lawof diffusion we know that this concentration must follow anexponential decay so it is logical to assume that some sortof exponential term has to be present in the AMPA openreceptors curve This favors the 8-term Gauss series and 2-term exponential function as the most biologically feasibleequations since both contain exponential components

To continue our analysis we performed an exploratorygraphical study of the curve solutions provided by thedifferent optimization algorithms Figure 4 shows an exampleof curve-fit for every equation fitted using the corresponding

Abstract and Applied Analysis 11

Table 11 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms on the8-term Gauss series

Mean fitness Ranking nWinsIPOP-CMA-ES 999119864 minus 01 100 8NLLS unconstrained 973119864 minus 01 223 6NLLS constrained 931119864 minus 01 282 4GA 780119864 minus 01 395 2GODE 524119864 minus 01 500 0Self-Adaptive DE 321119864 minus 01 679 minus3MTS-LS1 323119864 minus 01 685 minus3DE 297119864 minus 01 735 minus6Solis and Wets 000119864 + 00 900 minus8

Table 12 Raw119901 values (IPOP-CMA-ES is the control algorithm) forthe 8-term Gauss series

IPOP-CMA-ES versus Friedman 119901 value Wilcoxon 119901 valueNLLS unconstrained 134119864 minus 12 190119864 minus 83

NLLS constrained 000119864 + 00 133119864 minus 83

GA 000119864 + 00 622119864 minus 84

GODE 000119864 + 00 630119864 minus 84

Self-Adaptive DE 000119864 + 00 628119864 minus 84

MTS-LS1 000119864 + 00 632119864 minus 84

DE 000119864 + 00 629119864 minus 84

Solis and Wets 000119864 + 00 475119864 minus 111

Table 13 Corrected 119901 values according to Holmrsquos method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series

IPOP-CMA-ES versus    Friedman p value   Wilcoxon p value
NLLS unconstrained    1.34E-12 √         4.35E-83 √
NLLS constrained      0.00E+00 √         4.35E-83 √
GA                    0.00E+00 √         4.35E-83 √
GODE                  0.00E+00 √         4.35E-83 √
Self-Adaptive DE      0.00E+00 √         4.35E-83 √
MTS-LS1               0.00E+00 √         4.35E-83 √
DE                    0.00E+00 √         4.35E-83 √
Solis and Wets        0.00E+00 √         3.80E-110 √

√ means that there are statistical differences with significance level α = 0.05.

(As in the glutamate concentration curve example, the synapse configuration used here had an AMPA receptor concentration of 1967 molecules/μm², a glutamate transporter concentration of 11560 molecules/μm², a postsynaptic length of 229 nm, a total apposition length of 305 nm, and a 20 nm wide synaptic cleft. For more details on these parameters, please refer to [11].) This study revealed several new findings:

(1) The 8-term Gauss, 4-by-4 degree polynomial rational, and 2-term exponential functions are clearly the best candidates for modeling the AMPA open receptors curve. This is consistent with the previously mentioned numerical results (Table 23).

Table 14: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 2-term exponential function.

Algorithm            Mean fitness   Ranking   nWins
Self-Adaptive DE     9.97E-01       1.69       8
DE                   9.96E-01       1.86       6
IPOP-CMA-ES          9.97E-01       2.46       4
Solis and Wets       7.32E-01       4.84       2
MTS-LS1              6.27E-01       5.43       0
NLLS unconstrained   6.20E-01       5.73      -2
NLLS constrained     3.47E-01       7.25      -4
GODE                 2.25E-01       7.68      -6
GA                   1.65E-01       8.06      -8

Table 15: Raw p values (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman p value   Wilcoxon p value
DE                        3.38E-01           6.11E-10
IPOP-CMA-ES               9.00E-06           9.79E-54
Solis and Wets            0.00E+00           6.31E-84
MTS-LS1                   0.00E+00           6.26E-84
NLLS unconstrained        0.00E+00           6.31E-84
NLLS constrained          0.00E+00           6.32E-84
GODE                      0.00E+00           6.04E-84
GA                        0.00E+00           5.26E-84

Table 16: Corrected p values according to Holm's method (Self-Adaptive DE is the control algorithm) for the 2-term exponential function.

Self-Adaptive DE versus   Friedman* p value   Wilcoxon* p value
DE                        3.38E-01            6.11E-10 √
IPOP-CMA-ES               1.80E-05 √          1.96E-53 √
Solis and Wets            0.00E+00 √          4.23E-83 √
MTS-LS1                   0.00E+00 √          4.23E-83 √
NLLS unconstrained        0.00E+00 √          4.23E-83 √
NLLS constrained          0.00E+00 √          4.23E-83 √
GODE                      0.00E+00 √          4.23E-83 √
GA                        0.00E+00 √          4.21E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

(2) The curve-fitting calculated for the 8-term Gauss series presents an abnormal perturbation near the curve peak (see the detail in Figure 4(a)). This perturbation seems to be small enough that the overall fitness is not affected, but it implies that the adjusted curves model the synaptic phenomenon in a less realistic way than the other options studied.

(3) The 9th degree polynomial produces low-quality solutions, clearly surpassed by the first three equations.

(4) The 8-term Fourier series seems to be overfitted to the training data. The numerical results are apparently good, which indicates that the curve points tested when calculating the fitness value are very close to the simulation ones (Figure 4(f)).


[Figure 4: six panels, each plotting Open AMPA receptors against Time (ms) over 0–5 ms. Panel titles: (a) 8-term Gauss series (R² = 0.999), with an inset detailing the curve peak; (b) 4-by-4 degree polynomial rational (R² = 0.999); (c) 2-term exponential function (R² = 0.997); (d) 9th degree polynomial (R² = 0.836); (e) 8-term Fourier series (R² = 0.976); (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.
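The goodness-of-fit values in the panel titles of Figure 4 are read here as coefficients of determination; the paper does not spell out the formula, so the following minimal sketch assumes the standard R² definition computed over the sampled points.

```python
import numpy as np

def r_squared(y_ref, y_fit):
    """Standard coefficient of determination between the sampled reference
    curve (from simulation) and the fitted curve evaluated at the same points."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_fit = np.asarray(y_fit, dtype=float)
    ss_res = np.sum((y_ref - y_fit) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_ref - np.mean(y_ref)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```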


Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

Algorithm            Mean fitness   Ranking   nWins
NLLS unconstrained   8.43E-01       2.44       8
Self-Adaptive DE     8.42E-01       2.50       5
DE                   8.42E-01       2.50       5
NLLS constrained     8.02E-01       2.72       2
IPOP-CMA-ES          8.26E-01       4.95       0
GA                   0.00E+00       7.48      -5
Solis and Wets       0.00E+00       7.48      -5
MTS-LS1              0.00E+00       7.48      -5
GODE                 0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE            7.42E-01           6.52E-05
DE                          7.42E-01           6.52E-05
NLLS constrained            1.08E-01           8.91E-06
IPOP-CMA-ES                 0.00E+00           5.34E-84
GA                          0.00E+00           5.92E-84
Solis and Wets              0.00E+00           5.92E-84
MTS-LS1                     0.00E+00           5.92E-84
GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE            1.00E+00            1.30E-04 √
DE                          1.00E+00            1.30E-04 √
NLLS constrained            3.25E-01            2.67E-05 √
IPOP-CMA-ES                 0.00E+00 √          4.27E-83 √
GA                          0.00E+00 √          4.27E-83 √
Solis and Wets              0.00E+00 √          4.27E-83 √
MTS-LS1                     0.00E+00 √          4.27E-83 √
GODE                        0.00E+00 √          4.27E-83 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

This is due to the fact that, when evaluating the fitness value, not all data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10,000-point curves per simulation. Testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve-fitting, reducing the number of points per curve to 100 (a sketch of this sampling step is given after this list). An identical sampling procedure was performed in [11] for the same reasons.

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.48E-01       2.44       8
Self-Adaptive DE     8.21E-01       2.86       6
DE                   7.71E-01       3.62       4
NLLS unconstrained   7.80E-01       4.74       2
NLLS constrained     6.65E-01       5.46       0
GA                   5.16E-01       5.83      -2
GODE                 4.96E-01       5.93      -4
MTS-LS1              4.53E-01       6.61      -6
Solis and Wets       2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus    Friedman p value   Wilcoxon p value
Self-Adaptive DE      4.15E-08           5.01E-05
DE                    0.00E+00           2.16E-86
NLLS unconstrained    0.00E+00           4.81E-306
NLLS constrained      0.00E+00           0.00E+00
GA                    0.00E+00           0.00E+00
GODE                  0.00E+00           0.00E+00
MTS-LS1               0.00E+00           0.00E+00
Solis and Wets        0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus    Friedman* p value   Wilcoxon* p value
Self-Adaptive DE      4.15E-08 √          5.01E-05 √
DE                    0.00E+00 √          4.31E-86 √
NLLS unconstrained    0.00E+00 √          1.44E-305 √
NLLS constrained      0.00E+00 √          0.00E+00 √
GA                    0.00E+00 √          0.00E+00 √
GODE                  0.00E+00 √          0.00E+00 √
MTS-LS1               0.00E+00 √          0.00E+00 √
Solis and Wets        0.00E+00 √          0.00E+00 √

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solution.
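The sampling step mentioned in point (4) above, which reduces each 10,000-point simulated curve to the 100 points actually tested, could be implemented as in the sketch below. Uniform subsampling and the toy curve are assumptions for illustration only; the paper only states that the data were sampled down to 100 points per curve.

```python
import numpy as np

def downsample_curve(t, y, n_points=100):
    """Keep n_points evenly spaced samples of a simulated time-series curve,
    so that fitness evaluation does not have to test all 10,000 points."""
    idx = np.linspace(0, len(t) - 1, n_points).round().astype(int)
    return t[idx], y[idx]

# Hypothetical 10,000-point simulation output reduced to 100 tested points
t_full = np.linspace(0.0, 5.0, 10_000)                               # time in ms
y_full = 40.0 * np.exp(-2.7 * t_full) * (1 - np.exp(-25.0 * t_full))  # toy curve
t_test, y_test = downsample_curve(t_full, y_full)
```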

Taking all this into consideration, we can try to select the most appropriate equation for the AMPA open receptors curve.


Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

Model                                        Mean fitness   Ranking   nWins
8-term Gauss series                          9.99E-01       1.63       4
4-by-4 degree polynomial rational function   9.99E-01       1.67       2
2-term exponential function                  9.97E-01       2.73       0
8-term Fourier series                        9.44E-01       3.99      -2
9th degree polynomial                        8.43E-01       4.97      -4

Among the three best candidates, the biological relationship with the neurotransmitter concentration (which follows an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in the 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

\[
\mathrm{AMPA}_{\mathrm{open}}(t) = 51.749 \times e^{-2.716 \times t} - 53.540 \times e^{-30.885 \times t}. \tag{20}
\]

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
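To make the overall procedure concrete, the following sketch fits the 2-term exponential model to a sampled curve using SciPy's stock differential evolution optimizer as a stand-in for the Self-Adaptive DE used in the paper, with a plain sum-of-squared-residuals objective as a stand-in for the paper's fitness function; the parameter bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def two_term_exp(t, a, b, c, d):
    """2-term exponential model: AMPA_open(t) = a*exp(b*t) + c*exp(d*t)."""
    return a * np.exp(b * t) + c * np.exp(d * t)

def fit_two_term_exp(t, y,
                     bounds=((0.0, 100.0), (-50.0, 0.0), (-100.0, 0.0), (-50.0, 0.0))):
    """Fit the four free coefficients by minimizing the sum of squared
    residuals over the sampled curve points (a stand-in objective)."""
    def sse(params):
        return float(np.sum((two_term_exp(t, *params) - y) ** 2))
    result = differential_evolution(sse, bounds, seed=0, tol=1e-10, maxiter=2000)
    return result.x

# Synthetic sampled curve generated from the coefficients of equation (20)
t = np.linspace(0.0, 5.0, 100)
y = 51.749 * np.exp(-2.716 * t) - 53.540 * np.exp(-30.885 * t)
a, b, c, d = fit_two_term_exp(t, y)
print(a, b, c, d)  # should land close to 51.749, -2.716, -53.540, -30.885
```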

4. Conclusions

In this paper we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fitting for each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted an extensive experimentation that yields significant results.

First, it has been proved that several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially the IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; thus this comparison can be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to very accurately fit a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can have an application in future neuroscience research. Firstly, as explained, we have identified the best possible candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.
[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.
[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.
[4] J. Boucher, H. Kroger, and A. Sik, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.
[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.
[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.
[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.
[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.
[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.
[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.
[11] J. Montes, E. Gomez, A. Merchan-Perez, J. DeFelipe, and J.-M. Pena, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.
[12] A. LaTorre, S. Muelas, and J. M. Pena, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.
[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.
[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.
[16] A. LaTorre, S. Muelas, J.-M. Pena, R. Santana, A. Merchan-Perez, and J.-R. Rodriguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.
[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Pena, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.
[18] S. Muelas, J. M. Pena, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.
[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing—Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.
[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.
[21] A. LaTorre, S. Muelas, and J. M. Pena, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.
[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.
[23] S. Muelas, A. LaTorre, and J.-M. Pena, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.
[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.
[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.
[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.
[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.
[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.


[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.
[30] Centro de Supercomputacion y Visualizacion de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.
[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.
[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.
[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.
[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.
[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre—Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.
[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.
[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.
[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.
[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.
[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.
[42] R. Storn and K. V. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.
[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.
[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.
[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.
[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.
[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.
[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.
[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.
[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.
[54] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.
[56] A. LaTorre, S. Muelas, and J.-M. Pena, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.
[57] S. Muelas, J. M. Pena, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 11: Research Article Comparative Study of …downloads.hindawi.com/journals/aaa/2015/708131.pdfis constant, we rename it to 0 and we n ally obtain a solutionto in the following form: (

Abstract and Applied Analysis 11

Table 11 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms on the8-term Gauss series

Mean fitness Ranking nWinsIPOP-CMA-ES 999119864 minus 01 100 8NLLS unconstrained 973119864 minus 01 223 6NLLS constrained 931119864 minus 01 282 4GA 780119864 minus 01 395 2GODE 524119864 minus 01 500 0Self-Adaptive DE 321119864 minus 01 679 minus3MTS-LS1 323119864 minus 01 685 minus3DE 297119864 minus 01 735 minus6Solis and Wets 000119864 + 00 900 minus8

Table 12 Raw119901 values (IPOP-CMA-ES is the control algorithm) forthe 8-term Gauss series

IPOP-CMA-ES versus Friedman 119901 value Wilcoxon 119901 valueNLLS unconstrained 134119864 minus 12 190119864 minus 83

NLLS constrained 000119864 + 00 133119864 minus 83

GA 000119864 + 00 622119864 minus 84

GODE 000119864 + 00 630119864 minus 84

Self-Adaptive DE 000119864 + 00 628119864 minus 84

MTS-LS1 000119864 + 00 632119864 minus 84

DE 000119864 + 00 629119864 minus 84

Solis and Wets 000119864 + 00 475119864 minus 111

Table 13 Corrected 119901 values according to Holmrsquos method (IPOP-CMA-ES is the control algorithm) for the 8-term Gauss series

IPOP-CMA-ES versus Friedman 119901 value Wilcoxon 119901 valueNLLS unconstrained 134119864 minus 12

radic435119864 minus 83

radic

NLLS constrained 000119864 + 00radic

435119864 minus 83radic

GA 000119864 + 00radic

435119864 minus 83radic

GODE 000119864 + 00radic

435119864 minus 83radic

Self-Adaptive DE 000119864 + 00radic

435119864 minus 83radic

MTS-LS1 000119864 + 00radic

435119864 minus 83radic

DE 000119864 + 00radic

435119864 minus 83radic

Solis and Wets 000119864 + 00radic

380119864 minus 110radic

radicmeans that there are statistical differences with significance level 120572 = 005

best algorithm for each case according to our experiments (Asin the glutamate concentration curve example the synapseconfiguration used here had an AMPA receptor concentra-tion of 1967molecules120583m2 a glutamate transporter con-centration of 11560molecules120583m2 a postsynaptic length of229 nm a total apposition length of 305 nm and a 20 nmwidesynaptic cleft For more details on these parameters pleaserefer to [11]) This study revealed several new findings

(1) The 8-termGauss 4-by-4 degree polynomial rationaland 2-term exponential functions are clearly the bestcandidates for modeling the AMPA open receptorscurve This is consistent with the previously men-tioned numerical results (Table 23)

Table 14 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms on the2-term exponential function

Mean fitness Ranking nWinsSelf-Adaptive DE 997119864 minus 01 169 8DE 996119864 minus 01 186 6IPOP-CMA-ES 997119864 minus 01 246 4Solis and Wets 732119864 minus 01 484 2MTS-LS1 627119864 minus 01 543 0NLLS unconstrained 620119864 minus 01 573 minus2NLLS constrained 347119864 minus 01 725 minus4GODE 225119864 minus 01 768 minus6GA 165119864 minus 01 806 minus8

Table 15 Raw 119901 values (Self-Adaptive DE is the control algorithm)for the 2-term exponential function

Self-Adaptive DE versus Friedman 119901 value Wilcoxon 119901 valueDE 338119864 minus 01 611119864 minus 10

IPOP-CMA-ES 900119864 minus 06 979119864 minus 54

Solis and Wets 000119864 + 00 631119864 minus 84

MTS-LS1 000119864 + 00 626119864 minus 84

NLLS unconstrained 000119864 + 00 631119864 minus 84

NLLS constrained 000119864 + 00 632119864 minus 84

GODE 000119864 + 00 604119864 minus 84

GA 000119864 + 00 526119864 minus 84

Table 16 Corrected 119901 values according to Holmrsquos method (Self-Adaptive DE is the control algorithm) for the 2-term exponentialfunction

Self-Adaptive DE versus Friedmanlowast119901 value Wilcoxonlowast119901 valueDE 338119864 minus 01 611119864 minus 10

radic

IPOP-CMA-ES 180119864 minus 05radic

196119864 minus 53radic

Solis and Wets 000119864 + 00radic

423119864 minus 83radic

MTS-LS1 000119864 + 00radic

423119864 minus 83radic

NLLS unconstrained 000119864 + 00radic

423119864 minus 83radic

NLLS constrained 000119864 + 00radic

423119864 minus 83radic

GODE 000119864 + 00radic

423119864 minus 83radic

GA 000119864 + 00radic

421119864 minus 83radic

radicmeans that there are statistical differences with significance level 120572 = 005lowastmeans these are adjusted 119901-values according to Holmrsquos method

(2) The curve-fitting calculated for the 8-term Gaussseries presents an abnormal perturbation near thecurve peak (see the detail in Figure 4(a)) This seemsto be small enough so the overall fitness is not affectedbut implies that apparently the adjusted curves modelthe synaptic phenomenon in a less realistic way thanother options studied

(3) The 9th degree polynomial produces low quality solu-tions clearly surpassed by the first three equations

(4) The 8-term Fourier series seems to be overfitted tothe training dataThenumerical results are apparentlygood which indicates that the curve points tested

12 Abstract and Applied Analysis

40

35

30

25

20

15

10

5

0

minus5

0 1 2 3 4 5

Time (ms)

39

38

37

36

35

006 012

Ope

n A

MPA

rece

ptor

s

(a) 8-term Gauss series (2 = 0999)

0 1 2 3 4 5

Time (ms)

40

35

30

25

20

15

10

5

0

minus5

Ope

n A

MPA

rece

ptor

s

(b) 4-by-4 deg polynomial rational (2 = 0999)

0 1 2 3 4 5

Time (ms)

40

35

30

25

20

15

10

5

0

minus5

Ope

n A

MPA

rece

ptor

s

(c) 2-term exp function (2 = 0997)

0 1 2 3 4 5

Time (ms)

40

35

30

25

20

15

10

5

0

minus5

Ope

n A

MPA

rece

ptor

s

(d) 9th degree polynomial (2 = 0836)

0 1 2 3 4 5

Time (ms)

40

35

30

25

20

15

10

5

0

minus5

Ope

n A

MPA

rece

ptor

s

(e) 8-term Fourier series (2 = 0976)

0 1 2 3 4 5

Time (ms)

40

35

30

25

20

15

10

5

0

minus5

Ope

n A

MPA

rece

ptor

s

(f) 8-term Fourier series (tested points only)

Figure 4 Examples of curve-fit solutions for theAMPA open receptors problem All figures show curve-fitting solutions for the same synapticcurve Continuous lines show curve-fitting solutions obtained using the indicated equation fitted by the best optimization algorithm fromour experiments Dashed lines show reference data obtained from simulation (a) also shows a detailed view of the curve peak (e) shows anextreme case of overfitting in the form of a fast-oscillating periodic function (f) shows the sampled points tested in the fitness evaluation ofthe 8-term Fourier series to better illustrate the overfitting problem

Abstract and Applied Analysis 13

Table 17 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms on the9th degree polynomial

Mean fitness Ranking nWinsNLLS unconstrained 843119864 minus 01 244 8Self-Adaptive DE 842119864 minus 01 250 5DE 842119864 minus 01 250 5NLLS constrained 802119864 minus 01 272 2IPOP-CMA-ES 826119864 minus 01 495 0GA 000119864 + 00 748 minus5Solis and Wets 000119864 + 00 748 minus5MTS-LS1 000119864 + 00 748 minus5GODE 000119864 + 00 748 minus5

Table 18 Raw 119901 values (NLLS unconstrained is the controlalgorithm) for the 9th degree polynomial

NLLS unconstrained versus Friedman 119901 value Wilcoxon 119901 valueSelf-Adaptive DE 742119864 minus 01 652119864 minus 05

DE 742119864 minus 01 652119864 minus 05

NLLS constrained 108119864 minus 01 891119864 minus 06

IPOP-CMA-ES 000119864 + 00 534119864 minus 84

GA 000119864 + 00 592119864 minus 84

Solis and Wets 000119864 + 00 592119864 minus 84

MTS-LS1 000119864 + 00 592119864 minus 84

GODE 000119864 + 00 592119864 minus 84

Table 19 Corrected 119901 values according to Holmrsquos method (NLLSunconstrained is the control algorithm) for the 9th degree polyno-mial

NLLS unconstrained versus Friedmanlowast119901 value Wilcoxonlowast119901 valueSelf-Adaptive DE 100119864 + 00 130119864 minus 04

radic

DE 100119864 + 00 130119864 minus 04radic

NLLS constrained 325119864 minus 01 267119864 minus 05radic

IPOP-CMA-ES 000119864 + 00radic

427119864 minus 83radic

GA 000119864 + 00radic

427119864 minus 83radic

Solis and Wets 000119864 + 00radic

427119864 minus 83radic

MTS-LS1 000119864 + 00radic

427119864 minus 83radic

GODE 000119864 + 00radic

427119864 minus 83radic

radicmeans that there are statistical differences with significance level 120572 = 005lowastmeans these are adjusted 119901-values according to Holmrsquos method

when calculating the fitness value are very close tothe simulation ones (Figure 4(f)) This is due to thefact that when evaluating the fitness value not all datapoints obtained during simulation are usedThe high-resolution simulations on which this work is basedproduce 10000-point curves per simulation Testingall of these points would make fitness calculation anextremely heavy computational task To avoid thissimulation data were sampled before curve-fittingreducing the number of points per curve to 100 Anidentical sampling procedure was performed in [11]

Table 20 Mean fitness average ranking and number of wins(nWins) in pair-wise comparisons with the other algorithms for allthe models

Mean fitness Ranking nWinsIPOP-CMA-ES 948119864 minus 01 244 8Self-Adaptive DE 821119864 minus 01 286 6DE 771119864 minus 01 362 4NLLS unconstrained 780119864 minus 01 474 2NLLS constrained 665119864 minus 01 546 0GA 516119864 minus 01 583 minus2GODE 496119864 minus 01 593 minus4MTS-LS1 453119864 minus 01 661 minus6Solis and Wets 232119864 minus 01 750 minus8

Table 21 Raw119901 values (IPOP-CMA-ES is the control algorithm) forall the models

IPOP-CMA-ES versus Friedman 119901 value Wilcoxon 119901 valueSelf-Adaptive DE 415119864 minus 08 501119864 minus 05

DE 000119864 + 00 216119864 minus 86

NLLS unconstrained 000119864 + 00 481119864 minus 306

NLLS constrained 000119864 + 00 000119864 + 00

GA 000119864 + 00 000119864 + 00

GODE 000119864 + 00 000119864 + 00

MTS-LS1 000119864 + 00 000119864 + 00

Solis and Wets 000119864 + 00 000119864 + 00

Table 22 Corrected 119901 values according to Holmrsquos method (IPOP-CMA-ES is the control algorithm) for all the models

IPOP-CMA-ES versus Friedmanlowast119901 value Wilcoxonlowast119901 valueSelf-Adaptive DE 415119864 minus 08

radic501119864 minus 05

radic

DE 000119864 + 00radic

431119864 minus 86radic

NLLS unconstrained 000119864 + 00radic

144119864 minus 305radic

NLLS constrained 000119864 + 00radic

000119864 + 00radic

GA 000119864 + 00radic

000119864 + 00radic

GODE 000119864 + 00radic

000119864 + 00radic

MTS-LS1 000119864 + 00radic

000119864 + 00radic

Solis and Wets 000119864 + 00radic

000119864 + 00radic

radicmeans that there are statistical differences with significance level 120572 = 005lowastmeans these are adjusted 119901-values according to Holmrsquos method

for the same reasons In the case of the 8-termFourier series the optimization algorithms were ableto find rapidly oscillating curves that passed veryclose to most test points The shapes of the resultingcurves however were completely different from realphenomena observed (Figure 4(e)) This indicatesthat the 8-term Fourier series is not an appropriatemodel for the AMPA open receptors curve as it caneasily produce this kind of false-positive overfittedsolutions

Taking all this into consideration we can try to selectthe appropriate equation for theAMPA open receptors curveBetween the best three the biological relationship with

14 Abstract and Applied Analysis

Table 23 Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model

Mean fitness Ranking nWins8-term Gauss series 999119864 minus 01 163 44-by-4 degree polynomial rational function 999119864 minus 01 167 22-term exponential function 997119864 minus 01 273 08-term Fourier series 944119864 minus 01 399 minus29th degree polynomial 843119864 minus 01 497 minus4

the neurotransmitter concentration (following an exponen-tial decay) makes us favor the 8-term Gauss and 2-termexponential functions over the 4-by-4 polynomial rationalThese two present almost identical numerical results with aslightly better performance in the case of the 8-term Gaussseries The small perturbation observed in 8-term Gaussseries curves however indicates this equation models thenatural phenomenon in a less realistic way The simplicityof the 2-term exponential function has to be consideredas well It is a model with 4 parameters in contrast withthe 25 parameters of the 8-term Gauss series For all thisreason we suggest the 2-term exponential function as the bestapproximation

In the example shown in Figure 4 the Self-Adaptive DEmetaheuristic calculated a curve-fitting solution for the 2-term exponential function with 119886 = 51749 119887 = minus2716119888 = minus53540 and 119889 = minus30885 resulting in the followingequation (also plotted in Figure 4(c))

119860119872119875119860open (119905) = 51749 times 119890minus2716times119905

minus 53540

times 119890minus30885times119905

(20)

As in the case of the glutamate concentration curves itis important to remember that this equation is the solutionfor one particular synaptic configuration Each one of thetotal 500 configurations produced a different AMPA openreceptors curve from which a different curve-fitting solution(different values of 119886 119887 119888 and 119889) was obtained

4 Conclusions

In this paper we have compared several metaheuristics inthe problem of fitting curves to match the experimentaldata coming from a synapse simulation process Each oneof the 500 synaptic configurations used represents a differentsynapse and produces two separate problem curves that needto be treated independently (glutamate concentration andAMPA open receptors) Several different functions have beenused and the selected metaheuristics have tried to find thebest fitting to each of them We have treated all curves fromthe same problem in our simulation as a single data set tryingto find the best combination of function andmetaheuristic forthe entire problem Furthermore in order to show that moreadvanced metaheuristics can improve currently used state-of-the-art techniques such as theNLLS algorithmwhichwasused in [11] we have conducted an extensive experimentationthat yields significant results First it has been proved that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm especially in the case of the IPOP-CMA-ES which obtained the best overall performanceSecond it has been shown that some commonly used localsearches that obtain excellent results in other problems arenot suitable for this task which implies that the curve fittingalgorithmmust be carefully chosen and thus this comparisoncan be of great value for other researchers Finally thedifferent performance of the algorithms opens a new researchline thatwill focus on finding combinations of algorithms thatcould obtain even better results by exploiting the features ofseveral metaheuristics

From a biological point of view we have observed howseveral curve models are able to very accurately fit a set oftime-series curves obtained from the simulation of synapticprocesses We have studied two main aspects of synap-tic behavior (i) changes in neurotransmitter concentrationwithin the synaptic cleft after release and (ii) synaptic receptoractivation We have proposed an analytically derived curvemodel for the former and a set of possible curve equations forthe latter In the first scenario (changes in neurotransmitterconcentration) the optimization results validate our proposedcurvemodel In the second one (synaptic receptor activation)several possible curve models attained accurate results Amore detailed inspection however revealed that some ofthese models severely overfitted the experimental data sonot all models were considered acceptable The best behaviorwas observed with the exponential models which suggeststhat exponential decay processes are key elements of synapseactivationThese results have several interesting implicationsthat can have an application in future neuroscience researchFirstly as explained we have identified the best possiblecandidate equations for both synaptic curves studied Futurestudies of synaptic behavior can be modeled over theseequations further expanding the understanding of aspectssuch as synaptic geometry and neurotransmitter vesicle sizeOur work demonstrates the validity of these equationsSecondly we have identified the appropriate optimizationtechniques to be used when constructing these models Aswe have demonstrated not all curve-fitting alternatives areadequate so selecting the wrong one could have a strongimpact on the validity of any derived research Our worksuccessfully resolves this issue again setting the basis forupcoming advances in synaptic behavior analysis

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Abstract and Applied Analysis 15

Acknowledgments

This work was financed by the Spanish Ministry of Economy(NAVAN project) and supported by the Cajal Blue BrainProject The authors thankfully acknowledge the computerresources technical expertise and assistance provided bythe Supercomputing and Visualization Center of Madrid(CeSViMa) A LaTorre gratefully acknowledges the supportof the SpanishMinistry of Science and Innovation (MICINN)for its funding throughout the Juan de la Cierva Program

References

[1] K Fuxe A DahlstromM Hoistad et al ldquoFrom the Golgi-Cajalmapping to the transmitter-based characterization of the neu-ronal networks leading to two modes of brain communicationwiring and volume transmissionrdquo Brain Research Reviews vol55 no 1 pp 17ndash54 2007

[2] E Sykova and C Nicholson ldquoDiffusion in brain extracellularspacerdquo Physiological Reviews vol 88 no 4 pp 1277ndash1340 2008

[3] D A Rusakov L P Savtchenko K Zheng and J M HenleyldquoShaping the synaptic signal molecular mobility inside andoutside the cleftrdquoTrends in Neurosciences vol 34 no 7 pp 359ndash369 2011

[4] J Boucher H Kroger and A Sık ldquoRealistic modelling ofreceptor activation in hippocampal excitatory synapses anal-ysis of multivesicular release release location temperature andsynaptic cross-talkrdquo Brain Structure amp Function vol 215 no 1pp 49ndash65 2010

[5] M Renner Y Domanov F Sandrin I Izeddin P Bassereau andA Triller ldquoLateral diffusion on tubular membranes quantifica-tion of measurements biasrdquo PLoS ONE vol 6 no 9 Article IDe25731 2011

[6] J R Stiles and T M Bartol ldquoMonte carlo methods for simu-lating realistic synaptic microphysiology using MCellrdquo in Com-putational Neuroscience RealisticModeling for Experimentalistspp 87ndash127 CRC Press 2001

[7] R A Kerr T M Bartol B Kaminsky et al ldquoFast MonteCarlo simulation methods for biological reaction-diffusionsystems in solution and on surfacesrdquo SIAM Journal on ScientificComputing vol 30 no 6 pp 3126ndash3149 2008

[8] S J Plimpton and A Slepoy ldquoMicrobial cell modeling via react-ing diffusive particlesrdquo Journal of Physics Conference Series vol16 no 1 pp 305ndash309 2005

[9] S S Andrews and D Bray ldquoStochastic simulation of chemicalreactions with spatial resolution and single molecule detailrdquoPhysical Biology vol 1 no 3-4 pp 137ndash151 2004

[10] S S Andrews N J Addy R Brent and A P Arkin ldquoDetailedsimulations of cell biology with smoldyn 21rdquo PLoS Computa-tional Biology vol 6 no 3 2010

[11] J Montes E Gomez A Merchan-Perez J DeFelipe and J-M Pena ldquoA machine learning method for the prediction ofreceptor activation in the simulation of synapsesrdquo PLoS ONEvol 8 no 7 Article ID e68888 pp 1ndash14 2013

[12] A LaTorre S Muelas and J M Pena ldquoA comprehensive com-parison of large scale global optimizersrdquo Information Sciences2014

[13] C Sequeira F Sanchez-Quesada M Sancho I Hidalgo and TOrtiz ldquoA genetic algorithm approach for localization of deepsources in MEGrdquo Physica Scripta vol T118 pp 140ndash142 2005

[14] E Cuevas D Zaldivar and M Perez-Cisneros ldquoA novelmulti-threshold segmentation approach based on differentialevolution optimizationrdquo Expert Systems with Applications vol37 no 7 pp 5265ndash5271 2010

[15] A Ochoa M Tejera andM Soto ldquoA fitness function model fordetecting ellipses with estimation of distribution algorithmsrdquo inProceedings of the 6th IEEE Congress on Evolutionary Computa-tion (CEC rsquo10) pp 1ndash8 July 2010

[16] A LaTorre S Muelas J-M Pena R Santana A Merchan-Perez and J-R Rodrıguez ldquoA differential evolution algorithmfor the detection of synaptic vesiclesrdquo in Proceedings of the IEEECongress of Evolutionary Computation (CEC rsquo11) pp 1687ndash1694IEEE New Orleans La USA June 2011

[17] R Santana S Muelas A LaTorre and J M Pena ldquoA directoptimization approach to the P300 spellerrdquo in Proceedings of the13th Annual Genetic and Evolutionary Computation Conference(GECCO rsquo11) pp 1747ndash1754 Dublin Ireland July 2011

[18] S Muelas J M Pena V Robles K Muzhetskaya A LaTorreand P de Miguel ldquoOptimizing the design of composite panelsusing an improved genetic algorithmrdquo in Proceedings of theInternational Conference on Engineering Optimization (EngOptrsquo08) J Herskovits A Canelas H Cortes and M ArozteguiEds COPPEUFRJ June 2008

[19] WWang Z Xiang and X Xu ldquoSelf-adaptive differential evolu-tion and its application to job-shop schedulingrdquo in Proceedingsof the 7th International Conference on System Simulation andScientific ComputingmdashAsia Simulation Conference (ICSC rsquo08)pp 820ndash826 IEEE Beijing China October 2008

[20] S M Elsayed R A Sarker and D L Essam ldquoGA with a newmulti-parent crossover for solving IEEE-CEC2011 competitionproblemsrdquo in Proceedings of the IEEE Congress of EvolutionaryComputation (CEC rsquo11) pp 1034ndash1040 June 2011

[21] A LaTorre S Muelas and J M Pena ldquoBenchmarking a hybridDERHC algorithm on real world problemsrdquo in Proceedings ofthe IEEE Congress on Evolutionary Computation (CEC rsquo11) pp1027ndash1033 IEEE New Orleans La USA June 2011

[22] S N Parragh K F Doerner and R F Hartl ldquoVariableneighborhood search for the dial-a-ride problemrdquo Computersamp Operations Research vol 37 no 6 pp 1129ndash1138 2010

[23] S Muelas A Latorre and J-M Pena ldquoA variable neighborhoodsearch algorithm for the optimization of a dial-a-ride problemin a large cityrdquo Expert Systems with Applications vol 40 no 14pp 5516ndash5531 2013

[24] J C Dias P Machado D C Silva and P H Abreu ldquoAn invertedant colony optimization approach to trafficrdquo Engineering Appli-cations of Artificial Intelligence vol 36 pp 122ndash133 2014

[25] S Zhang C Lee H Chan K Choy and Z Wu ldquoSwarmintelligence applied in green logistics a literature reviewrdquoEngineeringApplications of Artificial Intelligence vol 37 pp 154ndash169 2015

[26] A Galvez A Iglesias and J Puig-Pey ldquoIterative two-stepgenetic-algorithm-based method for efficient polynomial B-spline surface reconstructionrdquo Information Sciences vol 182 pp56ndash76 2012

[27] J Yang F Liu X Tao and X Wang ldquoThe application of evolu-tionary algorithm in B-spline curve fittingrdquo in Proceedings of the9th International Conference on Fuzzy Systems and KnowledgeDiscovery (FSKD rsquo12) pp 2809ndash2812 Sichuan ChinaMay 2012

[28] A Galvez and A Iglesias ldquoA new iterative mutually coupledhybrid GA-PSO approach for curve fitting in manufacturingrdquoApplied Soft Computing Journal vol 13 no 3 pp 1491ndash15042013

16 Abstract and Applied Analysis

[29] L Zhao J Jiang C Song L Bao and J Gao ldquoParameter opti-mization for Bezier curve fitting based on genetic algorithmrdquoin Advances in Swarm Intelligence vol 7928 of Lecture Notes inComputer Science pp 451ndash458 Springer Berlin Germany 2013


[Figure 4: six panels, each plotting the number of open AMPA receptors (y-axis, -5 to 40) against time (x-axis, 0 to 5 ms): (a) 8-term Gauss series (R² = 0.999), with an inset detail of the curve peak; (b) 4-by-4 degree polynomial rational (R² = 0.999); (c) 2-term exponential function (R² = 0.997); (d) 9th degree polynomial (R² = 0.836); (e) 8-term Fourier series (R² = 0.976); (f) 8-term Fourier series (tested points only).]

Figure 4: Examples of curve-fit solutions for the AMPA open receptors problem. All figures show curve-fitting solutions for the same synaptic curve. Continuous lines show curve-fitting solutions obtained using the indicated equation, fitted by the best optimization algorithm from our experiments. Dashed lines show reference data obtained from simulation. (a) also shows a detailed view of the curve peak. (e) shows an extreme case of overfitting in the form of a fast-oscillating periodic function. (f) shows the sampled points tested in the fitness evaluation of the 8-term Fourier series, to better illustrate the overfitting problem.
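
To make the comparison in Figure 4 concrete, the sketch below (Python/NumPy) shows one plausible way to encode the candidate model families as parametric functions of time. The exact parameterizations are not restated in this section, so the forms below (for instance, Gauss and Fourier series written in the usual MATLAB Curve Fitting Toolbox style) are illustrative assumptions rather than the definitions used in the experiments.

```python
# Hypothetical encodings of the candidate AMPA curve models; parameter layouts are assumptions.
import numpy as np

def exp2(t, a, b, c, d):
    # 2-term exponential: a*exp(b*t) + c*exp(d*t)  (the form of equation (20) below)
    return a * np.exp(b * t) + c * np.exp(d * t)

def gauss_series(t, coeffs):
    # n-term Gauss series: sum_i a_i*exp(-((t - b_i)/c_i)^2); coeffs = [a1, b1, c1, a2, b2, c2, ...]
    t = np.atleast_1d(np.asarray(t, dtype=float))
    a, b, c = np.reshape(coeffs, (-1, 3)).T
    return np.sum(a[:, None] * np.exp(-((t[None, :] - b[:, None]) / c[:, None]) ** 2), axis=0)

def fourier_series(t, coeffs, w):
    # n-term Fourier series: a0 + sum_i (a_i*cos(i*w*t) + b_i*sin(i*w*t)); coeffs = [a0, a1, b1, a2, b2, ...]
    t = np.atleast_1d(np.asarray(t, dtype=float))
    a0, rest = coeffs[0], np.reshape(coeffs[1:], (-1, 2))
    i = np.arange(1, rest.shape[0] + 1)
    return a0 + np.sum(rest[:, 0, None] * np.cos(i[:, None] * w * t[None, :])
                       + rest[:, 1, None] * np.sin(i[:, None] * w * t[None, :]), axis=0)

def rational_4_4(t, p, q):
    # 4-by-4 degree rational polynomial P(t)/Q(t), with a monic denominator to fix the scale:
    # p holds the 5 numerator coefficients, q the 4 non-leading denominator coefficients.
    return np.polyval(p, t) / np.polyval(np.concatenate(([1.0], q)), t)
```

The 9th degree polynomial needs no helper, since np.polyval already evaluates it.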


Table 17: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms on the 9th degree polynomial.

Algorithm             Mean fitness   Ranking   nWins
NLLS unconstrained    8.43E-01       2.44       8
Self-Adaptive DE      8.42E-01       2.50       5
DE                    8.42E-01       2.50       5
NLLS constrained      8.02E-01       2.72       2
IPOP-CMA-ES           8.26E-01       4.95       0
GA                    0.00E+00       7.48      -5
Solis and Wets        0.00E+00       7.48      -5
MTS-LS1               0.00E+00       7.48      -5
GODE                  0.00E+00       7.48      -5

Table 18: Raw p values (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE            7.42E-01           6.52E-05
DE                          7.42E-01           6.52E-05
NLLS constrained            1.08E-01           8.91E-06
IPOP-CMA-ES                 0.00E+00           5.34E-84
GA                          0.00E+00           5.92E-84
Solis and Wets              0.00E+00           5.92E-84
MTS-LS1                     0.00E+00           5.92E-84
GODE                        0.00E+00           5.92E-84

Table 19: Corrected p values according to Holm's method (NLLS unconstrained is the control algorithm) for the 9th degree polynomial.

NLLS unconstrained versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE            1.00E+00            1.30E-04 (√)
DE                          1.00E+00            1.30E-04 (√)
NLLS constrained            3.25E-01            2.67E-05 (√)
IPOP-CMA-ES                 0.00E+00 (√)        4.27E-83 (√)
GA                          0.00E+00 (√)        4.27E-83 (√)
Solis and Wets              0.00E+00 (√)        4.27E-83 (√)
MTS-LS1                     0.00E+00 (√)        4.27E-83 (√)
GODE                        0.00E+00 (√)        4.27E-83 (√)

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.
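
Tables 18 and 19 (and Tables 21 and 22 below, for the comparison over all models) follow a standard nonparametric methodology (cf. [54, 55]): ranking-based comparison of the algorithms over the curves, pairwise Wilcoxon signed-rank tests against the control algorithm, and Holm's step-down correction of the resulting p values. The sketch below is a minimal reconstruction of that pipeline under assumptions: the fitness matrix, algorithm names, and the use of SciPy are illustrative, and only the omnibus Friedman test is shown, whereas the per-algorithm Friedman p values in the tables come from the associated post-hoc procedure.

```python
# Minimal sketch of the nonparametric comparison behind Tables 18-19 and 21-22 (illustrative only).
# 'fitness' is a hypothetical (n_curves x n_algorithms) matrix of fitness values.
import numpy as np
from scipy import stats

def holm(p_values):
    """Holm's step-down adjustment of a family of raw p values."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

def compare_to_control(fitness, names, control="NLLS unconstrained", alpha=0.05):
    ctrl = names.index(control)
    # Omnibus Friedman test over all algorithms, one block per curve
    _, friedman_p = stats.friedmanchisquare(*[fitness[:, j] for j in range(fitness.shape[1])])
    others = [j for j in range(fitness.shape[1]) if j != ctrl]
    raw = [stats.wilcoxon(fitness[:, ctrl], fitness[:, j])[1] for j in others]  # paired, per curve
    corrected = holm(raw)
    for j, p_raw, p_adj in zip(others, raw, corrected):
        flag = "√" if p_adj < alpha else ""
        print(f"{control} vs {names[j]}: raw={p_raw:.2e}, Holm={p_adj:.2e} {flag}")
    return friedman_p
```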

when calculating the fitness value are very close to the simulation ones (Figure 4(f)). This is due to the fact that, when evaluating the fitness value, not all data points obtained during simulation are used. The high-resolution simulations on which this work is based produce 10,000-point curves per simulation. Testing all of these points would make fitness calculation an extremely heavy computational task. To avoid this, simulation data were sampled before curve fitting, reducing the number of points per curve to 100. An identical sampling procedure was performed in [11]

Table 20: Mean fitness, average ranking, and number of wins (nWins) in pair-wise comparisons with the other algorithms for all the models.

Algorithm            Mean fitness   Ranking   nWins
IPOP-CMA-ES          9.48E-01       2.44       8
Self-Adaptive DE     8.21E-01       2.86       6
DE                   7.71E-01       3.62       4
NLLS unconstrained   7.80E-01       4.74       2
NLLS constrained     6.65E-01       5.46       0
GA                   5.16E-01       5.83      -2
GODE                 4.96E-01       5.93      -4
MTS-LS1              4.53E-01       6.61      -6
Solis and Wets       2.32E-01       7.50      -8

Table 21: Raw p values (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus   Friedman p value   Wilcoxon p value
Self-Adaptive DE     4.15E-08           5.01E-05
DE                   0.00E+00           2.16E-86
NLLS unconstrained   0.00E+00           4.81E-306
NLLS constrained     0.00E+00           0.00E+00
GA                   0.00E+00           0.00E+00
GODE                 0.00E+00           0.00E+00
MTS-LS1              0.00E+00           0.00E+00
Solis and Wets       0.00E+00           0.00E+00

Table 22: Corrected p values according to Holm's method (IPOP-CMA-ES is the control algorithm) for all the models.

IPOP-CMA-ES versus   Friedman* p value   Wilcoxon* p value
Self-Adaptive DE     4.15E-08 (√)        5.01E-05 (√)
DE                   0.00E+00 (√)        4.31E-86 (√)
NLLS unconstrained   0.00E+00 (√)        1.44E-305 (√)
NLLS constrained     0.00E+00 (√)        0.00E+00 (√)
GA                   0.00E+00 (√)        0.00E+00 (√)
GODE                 0.00E+00 (√)        0.00E+00 (√)
MTS-LS1              0.00E+00 (√)        0.00E+00 (√)
Solis and Wets       0.00E+00 (√)        0.00E+00 (√)

√ means that there are statistical differences with significance level α = 0.05. * means these are adjusted p values according to Holm's method.

for the same reasons. In the case of the 8-term Fourier series, the optimization algorithms were able to find rapidly oscillating curves that passed very close to most test points. The shapes of the resulting curves, however, were completely different from the real phenomena observed (Figure 4(e)). This indicates that the 8-term Fourier series is not an appropriate model for the AMPA open receptors curve, as it can easily produce this kind of false-positive, overfitted solutions.
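
The sampling step described above is easy to reproduce. The sketch below uniformly subsamples a 10,000-point simulated curve down to 100 points and evaluates a candidate model only at those points; the R²-style score is an assumption used for illustration, since the exact fitness formula is not restated in this excerpt. Evaluating only 100 sparse points is also what lets a fast-oscillating Fourier model pass close to the test points while missing the true curve shape (Figures 4(e) and 4(f)).

```python
# Sketch of the pre-fitting subsampling (10,000-point simulated curve -> 100 test points)
# and of a generic goodness-of-fit score computed only on the sampled points.
import numpy as np

def subsample(t, y, n_points=100):
    """Keep n_points (approximately) uniformly spaced samples of a high-resolution curve."""
    idx = np.linspace(0, len(t) - 1, n_points).round().astype(int)
    return t[idx], y[idx]

def fitness(model, params, t_test, y_test):
    """Illustrative R^2-style fitness on the sampled points (not the paper's exact formula)."""
    residuals = y_test - model(t_test, *params)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y_test - y_test.mean()) ** 2)

# Synthetic example: a two-term exponential curve on [0, 5] ms, simulated at 10,000 points
t_full = np.linspace(0.0, 5.0, 10_000)
y_full = 51.749 * np.exp(-2.716 * t_full) - 53.540 * np.exp(-30.885 * t_full)
t_test, y_test = subsample(t_full, y_full, 100)
```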

Taking all this into consideration, we can try to select the appropriate equation for the AMPA open receptors curve. Between the best three, the biological relationship with


Table 23: Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model.

Model                                        Mean fitness   Ranking   nWins
8-term Gauss series                          9.99E-01       1.63       4
4-by-4 degree polynomial rational function   9.99E-01       1.67       2
2-term exponential function                  9.97E-01       2.73       0
8-term Fourier series                        9.44E-01       3.99      -2
9th degree polynomial                        8.43E-01       4.97      -4

the neurotransmitter concentration (following an exponential decay) makes us favor the 8-term Gauss and 2-term exponential functions over the 4-by-4 polynomial rational. These two present almost identical numerical results, with a slightly better performance in the case of the 8-term Gauss series. The small perturbation observed in 8-term Gauss series curves, however, indicates that this equation models the natural phenomenon in a less realistic way. The simplicity of the 2-term exponential function has to be considered as well: it is a model with 4 parameters, in contrast with the 25 parameters of the 8-term Gauss series. For all these reasons, we suggest the 2-term exponential function as the best approximation.

In the example shown in Figure 4, the Self-Adaptive DE metaheuristic calculated a curve-fitting solution for the 2-term exponential function with a = 51.749, b = -2.716, c = -53.540, and d = -30.885, resulting in the following equation (also plotted in Figure 4(c)):

AMPA_open(t) = 51.749 · e^(-2.716·t) - 53.540 · e^(-30.885·t).    (20)

As in the case of the glutamate concentration curves, it is important to remember that this equation is the solution for one particular synaptic configuration. Each one of the total 500 configurations produced a different AMPA open receptors curve, from which a different curve-fitting solution (different values of a, b, c, and d) was obtained.
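
As a concrete illustration of this per-configuration fitting step, the sketch below fits the 2-term exponential model to one synthetic sampled curve. SciPy's curve_fit is used purely as a convenient stand-in for the NLLS baseline [36, 37]; in the study itself the coefficients of (20) were obtained with the Self-Adaptive DE metaheuristic, and the synthetic data, noise level, and starting point here are assumptions. Swapping in a metaheuristic amounts to replacing the curve_fit call with a population-based search over (a, b, c, d).

```python
# Fitting the 2-term exponential AMPA model to one synthetic sampled curve.
# curve_fit is a stand-in for the NLLS baseline; the study's best results used metaheuristics.
import numpy as np
from scipy.optimize import curve_fit

def ampa_open(t, a, b, c, d):
    # AMPA_open(t) = a*exp(b*t) + c*exp(d*t), the model of equation (20)
    return a * np.exp(b * t) + c * np.exp(d * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 100)                          # 100 sampled time points (ms)
clean = ampa_open(t, 51.749, -2.716, -53.540, -30.885)  # coefficients as in equation (20)
y = clean + rng.normal(scale=0.3, size=t.size)          # synthetic noise stands in for simulation output

p0 = (40.0, -1.0, -40.0, -10.0)                         # rough initial guess (assumption)
params, _ = curve_fit(ampa_open, t, y, p0=p0, maxfev=20000)
print("fitted (a, b, c, d):", np.round(params, 3))
```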

4. Conclusions

In this paper, we have compared several metaheuristics on the problem of fitting curves to match the experimental data coming from a synapse simulation process. Each one of the 500 synaptic configurations used represents a different synapse and produces two separate problem curves that need to be treated independently (glutamate concentration and AMPA open receptors). Several different functions have been used, and the selected metaheuristics have tried to find the best fit to each of them. We have treated all curves from the same problem in our simulation as a single data set, trying to find the best combination of function and metaheuristic for the entire problem. Furthermore, in order to show that more advanced metaheuristics can improve currently used state-of-the-art techniques, such as the NLLS algorithm used in [11], we have conducted extensive experimentation that yields significant results. First, it has been shown that several metaheuristics systematically outperform the state-of-the-art NLLS algorithm, especially IPOP-CMA-ES, which obtained the best overall performance. Second, it has been shown that some commonly used local searches that obtain excellent results in other problems are not suitable for this task, which implies that the curve-fitting algorithm must be carefully chosen; this comparison can therefore be of great value for other researchers. Finally, the different performance of the algorithms opens a new research line that will focus on finding combinations of algorithms that could obtain even better results by exploiting the features of several metaheuristics.

From a biological point of view, we have observed how several curve models are able to fit, very accurately, a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications that can have an application in future neuroscience research. Firstly, as explained, we have identified the best possible candidate equations for both synaptic curves studied. Future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work successfully resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding throughout the Juan de la Cierva Program.

References

[1] K Fuxe A DahlstromM Hoistad et al ldquoFrom the Golgi-Cajalmapping to the transmitter-based characterization of the neu-ronal networks leading to two modes of brain communicationwiring and volume transmissionrdquo Brain Research Reviews vol55 no 1 pp 17ndash54 2007

[2] E Sykova and C Nicholson ldquoDiffusion in brain extracellularspacerdquo Physiological Reviews vol 88 no 4 pp 1277ndash1340 2008

[3] D A Rusakov L P Savtchenko K Zheng and J M HenleyldquoShaping the synaptic signal molecular mobility inside andoutside the cleftrdquoTrends in Neurosciences vol 34 no 7 pp 359ndash369 2011

[4] J Boucher H Kroger and A Sık ldquoRealistic modelling ofreceptor activation in hippocampal excitatory synapses anal-ysis of multivesicular release release location temperature andsynaptic cross-talkrdquo Brain Structure amp Function vol 215 no 1pp 49ndash65 2010

[5] M Renner Y Domanov F Sandrin I Izeddin P Bassereau andA Triller ldquoLateral diffusion on tubular membranes quantifica-tion of measurements biasrdquo PLoS ONE vol 6 no 9 Article IDe25731 2011

[6] J R Stiles and T M Bartol ldquoMonte carlo methods for simu-lating realistic synaptic microphysiology using MCellrdquo in Com-putational Neuroscience RealisticModeling for Experimentalistspp 87ndash127 CRC Press 2001

[7] R A Kerr T M Bartol B Kaminsky et al ldquoFast MonteCarlo simulation methods for biological reaction-diffusionsystems in solution and on surfacesrdquo SIAM Journal on ScientificComputing vol 30 no 6 pp 3126ndash3149 2008

[8] S J Plimpton and A Slepoy ldquoMicrobial cell modeling via react-ing diffusive particlesrdquo Journal of Physics Conference Series vol16 no 1 pp 305ndash309 2005

[9] S S Andrews and D Bray ldquoStochastic simulation of chemicalreactions with spatial resolution and single molecule detailrdquoPhysical Biology vol 1 no 3-4 pp 137ndash151 2004

[10] S S Andrews N J Addy R Brent and A P Arkin ldquoDetailedsimulations of cell biology with smoldyn 21rdquo PLoS Computa-tional Biology vol 6 no 3 2010

[11] J Montes E Gomez A Merchan-Perez J DeFelipe and J-M Pena ldquoA machine learning method for the prediction ofreceptor activation in the simulation of synapsesrdquo PLoS ONEvol 8 no 7 Article ID e68888 pp 1ndash14 2013

[12] A LaTorre S Muelas and J M Pena ldquoA comprehensive com-parison of large scale global optimizersrdquo Information Sciences2014

[13] C Sequeira F Sanchez-Quesada M Sancho I Hidalgo and TOrtiz ldquoA genetic algorithm approach for localization of deepsources in MEGrdquo Physica Scripta vol T118 pp 140ndash142 2005

[14] E Cuevas D Zaldivar and M Perez-Cisneros ldquoA novelmulti-threshold segmentation approach based on differentialevolution optimizationrdquo Expert Systems with Applications vol37 no 7 pp 5265ndash5271 2010

[15] A Ochoa M Tejera andM Soto ldquoA fitness function model fordetecting ellipses with estimation of distribution algorithmsrdquo inProceedings of the 6th IEEE Congress on Evolutionary Computa-tion (CEC rsquo10) pp 1ndash8 July 2010

[16] A LaTorre S Muelas J-M Pena R Santana A Merchan-Perez and J-R Rodrıguez ldquoA differential evolution algorithmfor the detection of synaptic vesiclesrdquo in Proceedings of the IEEECongress of Evolutionary Computation (CEC rsquo11) pp 1687ndash1694IEEE New Orleans La USA June 2011

[17] R Santana S Muelas A LaTorre and J M Pena ldquoA directoptimization approach to the P300 spellerrdquo in Proceedings of the13th Annual Genetic and Evolutionary Computation Conference(GECCO rsquo11) pp 1747ndash1754 Dublin Ireland July 2011

[18] S Muelas J M Pena V Robles K Muzhetskaya A LaTorreand P de Miguel ldquoOptimizing the design of composite panelsusing an improved genetic algorithmrdquo in Proceedings of theInternational Conference on Engineering Optimization (EngOptrsquo08) J Herskovits A Canelas H Cortes and M ArozteguiEds COPPEUFRJ June 2008

[19] WWang Z Xiang and X Xu ldquoSelf-adaptive differential evolu-tion and its application to job-shop schedulingrdquo in Proceedingsof the 7th International Conference on System Simulation andScientific ComputingmdashAsia Simulation Conference (ICSC rsquo08)pp 820ndash826 IEEE Beijing China October 2008

[20] S M Elsayed R A Sarker and D L Essam ldquoGA with a newmulti-parent crossover for solving IEEE-CEC2011 competitionproblemsrdquo in Proceedings of the IEEE Congress of EvolutionaryComputation (CEC rsquo11) pp 1034ndash1040 June 2011

[21] A LaTorre S Muelas and J M Pena ldquoBenchmarking a hybridDERHC algorithm on real world problemsrdquo in Proceedings ofthe IEEE Congress on Evolutionary Computation (CEC rsquo11) pp1027ndash1033 IEEE New Orleans La USA June 2011

[22] S N Parragh K F Doerner and R F Hartl ldquoVariableneighborhood search for the dial-a-ride problemrdquo Computersamp Operations Research vol 37 no 6 pp 1129ndash1138 2010

[23] S Muelas A Latorre and J-M Pena ldquoA variable neighborhoodsearch algorithm for the optimization of a dial-a-ride problemin a large cityrdquo Expert Systems with Applications vol 40 no 14pp 5516ndash5531 2013

[24] J C Dias P Machado D C Silva and P H Abreu ldquoAn invertedant colony optimization approach to trafficrdquo Engineering Appli-cations of Artificial Intelligence vol 36 pp 122ndash133 2014

[25] S Zhang C Lee H Chan K Choy and Z Wu ldquoSwarmintelligence applied in green logistics a literature reviewrdquoEngineeringApplications of Artificial Intelligence vol 37 pp 154ndash169 2015

[26] A Galvez A Iglesias and J Puig-Pey ldquoIterative two-stepgenetic-algorithm-based method for efficient polynomial B-spline surface reconstructionrdquo Information Sciences vol 182 pp56ndash76 2012

[27] J Yang F Liu X Tao and X Wang ldquoThe application of evolu-tionary algorithm in B-spline curve fittingrdquo in Proceedings of the9th International Conference on Fuzzy Systems and KnowledgeDiscovery (FSKD rsquo12) pp 2809ndash2812 Sichuan ChinaMay 2012

[28] A Galvez and A Iglesias ldquoA new iterative mutually coupledhybrid GA-PSO approach for curve fitting in manufacturingrdquoApplied Soft Computing Journal vol 13 no 3 pp 1491ndash15042013

16 Abstract and Applied Analysis

[29] L Zhao J Jiang C Song L Bao and J Gao ldquoParameter opti-mization for Bezier curve fitting based on genetic algorithmrdquoin Advances in Swarm Intelligence vol 7928 of Lecture Notes inComputer Science pp 451ndash458 Springer Berlin Germany 2013

[30] Centro de Supercomputacion y Visualizacion de Madrid(CeSViMa) 2014 httpwwwcesvimaupmes

[31] P Jonas G Major and B Sakmann ldquoQuantal components ofunitary EPSCs at the mossy fibre synapse on CA3 pyramidalcells of rat hippocampusrdquo The Journal of Physiology vol 472no 1 pp 615ndash663 1993

[32] KM Franks TM Bartol Jr andT J Sejnowski ldquoAmonte carlomodel reveals independent signaling at central glutamatergicsynapsesrdquo Biophysical Journal vol 83 no 5 pp 2333ndash23482002

[33] D A Rusakov and D M Kullmann ldquoExtrasynaptic gluta-mate diffusion in the hippocampus ultrastructural constraintsuptake and receptor activationrdquo The Journal of Neurosciencevol 18 no 9 pp 3158ndash3170 1998

[34] K Zheng A Scimemi and D A Rusakov ldquoReceptor actions ofsynaptically released glutamate the role of transporters on thescale from nanometers to micronsrdquo Biophysical Journal vol 95no 10 pp 4584ndash4596 2008

[35] A Momiyama R A Silver M Hausser et al ldquoThe densityof AMPA receptors activated by a transmitter quantum at theclimbing fibremdashPurkinje cell synapse in immature ratsrdquo TheJournal of Physiology vol 549 no 1 pp 75ndash92 2003

[36] D M Bates and D G Watts Nonlinear Regression Analysisand Its Applications Wiley Series in Probability and Math-ematical Statistics Applied Probability and Statistics Wiley-Interscience New York NY USA 1988

[37] D M Bates and J M Chambers ldquoNonlinear modelsrdquo inStatistical Models S J M Chambers and T J Hastie EdsWadsworth amp Brookscole 1992

[38] MathWorks MATLAB and Statistics Toolbox Release 2013aMathWorks 2014 httpwwwmathworkscomsupportsysreqsv-r2013a

[39] R Core Team R A Language and Environment for StatisticalComputing R Foundation for Statistical Computing 2011

[40] J H Holland Adaptation in Natural and Artificial Systems TheUniversity of Michigan Press Ann Arbor Mich USA 1975

[41] F Herrera and M Lozano ldquoGradual distributed real-codedgenetic algorithmsrdquo IEEE Transactions on Evolutionary Compu-tation vol 4 no 1 pp 43ndash63 2000

[42] R Storn and K V Price ldquoDifferential evolutionmdasha simpleand efficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997

[43] J Brest S Greiner B Boskovic M Mernik and V ZumerldquoSelf-adapting control parameters in differential evolution acomparative study on numerical benchmark problemsrdquo IEEETransactions on Evolutionary Computation vol 10 no 6 pp646ndash657 2006

[44] HWang ZWu S Rahnamayan and L Kang ldquoA scalability testfor accelerated de using generalized opposition-based learningrdquoin Proceedings of the 9th International Conference on IntelligentSystemsDesign andApplications (ISDA rsquo09) pp 1090ndash1095 PisaItaly December 2009

[45] F J Solis and R J B Wets ldquoMinimization by random searchtechniquesrdquo Mathematics of Operations Research vol 6 no 1pp 19ndash30 1981

[46] L-Y Tseng and C Chen ldquoMultiple trajectory search for largescale global optimizationrdquo in Proceedings of the IEEE Congresson Evolutionary Computation (CEC rsquo08) pp 3052ndash3059 IEEEHong Kong June 2008

[47] N Hansen and A Ostermeier ldquoAdapting arbitrary normalmutation distributions in evolution strategies the covariancematrix adaptationrdquo in Proceedings of the IEEE InternationalConference on Evolutionary Computation (ICEC rsquo96) pp 312ndash317 May 1996

[48] N Hansen and A Ostermeier ldquoCompletely derandomized self-adaptation in evolution strategiesrdquo Evolutionary Computationvol 9 no 2 pp 159ndash195 2001

[49] A Auger and N Hansen ldquoA restart CMA evolution strategywith increasing population sizerdquo in Proceedings of the IEEECongress on Evolutionary Computation (CEC rsquo05) pp 1769ndash1776 IEEE Press New York NY USA September 2005

[50] H Theil Economic Forecasts and Policy Contributions toEconomicAnalysis Series North-Holland NewYork NYUSA1961 httpbooksgoogleesbooksid=FPVdAAAAIAAJ

[51] G Taguchi S Chowdhury and Y Wu Taguchirsquos QualityEngineering Handbook John Wiley amp Sons 2005

[52] J H Zar Biostatistical Analysis Pearson New InternationalEdition Prentice-Hall 2013

[53] WWDanielAppliedNonparametric Statistics DuxburyThom-son Learning 1990

[54] S Garcıa D Molina M Lozano and F Herrera ldquoA study onthe use of non-parametric tests for analyzing the evolutionaryalgorithmsrsquo behaviour a case study on the CECrsquo2005 SpecialSession on Real Parameter Optimizationrdquo Journal of Heuristicsvol 15 no 6 pp 617ndash644 2009

[55] F Wilcoxon ldquoIndividual comparisons by ranking methodsrdquoBiometrics Bulletin vol 1 no 6 pp 80ndash83 1945

[56] A LaTorre S Muelas and J-M Pena ldquoEvaluating the multipleoffspring sampling framework on complex continuous opti-mization functionsrdquoMemetic Computing vol 5 no 4 pp 295ndash309 2013

[57] S Muelas J M Pena V Robles A LaTorre and P de MiguelldquoMachine learningmethods to analyze migration parameters inparallel genetic algorithmsrdquo in Proceedings of the InternationalWorkshop on Hybrid Artificial Intelligence Systems 2007 ECorchado J M Corchado and A Abraham Eds pp 199ndash206Springer Salamanca Spain 2007




14 Abstract and Applied Analysis

Table 23 Average ranking and number of wins (nWins) in pair-wise comparisons for the best algorithm for each AMPA model

Mean fitness Ranking nWins8-term Gauss series 999119864 minus 01 163 44-by-4 degree polynomial rational function 999119864 minus 01 167 22-term exponential function 997119864 minus 01 273 08-term Fourier series 944119864 minus 01 399 minus29th degree polynomial 843119864 minus 01 497 minus4

the neurotransmitter concentration (following an exponen-tial decay) makes us favor the 8-term Gauss and 2-termexponential functions over the 4-by-4 polynomial rationalThese two present almost identical numerical results with aslightly better performance in the case of the 8-term Gaussseries The small perturbation observed in 8-term Gaussseries curves however indicates this equation models thenatural phenomenon in a less realistic way The simplicityof the 2-term exponential function has to be consideredas well It is a model with 4 parameters in contrast withthe 25 parameters of the 8-term Gauss series For all thisreason we suggest the 2-term exponential function as the bestapproximation

In the example shown in Figure 4 the Self-Adaptive DEmetaheuristic calculated a curve-fitting solution for the 2-term exponential function with 119886 = 51749 119887 = minus2716119888 = minus53540 and 119889 = minus30885 resulting in the followingequation (also plotted in Figure 4(c))

119860119872119875119860open (119905) = 51749 times 119890minus2716times119905

minus 53540

times 119890minus30885times119905

(20)

As in the case of the glutamate concentration curves itis important to remember that this equation is the solutionfor one particular synaptic configuration Each one of thetotal 500 configurations produced a different AMPA openreceptors curve from which a different curve-fitting solution(different values of 119886 119887 119888 and 119889) was obtained

4 Conclusions

In this paper we have compared several metaheuristics inthe problem of fitting curves to match the experimentaldata coming from a synapse simulation process Each oneof the 500 synaptic configurations used represents a differentsynapse and produces two separate problem curves that needto be treated independently (glutamate concentration andAMPA open receptors) Several different functions have beenused and the selected metaheuristics have tried to find thebest fitting to each of them We have treated all curves fromthe same problem in our simulation as a single data set tryingto find the best combination of function andmetaheuristic forthe entire problem Furthermore in order to show that moreadvanced metaheuristics can improve currently used state-of-the-art techniques such as theNLLS algorithmwhichwasused in [11] we have conducted an extensive experimentationthat yields significant results First it has been proved that

several metaheuristics systematically outperform the state-of-the-art NLLS algorithm especially in the case of the IPOP-CMA-ES which obtained the best overall performanceSecond it has been shown that some commonly used localsearches that obtain excellent results in other problems arenot suitable for this task which implies that the curve fittingalgorithmmust be carefully chosen and thus this comparisoncan be of great value for other researchers Finally thedifferent performance of the algorithms opens a new researchline thatwill focus on finding combinations of algorithms thatcould obtain even better results by exploiting the features ofseveral metaheuristics

From a biological point of view, we have observed how several curve models are able to fit very accurately a set of time-series curves obtained from the simulation of synaptic processes. We have studied two main aspects of synaptic behavior: (i) changes in neurotransmitter concentration within the synaptic cleft after release and (ii) synaptic receptor activation. We have proposed an analytically derived curve model for the former and a set of possible curve equations for the latter. In the first scenario (changes in neurotransmitter concentration), the optimization results validate our proposed curve model. In the second one (synaptic receptor activation), several possible curve models attained accurate results. A more detailed inspection, however, revealed that some of these models severely overfitted the experimental data, so not all models were considered acceptable. The best behavior was observed with the exponential models, which suggests that exponential decay processes are key elements of synapse activation. These results have several interesting implications for future neuroscience research. Firstly, as explained, we have identified the best candidate equations for both synaptic curves studied; future studies of synaptic behavior can be modeled over these equations, further expanding the understanding of aspects such as synaptic geometry and neurotransmitter vesicle size. Our work demonstrates the validity of these equations. Secondly, we have identified the appropriate optimization techniques to be used when constructing these models. As we have demonstrated, not all curve-fitting alternatives are adequate, so selecting the wrong one could have a strong impact on the validity of any derived research. Our work resolves this issue, again setting the basis for upcoming advances in synaptic behavior analysis.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was financed by the Spanish Ministry of Economy (NAVAN project) and supported by the Cajal Blue Brain Project. The authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the Supercomputing and Visualization Center of Madrid (CeSViMa). A. LaTorre gratefully acknowledges the support of the Spanish Ministry of Science and Innovation (MICINN) for its funding through the Juan de la Cierva Program.

References

[1] K. Fuxe, A. Dahlstrom, M. Hoistad et al., "From the Golgi-Cajal mapping to the transmitter-based characterization of the neuronal networks leading to two modes of brain communication: wiring and volume transmission," Brain Research Reviews, vol. 55, no. 1, pp. 17–54, 2007.

[2] E. Sykova and C. Nicholson, "Diffusion in brain extracellular space," Physiological Reviews, vol. 88, no. 4, pp. 1277–1340, 2008.

[3] D. A. Rusakov, L. P. Savtchenko, K. Zheng, and J. M. Henley, "Shaping the synaptic signal: molecular mobility inside and outside the cleft," Trends in Neurosciences, vol. 34, no. 7, pp. 359–369, 2011.

[4] J. Boucher, H. Kroger, and A. Sik, "Realistic modelling of receptor activation in hippocampal excitatory synapses: analysis of multivesicular release, release location, temperature and synaptic cross-talk," Brain Structure & Function, vol. 215, no. 1, pp. 49–65, 2010.

[5] M. Renner, Y. Domanov, F. Sandrin, I. Izeddin, P. Bassereau, and A. Triller, "Lateral diffusion on tubular membranes: quantification of measurements bias," PLoS ONE, vol. 6, no. 9, Article ID e25731, 2011.

[6] J. R. Stiles and T. M. Bartol, "Monte Carlo methods for simulating realistic synaptic microphysiology using MCell," in Computational Neuroscience: Realistic Modeling for Experimentalists, pp. 87–127, CRC Press, 2001.

[7] R. A. Kerr, T. M. Bartol, B. Kaminsky et al., "Fast Monte Carlo simulation methods for biological reaction-diffusion systems in solution and on surfaces," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 3126–3149, 2008.

[8] S. J. Plimpton and A. Slepoy, "Microbial cell modeling via reacting diffusive particles," Journal of Physics: Conference Series, vol. 16, no. 1, pp. 305–309, 2005.

[9] S. S. Andrews and D. Bray, "Stochastic simulation of chemical reactions with spatial resolution and single molecule detail," Physical Biology, vol. 1, no. 3-4, pp. 137–151, 2004.

[10] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, "Detailed simulations of cell biology with Smoldyn 2.1," PLoS Computational Biology, vol. 6, no. 3, 2010.

[11] J. Montes, E. Gomez, A. Merchan-Perez, J. DeFelipe, and J.-M. Pena, "A machine learning method for the prediction of receptor activation in the simulation of synapses," PLoS ONE, vol. 8, no. 7, Article ID e68888, pp. 1–14, 2013.

[12] A. LaTorre, S. Muelas, and J. M. Pena, "A comprehensive comparison of large scale global optimizers," Information Sciences, 2014.

[13] C. Sequeira, F. Sanchez-Quesada, M. Sancho, I. Hidalgo, and T. Ortiz, "A genetic algorithm approach for localization of deep sources in MEG," Physica Scripta, vol. T118, pp. 140–142, 2005.

[14] E. Cuevas, D. Zaldivar, and M. Perez-Cisneros, "A novel multi-threshold segmentation approach based on differential evolution optimization," Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.

[15] A. Ochoa, M. Tejera, and M. Soto, "A fitness function model for detecting ellipses with estimation of distribution algorithms," in Proceedings of the 6th IEEE Congress on Evolutionary Computation (CEC '10), pp. 1–8, July 2010.

[16] A. LaTorre, S. Muelas, J.-M. Pena, R. Santana, A. Merchan-Perez, and J.-R. Rodriguez, "A differential evolution algorithm for the detection of synaptic vesicles," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1687–1694, IEEE, New Orleans, La, USA, June 2011.

[17] R. Santana, S. Muelas, A. LaTorre, and J. M. Pena, "A direct optimization approach to the P300 speller," in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 1747–1754, Dublin, Ireland, July 2011.

[18] S. Muelas, J. M. Pena, V. Robles, K. Muzhetskaya, A. LaTorre, and P. de Miguel, "Optimizing the design of composite panels using an improved genetic algorithm," in Proceedings of the International Conference on Engineering Optimization (EngOpt '08), J. Herskovits, A. Canelas, H. Cortes, and M. Aroztegui, Eds., COPPE/UFRJ, June 2008.

[19] W. Wang, Z. Xiang, and X. Xu, "Self-adaptive differential evolution and its application to job-shop scheduling," in Proceedings of the 7th International Conference on System Simulation and Scientific Computing – Asia Simulation Conference (ICSC '08), pp. 820–826, IEEE, Beijing, China, October 2008.

[20] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 1034–1040, June 2011.

[21] A. LaTorre, S. Muelas, and J. M. Pena, "Benchmarking a hybrid DE-RHC algorithm on real world problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 1027–1033, IEEE, New Orleans, La, USA, June 2011.

[22] S. N. Parragh, K. F. Doerner, and R. F. Hartl, "Variable neighborhood search for the dial-a-ride problem," Computers & Operations Research, vol. 37, no. 6, pp. 1129–1138, 2010.

[23] S. Muelas, A. LaTorre, and J.-M. Pena, "A variable neighborhood search algorithm for the optimization of a dial-a-ride problem in a large city," Expert Systems with Applications, vol. 40, no. 14, pp. 5516–5531, 2013.

[24] J. C. Dias, P. Machado, D. C. Silva, and P. H. Abreu, "An inverted ant colony optimization approach to traffic," Engineering Applications of Artificial Intelligence, vol. 36, pp. 122–133, 2014.

[25] S. Zhang, C. Lee, H. Chan, K. Choy, and Z. Wu, "Swarm intelligence applied in green logistics: a literature review," Engineering Applications of Artificial Intelligence, vol. 37, pp. 154–169, 2015.

[26] A. Galvez, A. Iglesias, and J. Puig-Pey, "Iterative two-step genetic-algorithm-based method for efficient polynomial B-spline surface reconstruction," Information Sciences, vol. 182, pp. 56–76, 2012.

[27] J. Yang, F. Liu, X. Tao, and X. Wang, "The application of evolutionary algorithm in B-spline curve fitting," in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '12), pp. 2809–2812, Sichuan, China, May 2012.

[28] A. Galvez and A. Iglesias, "A new iterative mutually coupled hybrid GA-PSO approach for curve fitting in manufacturing," Applied Soft Computing Journal, vol. 13, no. 3, pp. 1491–1504, 2013.


[29] L. Zhao, J. Jiang, C. Song, L. Bao, and J. Gao, "Parameter optimization for Bezier curve fitting based on genetic algorithm," in Advances in Swarm Intelligence, vol. 7928 of Lecture Notes in Computer Science, pp. 451–458, Springer, Berlin, Germany, 2013.

[30] Centro de Supercomputacion y Visualizacion de Madrid (CeSViMa), 2014, http://www.cesvima.upm.es.

[31] P. Jonas, G. Major, and B. Sakmann, "Quantal components of unitary EPSCs at the mossy fibre synapse on CA3 pyramidal cells of rat hippocampus," The Journal of Physiology, vol. 472, no. 1, pp. 615–663, 1993.

[32] K. M. Franks, T. M. Bartol Jr., and T. J. Sejnowski, "A Monte Carlo model reveals independent signaling at central glutamatergic synapses," Biophysical Journal, vol. 83, no. 5, pp. 2333–2348, 2002.

[33] D. A. Rusakov and D. M. Kullmann, "Extrasynaptic glutamate diffusion in the hippocampus: ultrastructural constraints, uptake, and receptor activation," The Journal of Neuroscience, vol. 18, no. 9, pp. 3158–3170, 1998.

[34] K. Zheng, A. Scimemi, and D. A. Rusakov, "Receptor actions of synaptically released glutamate: the role of transporters on the scale from nanometers to microns," Biophysical Journal, vol. 95, no. 10, pp. 4584–4596, 2008.

[35] A. Momiyama, R. A. Silver, M. Hausser et al., "The density of AMPA receptors activated by a transmitter quantum at the climbing fibre–Purkinje cell synapse in immature rats," The Journal of Physiology, vol. 549, no. 1, pp. 75–92, 2003.

[36] D. M. Bates and D. G. Watts, Nonlinear Regression Analysis and Its Applications, Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, Wiley-Interscience, New York, NY, USA, 1988.

[37] D. M. Bates and J. M. Chambers, "Nonlinear models," in Statistical Models in S, J. M. Chambers and T. J. Hastie, Eds., Wadsworth & Brooks/Cole, 1992.

[38] MathWorks, MATLAB and Statistics Toolbox Release 2013a, MathWorks, 2014, http://www.mathworks.com/support/sysreq/sv-r2013a.

[39] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, 2011.

[40] J. H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, Mich, USA, 1975.

[41] F. Herrera and M. Lozano, "Gradual distributed real-coded genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.

[42] R. Storn and K. V. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.

[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.

[44] H. Wang, Z. Wu, S. Rahnamayan, and L. Kang, "A scalability test for accelerated DE using generalized opposition-based learning," in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications (ISDA '09), pp. 1090–1095, Pisa, Italy, December 2009.

[45] F. J. Solis and R. J. B. Wets, "Minimization by random search techniques," Mathematics of Operations Research, vol. 6, no. 1, pp. 19–30, 1981.

[46] L.-Y. Tseng and C. Chen, "Multiple trajectory search for large scale global optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 3052–3059, IEEE, Hong Kong, June 2008.

[47] N. Hansen and A. Ostermeier, "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '96), pp. 312–317, May 1996.

[48] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.

[49] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 1769–1776, IEEE Press, New York, NY, USA, September 2005.

[50] H. Theil, Economic Forecasts and Policy, Contributions to Economic Analysis Series, North-Holland, New York, NY, USA, 1961, http://books.google.es/books?id=FPVdAAAAIAAJ.

[51] G. Taguchi, S. Chowdhury, and Y. Wu, Taguchi's Quality Engineering Handbook, John Wiley & Sons, 2005.

[52] J. H. Zar, Biostatistical Analysis, Pearson New International Edition, Prentice-Hall, 2013.

[53] W. W. Daniel, Applied Nonparametric Statistics, Duxbury Thomson Learning, 1990.

[54] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.

[55] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80–83, 1945.

[56] A. LaTorre, S. Muelas, and J.-M. Pena, "Evaluating the multiple offspring sampling framework on complex continuous optimization functions," Memetic Computing, vol. 5, no. 4, pp. 295–309, 2013.

[57] S. Muelas, J. M. Pena, V. Robles, A. LaTorre, and P. de Miguel, "Machine learning methods to analyze migration parameters in parallel genetic algorithms," in Proceedings of the International Workshop on Hybrid Artificial Intelligence Systems 2007, E. Corchado, J. M. Corchado, and A. Abraham, Eds., pp. 199–206, Springer, Salamanca, Spain, 2007.

