[Progress in Brain Research] Models of Brain and Mind - Physical, Computational and Psychological Approaches Volume 168 || The emergence of mind and brain: an evolutionary, computational, and philosophical approach

<ul><li><p>R. Banerjee &amp; B.K. Chakrabarti (Eds.)</p><p>Progress in Brain Research, Vol. 168</p><p>ISSN 0079-6123</p><p>Copyright © 2008 Elsevier B.V. All rights reserved. Chapter 10: The emergence of mind and brain: an evolutionary, computational, and philosophical approach. Klaus Mainzer, Chair for Philosophy of Science, Institute of Interdisciplinary Informatics, University of Augsburg, D-86135 Augsburg, Germany</p><p>Abstract: Modern philosophy of mind cannot be understood without recent developments in computer science, artificial intelligence (AI), robotics, neuroscience, biology, linguistics, and psychology. Classical philosophy of formal languages as well as symbolic AI assume that all kinds of knowledge must explicitly be represented by formal or programming languages. This assumption is limited by recent insights into the biology of evolution and developmental psychology of the human organism. Most of our knowledge is implicit and unconscious. It is not formally represented, but embodied knowledge, which is learnt by doing and understood by bodily interacting with changing environments. That is true not only for low-level skills, but even for high-level domains of categorization, language, and abstract thinking. The embodied mind is considered an emergent capacity of the brain as a self-organizing complex system. Actually, self-organization has been a successful strategy of evolution to handle the increasing complexity of the world. Genetic programs are not sufficient and cannot prepare the organism for all kinds of complex situations in the future. Self-organization and emergence are fundamental concepts in the theory of complex dynamical systems.
They are also applied in organic computing as a recent research field of computer science. Therefore, cognitive science, AI, and robotics try to model the embodied mind in an artificial evolution. The paper analyzes these approaches in the interdisciplinary framework of complex dynamical systems and discusses their philosophical impact.</p><p>Keywords: brain; mind; complex systems; nonlinear dynamics; self-organization; computational systems; artificial minds</p><p>From linear to nonlinear dynamics</p><p>Corresponding author. Tel.: +49-821-598-5568; Fax: +49-821-598-5584; E-mail: klaus.mainzer@philuni-augsburg.de. DOI: 10.1016/S0079-6123(07)68010-8</p><p>The brain is a complex cellular system of 10^11 neurons and 10^14 synaptic connections. In order to understand and to model the emergence of its mental functions, we must study the nonlinear dynamics of complex systems. In general, a dynamical system is a time-depending multi-component system of elements with local states determining a global state of the whole system. In a planetary system, for example, the state of a planet at a certain time is determined by its position and momentum. The states can also refer to moving molecules in a gas, the excitation of neurons in a neural network, nutrition of organisms in an ecological system, supply and demand of economic markets, the behavior of social groups in human societies, routers in the complex network of the internet, or units of complex electronic equipment in a car.</p></li><li><p>The dynamics of a system, that is, the change of system states depending on time, is represented by linear or nonlinear differential equations. In the case of nonlinearity, several feedback activities take place between the elements of the system.
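Such nonlinear feedback can make a system's behavior depend sensitively on its initial conditions. A minimal numerical sketch with the logistic map, a standard textbook example chosen here for illustration (the map, parameter, and initial conditions are not taken from the chapter):

```python
# Illustrative sketch: two trajectories of the logistic map that start
# almost identically and nevertheless separate very quickly.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x) from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)            # reference trajectory
b = logistic_trajectory(0.4 + 1e-10)    # perturbed by one part in 10^10

# Separation at each step: starts around 1e-10 and grows by many orders
# of magnitude, until it saturates at the size of the attractor.
seps = [abs(x - y) for x, y in zip(a, b)]
```

In a linear system the separation would remain proportional to the initial perturbation; here it grows roughly exponentially, which is exactly the breakdown of the strong concept of causality discussed in the text.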
These many-body problems correspond to nonlinear and nonintegrable equations with instabilities and sometimes chaos (Mainzer, 2007).</p><p>From a philosophical point of view, mathematical linearity means a strong concept of causality, with similar causes or inputs of a dynamical system leading to similar effects or outputs: small changes in the parameters or small perturbations added to the values of the variables produce small changes in subsequent values of the variables. Further on, composite effects of linear systems can be reduced to the sum of more simple effects. Therefore, scientists have used linear equations to simplify the way in which we think about the behavior of complex systems. The principle of superposition has its roots in the concept of linearity. But, in the case of nonlinearity, similar causes lead to exponentially separating and expanding effects: small changes in the parameters or small perturbations added to the values of the variables can produce enormous changes in subsequent values of the variables because of the system's sensitivity to initial conditions. In this case, the whole is more than the sum of its elements.</p><p>The mathematical theory of nonlinear dynamics distinguishes different types of time-depending equations, generating different types of behavior, such as fixed points, limit cycles, and chaos. In a top-down approach of model building, we start with an assumed mathematical model of a natural or technical system and deduce its behavior by solving the corresponding dynamical equations under certain initial conditions. The solutions can be represented geometrically as trajectories in the phase space of the dynamical system and classified by different types of attractors. But, in practice, we often adopt the opposite method of a bottom-up approach. Physicists, chemists, biologists, physicians, or engineers start with data mining in an unknown field of research.
They only get a finite series of measured data corresponding to time-depending events of a dynamical system. From these data they must reconstruct the behavior of the system in order to guess the type of its dynamical equation. Therefore, the bottom-up approach is called time series analysis. In many cases, we have no knowledge of the system from which the data was acquired. Time series analysis then aims to construct a black box, which takes the measured data as input and provides as output a mathematical model describing the data (Small, 2005; Floridi, 2004). In practice, the realistic strategy of research is a combination of the top-down approach with model building and the bottom-up approach with time series analysis of the measured data.</p><p>In classical measurement theory, measurement error is analyzed by statistical methods, such as the correlation coefficient and the autocorrelation function. But these standard procedures are not able to distinguish between data from linear and nonlinear models. In nonlinear data analysis, the measured data are used in a first step to reconstruct the dynamics of the system in a phase space. Nonlinear dynamical systems generating chaos must be determined by at least three equations. For example, a three-dimensional attractor is generated in a phase space with three coordinates x(t), y(t), and z(t), which are determined by three time-depending nonlinear differential equations. But, in practice, it is often difficult to distinguish several variables of a system. Nevertheless, if only one variable can be measured, an attractor with a finite number of dimensions can be reconstructed from the measured time series with great similarity to the original attractor of the system. We must only assume that we can also measure the derivative of that variable, and further higher-order derivatives up to some finite level d. Then, if the dimension of the system is less than d, we have enough information to completely describe the system with d differential or difference equations.
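In practice, such a reconstruction from a single measured variable is usually carried out with delay coordinates rather than measured derivatives. A minimal sketch of a delay embedding (the signal, delay, and dimension are illustrative assumptions, not from the chapter):

```python
import math

def delay_embed(series, dim, delay):
    """Delay-coordinate embedding: map a scalar time series to the vectors
    (x_t, x_{t+delay}, ..., x_{t+(dim-1)*delay})."""
    n = len(series) - (dim - 1) * delay
    return [tuple(series[i + k * delay] for k in range(dim)) for i in range(n)]

# Stand-in "measurement" of a single variable of some system.
signal = [math.sin(0.1 * t) for t in range(200)]

# Reconstructed trajectory in a 3-dimensional phase space.
points = delay_embed(signal, dim=3, delay=5)
```

Each reconstructed point bundles d successive measurements of the same variable, which plays the role of measuring the system at d different time intervals.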
Measuring d derivatives is equivalent to measuring the system at d different time intervals. Therefore, according to Takens' theorem, the measured time series of a variable can be embedded in a reconstructed phase space with d dimensions. The sequence of points created by embedding the measured time series is a reconstructed trajectory of the original system, generating an attractor with great similarity to the original one of the system.</p></li><li><p>In practice, decisions about chaotic dynamics are rather difficult. How can we decide that a time series of measured data is not generated by noisy irregularity but by highly structured chaotic attractors? A chaotic attractor is determined by a trajectory in a bounded region of a phase space with aperiodic behavior and sensitive dependence on initial conditions. These criteria (determinism, boundedness, aperiodicity, and sensitivity) can be checked by several techniques of time series analysis. In the case of noise, the trajectories spread unbounded all over the phase space. A chaotic attractor is finite and always bounded in a certain region of the phase space. Aperiodicity means that the states of a dynamical system never return to their previous values. But values of states may return more or less to the vicinity of previous values. Thus, aperiodicity is a question of degree, which can be studied in recurrence plots of measured points. Such plots depict how the reconstructed trajectory recurs or repeats itself. The correlation integral defines the density of points in a recurrence plot where the measured time series are closer than a certain degree of distance.</p><p>If a time series is generated by a chaotic system, the trajectory of the time series, which is reconstructed from the measurement data of embedding, has the same topological properties as the original attractor of the system, as long as the embedding dimension is large enough.
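A recurrence plot of this kind can be computed directly from the reconstructed states. A minimal sketch, using a periodic toy trajectory so that the recurrences are easy to check (the signal and the threshold eps are illustrative choices, not from the chapter):

```python
import math

def recurrence_matrix(states, eps):
    """R[i][j] = 1 if the states at times i and j are closer than eps."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    n = len(states)
    return [[1 if dist(states[i], states[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# States on a cycle of period 20: the trajectory returns to the vicinity
# of earlier states every 20 steps, so recurrences appear in lines
# parallel to the diagonal at multiples of the period.
states = [(math.cos(2 * math.pi * t / 20), math.sin(2 * math.pi * t / 20))
          for t in range(60)]
R = recurrence_matrix(states, eps=0.1)

# The correlation integral is simply the density of 1s in R at this eps.
density = sum(map(sum, R)) / (len(R) ** 2)
```

For a noisy signal the recurrent pairs would be scattered, and for a chaotic one they would recur only approximately, so aperiodicity shows up as a matter of degree, as described above.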
Takens provided a method for finding an appropriate embedding dimension for the reconstruction of an attractor. But this method yields no procedure for finding a chaotic attractor, because its existence has already been assumed in order to determine its dimension from the measured data.</p><p>Another way to characterize chaotic dynamics is to measure the strength of their sensitive dependence on initial data. Consider two trajectories starting from nearly the same initial data. In chaotic dynamics, only a tiny difference in the initial conditions can result in the two trajectories diverging with exponential speed in the phase space after a short period of time. In this case, it is difficult to calculate long-term forecasts, because the initial data can only be determined with a finite degree of precision. Tiny deviations in digits behind the decimal point of measurement data may lead to completely different forecasts. This is the reason why attempts to forecast weather fail in an unstable and chaotic situation. In principle, the wing of a butterfly may cause a global change of development. This butterfly effect can be measured by the so-called Lyapunov exponent L. A trajectory x(t) starts with an initial state x(0). If it develops exponentially fast, then it is approximately given by |x(t)| ≈ |x(0)|e^(Lt). The exponent is smaller than zero if the trajectory is attracted by attractors, such as stable points or orbits. It is larger than zero if it is divergent and sensitive to very small perturbations of the initial data.</p><p>An attractor is typically a finite region in the phase space. Sensitivity to initial conditions means that any nearby points on the attractor in the phase space diverge from each other. They cannot, however, diverge forever, because the attractor is finite. Thus, trajectories from nearby initial points on the attractor diverge and are folded back onto the attractor, diverge and are folded back, etc. The structure of the attractor consists of many fine layers, like an exquisite pastry.
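The exponent L can be estimated numerically as the average logarithmic stretching rate along a trajectory. A sketch for the logistic map, an illustrative choice whose exponent at r = 4 is known analytically to be ln 2 (about 0.69):

```python
import math

def lyapunov_logistic(x0=0.3, r=4.0, steps=100_000, discard=100):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the mean of
    log|f'(x_n)| = log|r*(1 - 2*x_n)| along the trajectory."""
    x = x0
    for _ in range(discard):              # skip the initial transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / steps

L_chaotic = lyapunov_logistic(r=4.0)   # positive: sensitive dependence
L_stable = lyapunov_logistic(r=2.5)    # negative: attracted to a fixed point
```

A positive estimate signals exponential divergence of nearby trajectories; a negative one signals attraction toward a stable point or orbit, matching the sign convention described above.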
The closer one looks, the more detail in the adjacent layers of the trajectories is revealed. Thus, the attractor is fractal. An attractor that is fractal is called strange. There are also chaotic systems that are only exponentially sensitive to initial conditions but not strange. Attractors can also be strange (fractal), but not chaotic. The fractal dimension of an attractor is related to the number of independent variables needed to generate the time series of the values of the variables. If d is the smallest integer greater than the fractal dimension of the attractor, then the time series can be generated by a set of d differential equations with d independent variables. For example, a strange attractor of fractal dimension 2.03 needs three nonlinear coupled differential equations to generate its trajectory.</p></li><li><p>In summary, dynamical systems can be classified by attractors with increasing complexity, from fixed points, periodic and quasi-periodic behavior up to chaotic behavior. This classification of attractors can be characterized by different methods, such as typical patterns of time series, their power spectrum, phase portraits in a phase space, Lyapunov exponents, or fractal dimensions. A remarkable measure of complexity is the Kolmogorov-Sinai (KS) entropy, measuring the information flow in a dynamical system (Deco and Schürmann, 2000; Mainzer, 2007).</p><p>A dynamical system can be considered an information-processing machine, computing a present or future state as output from an initial past state of input. Thus, the computational efforts to determine the states of a system characterize the computational complexity of a dynamical system. The transition from regular to chaotic systems corresponds to increasing computational problems, according to the computational degrees in the theory of computational complexity. In statistical mechanics, the information flow of a dynamical system describes the intrinsic evolution of statistical correlations between its past and future states.
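A fractal dimension can itself be estimated from data, in the spirit of the Grassberger-Procaccia algorithm, as the slope of log C(eps) against log eps, where C(eps) is the correlation integral. To keep the sketch verifiable, it is applied here to points on a circle, whose dimension is exactly 1; for a strange attractor one would feed in the reconstructed trajectory instead. The point set and the two scales are illustrative assumptions.

```python
import math

def correlation_integral(points, eps):
    """Fraction of point pairs (including i == j) closer than eps."""
    n = len(points)
    close = 0
    for i in range(n):
        for j in range(n):
            if math.dist(points[i], points[j]) < eps:
                close += 1
    return close / (n * n)

def correlation_dimension(points, eps1, eps2):
    """Two-scale slope estimate of the correlation dimension."""
    c1 = correlation_integral(points, eps1)
    c2 = correlation_integral(points, eps2)
    return math.log(c1 / c2) / math.log(eps1 / eps2)

# A one-dimensional test set: 600 points on the unit circle.
circle = [(math.cos(2 * math.pi * k / 600), math.sin(2 * math.pi * k / 600))
          for k in range(600)]
D = correlation_dimension(circle, 0.1, 0.2)
```

The estimate D comes out close to 1 for the circle; a non-integer value such as 2.03 on measured data would indicate a strange attractor and, by the rule above, three generating equations.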
The KS-entropy is an extremely useful concept in studying the loss of predictable information in dynamical systems, according to the complexity degrees of their attractors. Actually, the KS-entropy yields a measure of the prediction uncertainty of a future state, provided the whole past is known (with finite precision).</p><p>In the case of fixed points and limit cycles, with oscillating or quasi-oscillating behavior, there is no uncertainty or loss of information, and the prediction of a future state can be computed from the past. Consequently, the KS-entropy is zero. In chaotic systems with sensitive dependence on the initial states, there is a finite loss of information for predictions of the future, according to the decay of correlations between the past states and the future state of prediction. The finite degree of uncertainty of a predicted state increases linearly with its number of steps in the future, given the entire past. In the case of chaos, the KS-entropy has a finite value (larger than zero). But in the case of noise, the KS-entropy becomes infinite, which means a complete loss of predicting information, corresponding to the decay of all correlations (i.e., statistical independence) between the past and the noisy state of the future. The degree of uncertainty becomes infinite.</p><p>Self-organization and emergence in evolution</p><p>How can the knowledge of chaos be applied in order to control risky and unstable situations in complex systems? This question will be a challenge for modeling the brain with millions of interacting cells in nonlinear dynamics. It seems to be paradoxical that chaotic systems, which are extremely sensitive to the tiniest fluctuations, can be controlled. But nowadays the control of chaos has been realized in chemical, fluid, and biological systems. In technology, for example, the intrinsic instability of chaotic celestial orbits is routinely used to advantage by international space agencies who divert space...</p></li></ul>