

BRAIN AND COGNITION 34, 5–27 (1997) ARTICLE NO. BR970904

Emergent Models of Supple Dynamics in Life and Mind

Mark A. Bedau

Department of Philosophy, Reed College

The dynamical patterns in mental phenomena have a characteristic suppleness—a looseness or softness that persistently resists precise formulation—which apparently underlies the frame problem of artificial intelligence. This suppleness also undermines contemporary philosophical functionalist attempts to define mental capacities. Living systems display an analogous form of supple dynamics. However, the supple dynamics of living systems have been captured in recent artificial life models, due to the emergent architecture of those models. This suggests that analogous emergent models might be able to explain supple dynamics of mental phenomena. These emergent models of the supple mind, if successful, would refashion the nature of contemporary functionalism in the philosophy of mind. 1997 Academic Press

1. QUESTIONS ABOUT DYNAMICS IN MIND AND LIFE

Pattern permeates the dynamics of our mental lives. Our beliefs and desires arise, evolve, and decay, for example, in relationship with our experiences, our other mental states, and our behavior, in more or less regular ways. These dynamical patterns are real even though they are not mechanical or exceptionless, even though they hold only ceteris paribus, that is, only if everything else is equal. And we are all at least roughly familiar with the overall shape of these global mental patterns; in fact, their familiarity to ordinary folk has led them to be called a ‘‘folk theory’’ of the mind (Churchland, 1981).

For valuable discussion, many thanks to Colin Allen, Hugo Bedau, Kate Elgin, Bob French, Mark Hinchliff, Cliff Hooker, Terry Horgan, Melanie Mitchell, Norman Packard, David Reeve, Dan Reisberg, Carol Voeller, two anonymous reviewers, and audiences at Dartmouth College, Oregon State University, Portland State University, Pomona College, University of British Columbia, University of California at Los Angeles, University of California at San Diego, University of Illinois at Urbana–Champaign, University of Newcastle, University of New Hampshire, and Tufts University. Thanks also to my fellow panelists at the Workshop on Artificial Life, ‘‘A Bridge Toward a New Artificial Intelligence,’’ at the University of the Basque Country, where some of these ideas were discussed. For grant support that helped make this work possible, thanks to the Oregon Center for the Humanities and the Oregon Humanities Council.

Address reprint requests to Mark A. Bedau at 3213 SE Woodstock Blvd., Portland, OR 97202. E-mail: [email protected]. Fax: (503) 777-7769.

0278-2626/97 $25.00
Copyright 1997 by Academic Press. All rights of reproduction in any form reserved.

There is no question that these patterns are an important facet of the mind. In fact, functionalism—the dominant position in contemporary philosophy of mind—uses these very patterns to define what it is to have a mind. Still, deep questions about the nature and status of these patterns remain open. I contend in this paper that these patterns are difficult to describe and explain because of a special quality, explained below, which I call ‘‘suppleness.’’ One typical sign of this suppleness is that the patterns can be adequately described only by employing ceteris paribus clauses or their equivalent. Since functionalism defines the mind in terms of the patterns exhibited by mental phenomena, functionalism needs a way to describe and explain the suppleness of mental dynamics.

Patterns in the dynamics of living systems exhibit a similar sort of suppleness. Thus, suppleness in living systems might help illuminate supple mental dynamics, especially given that recent simulations in the new interdisciplinary science of artificial life provide strikingly plausible explanations of some of life's supple dynamics. Two examples are the dynamics of flocking and of evolution. Reynolds' model of flocking is one of the simplest and most widely known artificial life models, and its supple flocking dynamics are especially vivid. Packard's model of evolution is more complex because the microdynamics underlying it evolves over time, but the supple dynamics in Packard's model are more typical of work in artificial life. Both models are discussed below.

These two models illustrate how the distinctive emergent architecture of artificial life models accounts for the supple dynamics of life. Section 2 introduces Reynolds' model of flocking, explains the key concepts of supple dynamics and emergent models, and then uses Reynolds' model to illustrate both concepts. Section 3 introduces Packard's model of evolving life forms, and this is used to provide a more detailed example of supple dynamics and to show again how an emergent model explains this suppleness. Section 4 shows that mental dynamics exhibits an analogous form of suppleness. The success of emergent models of supple dynamics in artificial life suggests that we might be able to explain the mind's supple dynamics if we could devise models of mental phenomena with an analogous emergent architecture. Section 4 explores what these emergent models of supple mental dynamics would look like. Finally, Section 5 shows how we can use this idea of an emergent model of supple mental dynamics to devise an emergent form of functionalism which does justice to the mind's supple dynamics.

2. REYNOLDS’ EMERGENT MODEL OF SUPPLE FLOCKING

Flocks of birds exhibit impressive macro-level behavior. One can easily recognize patterns or regularities in global flocking behavior. Collecting and categorizing these regularities of flocking behavior yields a folk theory of flocking, analogous to folk theories of mental dynamics. The most obvious regularity of flocking is simply that the flock exists at all. While the individual birds fly this way and that, at the global level a flock organizes and persists. The flock maintains its cohesion while moving ahead, changing direction, or negotiating obstacles. These global patterns are especially impressive since they are achieved without any global control. No individual bird is issuing flight instructions to the rest of the flock; no central authority is even aware of the global state of the flock. The global behavior is simply the aggregate effect of the microcontingencies of individual bird trajectories.

I want to call attention to one particular feature of global-level flocking regularities—their suppleness. This suppleness is a certain kind of fluidity or softness in the regularities. For example, flocks maintain their cohesion not always but only for the most part, only ceteris paribus. The fact that we need to use ceteris paribus clauses or their equivalent to describe these regularities is one clue that the regularities are supple. The suppleness of the flock cohesion regularity is associated with the kind of exceptions that the regularity has. Sometimes the flock cannot maintain its cohesion because the wind is too strong (or predators are too plentiful, or the birds are too hungry, etc.). Other times the flock cohesion is broken because the flock flies into an obstacle (like a tree) and splits into two subflocks. Such flock splitting especially reveals flocking's suppleness, for flock splitting is an exception that proves the rule that flocks maintain their cohesion. In these circumstances, the best way for the birds to serve the underlying purposes of flocking is for them to split into two subflocks, each of which then preserves its own cohesion. Thus, in these circumstances, splitting actually reflects and serves the underlying goals that lead to the flock cohesion rule, while slavishly preserving the flock's cohesion would have violated those goals.

In general, supple regularities share two features. First, the regularities have exceptions; they cannot be expressed as precise and exceptionless regularities. This open texture is often reflected in formulations of supple regularities by the use of ceteris paribus clauses (or some similar phrase). Second, some of the regularity's exceptions prove the rule; that is, they are appropriate in the context since they achieve the system's underlying goals better than slavishly following the rule would have, and they occur because they are appropriate. These exceptions that prove the rule reflect an underlying capacity to respond appropriately to an unpredictable variety of contingencies.

That flocking consists of supple dynamical regularities is obvious enough once you look for it. It is interesting not because it is surprising but because of its implications for how to model flocking. Consider first what I call ‘‘brute force’’ models of flocking. In a brute force flocking model, each bird's moment-to-moment trajectory in principle is affected by the behavior of every other bird in the flock, i.e., by the global state of the flock. An illustration of this kind of model (in a slightly different context) is the computer animation method used in the Star Wars movies. In Star Wars we see computer animation not of bird flocks but of fleets of futuristic spaceships. Those sequences of computer-animated interstellar warfare between different fleets of spaceships were produced by human programmers carefully scripting each frame, positioning each ship at each moment by reference to its relationship with (potentially) every other ship in the fleet. In other words, the programmer, acting like a God, is omniscient and omnipotent about the fleet's global state and uses this information to navigate each ship.

This brute force modeling approach has two important consequences. The first is that the behavior of the fleet seems a bit rigid or scripted; it does not look entirely natural to the eye. This effect is not surprising, since producing natural fleet behavior requires the programmer-as-God to properly anticipate the contingent effects of minute adjustments in individual ship trajectories. In principle, the programmer can make the fleet behavior incrementally more natural by adjusting individual trajectories; in practice, the programming time required grows prohibitively. (I have heard that the computer animation in Star Wars comprised the most expensive minutes of film ever produced.) Second, if the size of the fleet grows, the computational expense of the brute force modeling approach again grows prohibitively. Adding even one more ship in principle can require adjusting the behavior of every other ship in the fleet. In other words, the brute force model succumbs to a combinatorial explosion, and so is not feasibly computable.

Brute force models contrast with what I will call ‘‘emergent’’ models, which are nicely illustrated by Reynolds' (1987, 1992) celebrated model of flocking ‘‘boids.’’ When one views Reynolds' flocking demos, one is vividly struck by how natural the flocking behavior seems. The boids spontaneously organize into a flock that then maintains its cohesion as it moves and changes direction and negotiates obstacles, fluidly flowing through space and time. The flock is a loosely formed group, so loose that individual boids sometimes lose contact with the rest of the flock and fly off on their own, only to rejoin the flock if they come close enough to the flock's sphere of influence. The flock appropriately adjusts its spatial configuration and motion in response to internal and external circumstances. For example, the flock maintains its cohesion as it follows along a wall; also, the flock splits into two subflocks if it runs into a column, and then the two subflocks will merge back into one when they have flown past the column. These dynamical flocking regularities are supple in the sense that their precise form varies in response to contextual contingencies (the angle of the wall, the shape and distribution of the columns, etc.) so that the flock automatically adjusts its behavior in a way that is appropriate given these changing circumstances.

The boids model produces these natural, supple flocking dynamics as the emergent aggregate effect of micro-level boid activity. No entity in the boids model has any information about the global state of the flock, and no entity controls boid trajectories with global state information. No boid issues flight plans to the other boids. No programmer-as-God scripts specific trajectories for individual boids. Instead, each individual boid's behavior is determined by three simple rules that key off of a boid's neighbors: seek to maintain a certain minimum distance from nearby boids, seek to match the speed and direction of nearby boids, and seek to steer toward the center of gravity of nearby boids. (In addition, boids seek to avoid colliding with objects in the environment and are subject to the laws of physics.)
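The three rules can be sketched in a few dozen lines of code. The sketch below is a minimal two-dimensional illustration, not Reynolds' actual implementation; the neighbor radius, minimum distance, and rule weights are hypothetical choices made for clarity.

```python
# Minimal 2D boids sketch. The radii and rule weights below are
# illustrative assumptions, not Reynolds' published parameters.

NEIGHBOR_RADIUS = 5.0   # how far a boid can "see" its flockmates
MIN_DISTANCE = 1.0      # rule 1: keep at least this far from neighbors

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def neighbors(boid, flock):
    """Flockmates within NEIGHBOR_RADIUS; each boid sees only local state."""
    return [b for b in flock if b is not boid and
            (b.x - boid.x) ** 2 + (b.y - boid.y) ** 2 < NEIGHBOR_RADIUS ** 2]

def step(flock, dt=0.1):
    updates = []
    for boid in flock:
        near = neighbors(boid, flock)
        if not near:
            updates.append((0.0, 0.0))
            continue
        too_close = [b for b in near
                     if (b.x - boid.x) ** 2 + (b.y - boid.y) ** 2 < MIN_DISTANCE ** 2]
        # Rule 1 (separation): steer away from boids that are too close.
        sep_x = sum(boid.x - b.x for b in too_close)
        sep_y = sum(boid.y - b.y for b in too_close)
        # Rule 2 (alignment): match neighbors' average velocity.
        ali_x = sum(b.vx for b in near) / len(near) - boid.vx
        ali_y = sum(b.vy for b in near) / len(near) - boid.vy
        # Rule 3 (cohesion): steer toward neighbors' center of gravity.
        coh_x = sum(b.x for b in near) / len(near) - boid.x
        coh_y = sum(b.y for b in near) / len(near) - boid.y
        updates.append((0.5 * sep_x + 0.1 * ali_x + 0.01 * coh_x,
                        0.5 * sep_y + 0.1 * ali_y + 0.01 * coh_y))
    # Apply all velocity updates at once, then move every boid.
    for boid, (ax, ay) in zip(flock, updates):
        boid.vx += ax
        boid.vy += ay
        boid.x += boid.vx * dt
        boid.y += boid.vy * dt
```

Note that `step` reads only each boid's local neighborhood; nothing in the program ever computes or consults a global flock state, which is exactly the architectural point.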

In order to appreciate in what sense the boids model is emergent, note that it consists of a micro-level and a macro-level. I should stress that I am using ‘‘micro’’ and ‘‘macro’’ in a generalized sense. Micro-level entities need not be literally microscopic; birds are not. Micro and macro are relative terms; an entity exists at a micro-level relative to a macro-level population of similar micro-level entities. These levels can be nested. Relative to a flock, an individual bird is a micro-level entity; but an individual bird is a macro-level object relative to the micro-level genetic elements (say) that determine the bird's behavioral proclivities.

The boids model is emergent, in the sense intended here, because of the way in which it generates complex macro-level dynamics from simple micro-level mechanisms. This form of emergence arises in contexts in which there is a system, call it S, composed out of ‘‘micro-level’’ parts. The number and identity of these parts might change over time. S has various ‘‘macro-level’’ states (macrostates) and various ‘‘micro-level’’ states (microstates). S's microstates are the states of its parts. S's macrostates are structural properties constituted wholly out of microstates; macrostates typically are various kinds of statistical averages over microstates. Further, there is a relatively simple and implementable microdynamic, call it D, which governs the time evolution of S's microstates. In general, the microstate of a given part of the system at a given time is a result of the microstates of ‘‘nearby’’ parts of the system at preceding times. Given these assumptions, I will say that a macrostate P of system S with microdynamic D is emergent if and only if P (of system S) can be explained from D, given complete knowledge of external conditions, but P can be predicted (with complete certainty) from D only by simulating D, even given complete knowledge of external conditions. So, we can say that a model is emergent if and only if its macrostates are emergent in the sense just defined.

Although this is not the occasion to develop and defend this concept of emergence (see Bedau, 1997), I should clarify three things. First, ‘‘external conditions’’ are conditions affecting the system's microstates that are extraneous to the system itself and its microdynamic. One kind of external condition is the system's initial condition. If the system is open, then another kind of external condition is the contingencies of the flux of parts and states into S. If the microdynamic is nondeterministic, then each nondeterministic effect is another external condition.


Second, given the system's initial condition and other external conditions, the microdynamic completely determines each successive microstate of the system. The macrostate P is a structural property constituted out of the system's microstates. Thus, the external conditions and the microdynamic completely determine whether or not P obtains. In this specific sense, the microdynamic plus the external conditions ‘‘explain’’ P. One must not expect too much from these explanations. For one thing, the explanation depends on the massive contingencies in the initial conditions. It is awash with accidental information about S's parts. Furthermore, the explanation might be too detailed for anyone to ‘‘survey’’ or ‘‘grasp.’’ It might even obscure a simpler, macro-level explanation that unifies systems with different external conditions and different microdynamics. Nevertheless, since the microdynamic and external conditions determine P, they explain P.

Third, in principle we can always predict S's behavior with complete certainty, for given the microdynamic and external conditions we can always simulate S as accurately as we want. Thus, the issue is not whether S's behavior is predictable—it is, trivially—but whether we can predict S's behavior only by simulating S. When trying to predict a system's emergent behavior, in general one has no choice but simulation. This notion of predictability only through simulation is not anthropocentric; nor is it a product of some specifically human cognitive limitation. Even a Laplacian supercalculator would need to observe simulations to discover a system's emergent macrostates.
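A toy case may make this concrete. An elementary cellular automaton has a microdynamic D that can be stated in one line, yet for a generic rule the only known route to a later macrostate (say, the density of live cells after many steps) is to run the dynamics. The sketch below uses rule 110 purely as an illustration of my own choosing; it is not drawn from the paper.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton (periodic boundary).
    Each cell's next state depends only on itself and its two neighbors;
    this lookup is the entire microdynamic D."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def density_after(cells, t):
    """A macrostate: the fraction of live cells after t steps. For a
    generic rule there is no known shortcut; we must simulate all t steps."""
    for _ in range(t):
        cells = step(cells)
    return sum(cells) / len(cells)
```

Everything about the system is fixed by the rule number and the initial condition, so the macrostate is fully determined and, trivially, predictable; the point is that even a Laplacian calculator gets it only by running `step` t times.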

In the case of the boids model, individual boids are micro-level entities, and a boid flock is a macro-level entity constituted wholly by an aggregate of micro-level boids. Aside from the programmer's direct control over a few features of the environment (placement of walls, columns, etc.), the model's explicit dynamics govern only the local behavior of the individual boids; the explicit model is solely microdynamical. Each boid acts independently in the sense that its behavior is determined solely by following the imperatives of its own internal rules. (Of course, all boids have the same internal rules, but each boid applies the rules in a way that is sensitive to the contingencies of its own immediate environment.) An individual boid's dynamical behavior affects and is affected by only certain local features of its environment—nearby boids and other nearby objects such as walls and columns. The boids model contains no explicit directions for flock dynamics. The flock behavior consists of the aggregated individual boid trajectories, and the flock's implicit macro-level dynamics are constituted out of the boids' explicit micro-level dynamics. The flock dynamic is emergent in our sense because, although it is constituted solely by the micro-level dynamics, it can be studied and understood in detail only empirically, through simulations.

3. PACKARD’S EMERGENT MODEL OF SUPPLE ADAPTATION

Evolving life forms display various macro-level patterns on an evolutionary time scale. For example, advantageous traits that arise through mutations tend, ceteris paribus, to persist and spread through the population. Furthermore, organisms' traits tend, within limits and ceteris paribus, to adapt to changing environmental contingencies. Of course, these patterns are not precise and exceptionless universal generalizations; they are vague generalities that hold only for the most part. Some of this vagueness is due to context-dependent fluctuations in what is appropriate. In those cases, the macro-level evolutionary dynamics are supple, in the sense intended here. These sorts of supple dynamics of adaptation result not from any explicit macro-level control (e.g., God does not adjust allele frequencies so that creatures are well adapted to their environment); rather, they emerge statistically from the micro-level contingencies of natural selection.

Norman Packard devised a simple model of evolving sensorimotor agents which demonstrates how these sorts of supple, macro-level evolutionary dynamics can emerge implicitly from an explicit microdynamical model (Packard, 1989; Bedau & Packard, 1992; Bedau, Ronneburg, & Zwick, 1992; Bedau & Bahm, 1994; Bedau, 1994; Bedau & Seymour, 1994; Bedau, 1995). What motivates this model is the view that evolving life is typified by a population of agents whose continued existence depends on their sensorimotor functionality, i.e., their success at using local sensory information to direct their actions in such a way that they can find and process the resources they need to survive and flourish. Thus, information processing and resource processing are the two internal processes that dominate the agents' lives, and their primary goal—whether they know this or not—is to enhance their sensorimotor functionality by coordinating these internal processes. Since the requirements of sensorimotor functionality may well alter as the context of evolution changes, continued viability and vitality require that sensorimotor functionality can adapt in an open-ended, autonomous fashion. Packard's model attempts to capture an especially simple form of this open-ended, autonomous evolutionary adaptation.

The model consists of a finite two-dimensional world with a resource field and a population of agents. An agent's survival and reproduction are determined by the extent to which it finds enough resources to stay alive and reproduce, and an agent's ability to find resources depends on its sensorimotor functionality—that is, the way in which the agent's perception of its contingent local environment affects its behavior in that environment. An agent's sensorimotor functionality is encoded in a set of genes, and these genes can mutate when an agent reproduces. Thus, on an evolutionary time scale, the process of natural selection implicitly adapts the population's sensorimotor strategies to the environment. Furthermore, the agents' actions change the environment because agents consume resources and collide with each other. This entails that the mixture of sensorimotor strategies in the population at a given moment is a significant component of the environment that affects the subsequent evolution of those strategies. Thus, the ‘‘fitness function’’ in Packard's model—what it takes to survive and reproduce—is constantly buffeted by the contingencies of natural selection and unpredictably changes (Packard, 1989).

All macro-level evolutionary dynamics produced by this model ultimately are the result of an explicit microdynamic acting on external conditions. The model explicitly controls only local micro-level states: resources are locally replenished, an agent's genetically encoded sensorimotor strategy determines its local behavior, an agent's behavior in its local environment determines its internal resource level, an agent's internal resource level determines whether it survives and reproduces, and genes randomly mutate during reproduction. Each agent is autonomous in the sense that its behavior is determined solely by the environmentally sensitive dictates of its own sensorimotor strategy. On an evolutionary time scale these sensorimotor strategies are continually refashioned by the historical contingencies of natural selection. The aggregate long-term behavior of this microdynamic generates macro-level evolutionary dynamics only as the indirect product of an unpredictably shifting agglomeration of directly controlled micro-level events (individual actions, births, deaths, mutations). Many of these evolutionary dynamics are emergent; although constituted and generated solely by the micro-level dynamic, they can be derived only through simulations. I will illustrate these emergent dynamics with some recent work concerning the evolution of evolvability (Bedau & Seymour, 1994).
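The micro-level bookkeeping just listed can be sketched schematically. The code below is a drastically simplified stand-in for Packard's model, not the published implementation: the grid size, energy costs, thresholds, and two-state sensor are all my own assumptions. What it is meant to show is that every explicitly controlled state is local: sensing, moving, eating, dying, and reproducing with mutation.

```python
import random

# Schematic stand-in for Packard's sensorimotor agents. All numbers
# (world size, costs, thresholds, mutation rate) are illustrative only.

WORLD = 16                                   # 16 x 16 toroidal grid
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
MUTATION_RATE = 0.01

class Agent:
    def __init__(self, x, y, genes, energy=10.0):
        self.x, self.y = x, y
        self.genes = genes                   # one move per sensed state
        self.energy = energy

def step(agents, field, rng):
    # Local resource replenishment at a few random sites.
    for _ in range(8):
        field[rng.randrange(WORLD)][rng.randrange(WORLD)] += 1.0
    survivors = []
    for a in agents:
        sense = 1 if field[a.x][a.y] > 0 else 0
        dx, dy = MOVES[a.genes[sense]]       # strategy dictates local behavior
        a.x, a.y = (a.x + dx) % WORLD, (a.y + dy) % WORLD
        a.energy += field[a.x][a.y]          # eat whatever is here
        field[a.x][a.y] = 0.0
        a.energy -= 1.0                      # cost of living
        if a.energy <= 0:
            continue                         # death by starvation
        if a.energy > 20.0:                  # reproduce: split energy, mutate genes
            a.energy /= 2.0
            genes = [g if rng.random() > MUTATION_RATE else rng.randrange(len(MOVES))
                     for g in a.genes]
            survivors.append(Agent(a.x, a.y, genes, a.energy))
        survivors.append(a)
    agents[:] = survivors
```

Nothing in `step` computes population-level quantities; the evolutionary trajectory of the strategy mixture is whatever falls out of iterating these local events.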

The ability to successfully adapt depends on the availability of viable evolutionary alternatives. An appropriate quantity of alternatives can make evolution easy; too many or too few can make evolution difficult or even impossible. For example, in Packard's model, the population can evolve better sensorimotor strategies only if it can ‘‘test’’ sufficiently many sufficiently novel strategies; in short, the system needs a capacity for evolutionary ‘‘innovation.’’ At the same time, the population's sensorimotor strategies can adapt to a given environment only if strategies that prove beneficial can persist in the gene pool; in short, the system needs a capacity for evolutionary ‘‘memory.’’

Perhaps the simplest mechanism that simultaneously affects both memory and innovation is the mutation rate. The lower the mutation rate, the greater the number of genetic strategies ‘‘remembered’’ from parents. At the same time, the higher the mutation rate, the greater the number of ‘‘innovative’’ genetic strategies introduced with children. Successful adaptability requires that these competing demands for memory and innovation be suitably balanced. Too much mutation (not enough memory) will continually flood the population with new random strategies; too little mutation (not enough innovation) will tend to freeze the population at arbitrary strategies. Successful evolutionary adaptation requires a mutation rate suitably intermediate between these extremes. Furthermore, a suitably balanced mutation rate might not remain fixed, for the balance point could shift as the context of evolution changes.
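The trade-off has a simple quantitative core. If each of L genes mutates independently with probability µ, a child is an exact copy of its parent with probability (1 - µ)^L, so small changes in µ swing a genome of modest length from near-perfect memory to near-total innovation. (The genome length of 50 below is an arbitrary illustration, not a parameter of Packard's model.)

```python
# Probability that a child's genome is an exact copy of its parent's,
# given genome length L and per-gene mutation rate mu: (1 - mu) ** L.
def copy_fidelity(mu, L=50):
    return (1 - mu) ** L

for mu in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"mu = {mu:g}: exact-copy probability = {copy_fidelity(mu):.3f}")
# mu = 0.0001: exact-copy probability = 0.995
# mu = 0.001:  exact-copy probability = 0.951
# mu = 0.01:   exact-copy probability = 0.605
# mu = 0.1:    exact-copy probability = 0.005
```

Two orders of magnitude in µ take fidelity from 95% to half a percent, which is why the balance point sits in such a narrow band of mutation rates.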


One would think, then, that any evolutionary process that could continually support evolving life must have the capacity to adapt automatically to this shifting balance of memory and innovation. So, in the context of Packard's model, it is natural to ask whether the mutation rate that governs first-order evolution could adapt appropriately by means of a second-order process of evolution. If the mutation rate can adapt in this way, then this model would yield a simple form of the evolution of evolvability and, thus, might illuminate one of life's fundamental prerequisites.

Previous work (Bedau & Bahm, 1994) with fixed mutation rates in Packard's model revealed two robust effects. The first effect was that the mutation rate governs a phase transition between genetically ‘‘ordered’’ and genetically ‘‘disordered’’ systems. When the mutation rate is too far below the phase transition, the whole gene pool tends to remain ‘‘frozen’’ at a given strategy; when the mutation rate is significantly above the phase transition, the gene pool tends to be a continually changing plethora of randomly related strategies. The phase transition itself occurs over a critical band in the spectrum of mutation rates, µ, roughly in the range 10^-3 ≤ µ ≤ 10^-2. The second effect was that evolution produces maximal population fitness when mutation rates are around values just below this transition. Apparently, evolutionary adaptation happens best when the gene pool tends to be ordered but just on the verge of becoming disordered.

In the light of our earlier suppositions about balancing the demands for memory and innovation, the two fixed-mutation-rate effects suggest the balance hypothesis: that mutation rates around the critical transition between genetic order and disorder optimally balance the competing evolutionary demands for memory and innovation. We can shed some light on the balance hypothesis by modifying Packard's model so that each agent has an additional gene encoding its personal mutation rate. In this case, two kinds of mutation play a role when an agent reproduces: (i) the child inherits its parent's sensorimotor genes, which mutate at a rate controlled by the parent's personal (genetically encoded) mutation rate; and (ii) the child inherits its parent's mutation rate gene, which mutates at a rate controlled by a population-wide meta-mutation rate. Thus, first-order (sensorimotor) and second-order (mutation rate) evolution happen simultaneously. So, if the balance hypothesis is right and mutation rates at the critical transition produce optimal conditions for sensorimotor evolution because they optimally balance memory and innovation, then we would expect second-order evolution to drive mutation rates into the critical transition. It turns out that this is exactly what happens.
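The two-tier inheritance scheme in (i) and (ii) can be sketched as a single reproduction step. This is a schematic illustration, not the published model: the genome representation, the meta-mutation rate of 0.05, and the doubling-or-halving perturbation of the mutation rate gene are all my own assumptions.

```python
import random

META_MUTATION = 0.05   # population-wide rate for mutating the mu gene itself

def reproduce(genes, mu, rng):
    """One birth: (i) sensorimotor genes mutate at the parent's personal
    rate mu; (ii) the mu gene mutates at the fixed meta-mutation rate."""
    child_genes = [g if rng.random() > mu else rng.randrange(4) for g in genes]
    child_mu = mu
    if rng.random() < META_MUTATION:
        # Perturb the heritable mutation rate, clamped to a sane range.
        child_mu = min(0.5, max(1e-4, mu * rng.choice((0.5, 2.0))))
    return child_genes, child_mu
```

Note that selection never sees the mu gene directly; it is favored or disfavored only through the quality of the sensorimotor offspring it yields, which is why any systematic drift of mu toward the critical band counts as second-order evolution.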

Figure 1 shows four examples of how the distribution of mutation rates in the population changes over time under different conditions. As a control, distributions (a) and (b) show what happens when the mutation rate genes are allowed to drift randomly: the bulk of the distribution wanders aimlessly. By contrast, distributions (c) and (d) illustrate what happens when natural selection affects the mutation rate genes: the mutation rates drop dramatically. The meta-mutation rate is lower in (a) than in (b) and so, as would be expected, distribution (a) is narrower and changes more slowly. Similarly, the meta-mutation rate is lower in (c) than in (d), which explains why distribution (c) is narrower and drops more slowly.

FIG. 1. Evolutionary dynamics in mutation rate distributions from four simulations of Packard's model of sensorimotor agents. Time is on the X axis (100,000 timesteps) and mutation rate is on the Y axis. The gray scale at a given point (t, m) in this distribution shows the frequency of the mutation rate m in the population at time t. See text.

If we examine many simulations and collect suitable macrostate informa-tion, we notice the pattern predicted by the balance hypothesis: Second-orderevolution tends to drive mutation rates down to the transition from genetic

EMERGENT MODELS OF SUPPLE DYNAMICS 15

disorder to genetic order, increasing population fitness in the process. This pattern is illustrated in Fig. 2, which shows time series data from a typical simulation. The macrostates depicted in Fig. 2 are (from top to bottom): (i) the mutation rate distribution, as in Fig. 1; (ii) a blow up of the mutation rate distribution that allows us to distinguish very small mutation rates (bins decrease in size by a factor of 10, e.g., the top bin shows mutation rates between 10⁰ and 10⁻¹, the next bin down shows mutation rates between 10⁻¹ and 10⁻², etc.); (iii) the mean mutation rate (note the log scale); (iv) the uningested resources in the environment; (v) three aspects of the genetic diversity in the population’s sensorimotor strategies; and (vi) the population level.

The composite picture provided by Fig. 2 can be crudely divided into three epochs: an initial period of (relatively) high mutation rates, during the time period 0–20,000; a transitional period of falling mutation rates, during the time period 20,000–40,000; and a final period of relatively low mutation rates, throughout the rest of the simulation. The top three time series are different perspectives on the falling mutation rates, showing that the mutation rates adapt downward until they cluster around the critical transition region, 10⁻³ ≤ µ ≤ 10⁻². Since resources flow into the model at a constant rate and since survival and reproduction consume resources, the uningested resource inversely reflects the population fitness. We see that the population becomes more fit (i.e., more efficiently gathers resources) at the same time as the mutation rates drop. Although this is not the occasion to review the different ways to measure the diversity of the sensorimotor strategies in the population, we can easily recognize that there is a significant qualitative difference between the diversity dynamics in the initial and final epochs. In fact, these qualitative differences are characteristic of precisely the difference between a disordered gene pool of randomly related strategies and a gene pool that is at or slightly below the transition between genetic order and disorder (see Bedau & Bahm, 1994; Bedau, 1995).

If the balance hypothesis is the correct explanation of this second-order evolution of mutation rates into the critical transition, then we should be able to change the mean mutation rate by dramatically changing where memory and innovation are balanced. In fact, the mutation rate does rise and fall along with the demands for evolutionary innovation. For example, when we randomize the values of all the sensorimotor genes in the entire population so that every agent immediately ‘‘forgets’’ all the genetically stored information learned by its genetic lineage over its entire evolutionary history, the population must restart its evolutionary learning job from scratch. It has no immediate need for memory (the gene pool contains no information of proven value); instead, the need for innovation is paramount. Under these conditions, we regularly observe the striking changes illustrated around timestep 333,333 in Fig. 3. The initial segment (timesteps 0–100,000) in Fig. 3 shows a mutation distribution evolving into the critical mutation region, just as in


FIG. 2. Time series data from a simulation of Packard’s model of sensorimotor agents, showing how the population’s resource gathering efficiency increases when the mutation rates evolve downward far enough to change the qualitative character of the population’s genetic diversity. From top to bottom, the data are: (i) the mutation rate distribution; (ii) a blow up of very small mutation rates; (iii) the mean mutation rate (note the log scale); (iv) the uningested resource in the environment; (v) three aspects of the diversity of the sensorimotor strategies in the population; (vi) the population level. See text.


FIG. 3. Time series data from a simulation of Packard’s model of sensorimotor agents. From top to bottom, the data are: (i) a blow up of very small mutation rates in the mutation rate distribution; (ii) mean mutation rate (note the log scale); (iii) the level of uningested resources in the world; (iv) population level. At timestep 333,333 all sensorimotor genes of all living organisms were randomly scrambled. See text.

Fig. 2 (but note that the time scale in Fig. 3 is compressed by a factor of five). However, at timestep 333,333 an external ‘‘act of God’’ randomly scrambles all sensorimotor genes of all living organisms. At just this point we can note the following sequence of events: (a) the residual resource in the environment sharply rises, showing that the population has become much less fit; (b) immediately after the fitness drop the mean mutation rate dramatically rises as the mutation rate distribution shifts upward; (c) by the time that the mean mutation rate has risen to its highest point the population’s fitness has substantially improved; (d) the fitness levels and mutation rates eventually return to their previous equilibrium levels.
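In code, the ‘‘act of God’’ perturbation is a one-shot operation: every living agent’s sensorimotor genes are replaced with fresh random values, while the genetically encoded mutation rates are left untouched. A minimal sketch, using an assumed dict-based genome representation rather than Packard’s actual data structures:

```python
import random

def scramble_sensorimotor_genes(population, rng=random):
    """Randomly rescramble every agent's sensorimotor genes in place,
    erasing all genetically stored information of proven value while
    preserving each agent's (genetically encoded) mutation rate."""
    for agent in population:
        agent["sensorimotor"] = [rng.randint(0, 1) for _ in agent["sensorimotor"]]

# Inside the main simulation loop, the perturbation would fire once:
#     if timestep == 333_333:
#         scramble_sensorimotor_genes(population, rng)
```

After this point the gene pool carries no information worth remembering, so on the balance hypothesis selection should temporarily favor higher mutation rates, which is what Fig. 3 shows.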

All of these simulations show the dynamics of the mutation rate distribution adjusting up and down as the balance hypothesis would predict. Temporarily perturbing the context for evolution can increase the need for rapid exploration of a wide variety of sensorimotor strategies and thus dramatically shift the balance toward the need for innovation. Then, subsequent sensorimotor evolution can reshape the context for evolution in such a way that the balance shifts back toward the need for memory. This all suggests that, ceteris paribus, mutation rates adapt so as to balance appropriately the competing evolutionary demands for memory and innovation, and that, ceteris paribus, this balance point is at the genetic transition from order to disorder. An indefinite variety of environmental contingencies can shift the point at which the evolutionary need for memory and innovation are balanced, and the perturbation experiments show how mutation rates can adapt up or down as appropriate. This supple flexibility in the dynamics of the evolution of evolvability is the deep reason why the principle that, on the whole, mutation rates adapt as appropriate will resist any precise and exceptionless formulation.

This sort of supple adaptability in Packard’s model can be counted among the hallmarks of life in general (Maynard Smith, 1975; Cairns-Smith, 1985; Bedau, 1996a). And, clearly, these evolutionary dynamics are emergent. The model’s macro-level dynamic is wholly constituted and generated by its micro-level phenomena, but the micro-level phenomena involve such a kaleidoscopic array of nonadditive interactions that the macro-level dynamics cannot be derived from micro-level information except by means of simulations, like those shown above. In a similar fashion, many other characteristic features of living systems can be captured as emergent phenomena in artificial life models; see, e.g., Farmer, Lapedes, Packard, & Wendroff (1986), Langton (1989b), Langton, Taylor, Farmer, & Rasmussen (1992), Varela & Bourgine (1992), and Brooks & Maes (1994). In every case, supple macro-level dynamics emerge from, and are explained by, an explicit micro-level dynamics in which a parallel, distributed network of communicating agents make decisions about how to behave in their local environment based on selective information from their local environment. This growing empirical evidence continually reinforces the conclusion that the models’ emergent architecture is responsible for the supple dynamics. An open field of empirical investigation in artificial life is to pin down more precisely exactly which features of emergent models are responsible for which aspects of supple emergent dynamics.

4. EMERGENT MODELS OF THE MIND’S SUPPLE DYNAMICS

The readily observable regularities and patterns in our mental lives have been termed ‘‘folk psychology’’ (e.g., Churchland, 1981). It has long been known that the global regularities of folk psychology must be qualified by ceteris paribus clauses. Consider two typical principles, adapted from Horgan and Tienson (1989), which, even though they are extremely simplified, can illustrate this phenomenon.

Means-ends reasoning. If X wants goal G and X believes that X can get G by performing action A, then ceteris paribus X will do A. For example, if X wants a beer and believes that there is one in the kitchen, then X will go get one—unless, as the ceteris paribus clause signals, X does not want to miss any of the conversation, or


X does not want to offend the speaker by leaving in midsentence, or X does not want to drink beer in front of his mother-in-law, or X thinks he should, instead, flee the house since it is on fire, etc.

Belief extension (modus ponens). If X believes P and X believes P entails Q, then ceteris paribus X will come to believe Q. However, people sometimes fail to infer what is implied by their antecedent beliefs, for a variety of reasons. Lack of attention or illogic is sometimes at work. However, some exceptions to the psychological principle of modus ponens reflect attentive logical acumen at its best. For example, if X has antecedent reason to doubt that Q is true, X might conclude that it is more reasonable to question P or to question that P entails Q.

The ceteris paribus clauses signal that these patterns in our mental lives have exceptions. This open-ended range of exceptions is ubiquitous in the patterns of our mind. Indefinitely many more examples like those two above can be generated. Further, this open-ended texture seems ineliminable; it apparently cannot be captured by any long but finite list of exceptions.

The exceptions to the principles of folk psychology come from different sources. Some signal malfunctions in the underlying material and processes that implement the mental processes (e.g., Fodor, 1981). Others result from the indeterminate results of competition among a potentially open-ended range of conflicting desires (e.g., Horgan & Tienson, 1989, 1990). But certain exceptions reflect our ability to act appropriately in the face of an open-ended range of contextual contingencies. These are exceptions that prove the rule. The ability to figure out how to act appropriately in context is an important part of the power of our mind; it is the very essence of intelligence (Beer, 1990; Varela, Thompson, & Rosch, 1991; Parisi, Nolfi, & Cecconi, 1992; Cliff, Harvey, & Husbands, 1993; Steels, 1994). Since life can call for us to cope with an open-ended range of novel challenges, it should be no surprise if the dynamical patterns of mind resist precise and exceptionless formulation. Our mental dynamics, thus, exhibits a form of suppleness quite like what we observed in flocking and evolution.

Since the suppleness of mental dynamics is crucially involved in the very intelligence of mental capacities, any adequate account of the mind must include an account of its suppleness. A good account of the suppleness of a mental capacity must be precise, accurate (no false positives), complete (no false negatives), principled, and feasible. The virtues of precision, accuracy, and completeness are obvious enough. A principled account would indicate what unifies the various instances of the supple capacity. And feasibility is important so that we can test empirically whether the account is accurate and complete.

Although quite familiar, the suppleness of mental dynamics is difficult to describe and explain. The familiar formulations of the principles of mind employ ceteris paribus clauses, as in the two illustrations above (or they use equally vague clauses like ‘‘as appropriate’’). However, such vaguely formulated principles give no indication of when ceteris is not paribus or when deviation from the norm is appropriate. Since these vague principles


obscure both which contexts trigger exceptions and what form the exceptions take, they are ineliminably imprecise, and thus they cannot be accurate, complete, principled, or feasible.

An alternative strategy for accounting for the suppleness of mental processes is, in effect, to predigest the circumstances that give rise to exceptions and then specify (either explicitly or through heuristics) how to cope with them in a manner that is precise enough to be expressed as an algorithm. In this spirit, so-called ‘‘expert systems’’ precisely encode the details of principles and their exceptions in a knowledge base generated through consultation with the relevant experts. This strategy yields models of mental capacities that are explicit and precise enough to be implemented as a computer model, so the strategy has the virtue of feasibility; the dynamic behavior of the model can be directly observed and tested for plausibility. The problem is that, although these expert systems sometimes work well in precisely circumscribed domains, they have systematically failed to produce the kind of supple behavior that is characteristic of intelligent response to an open-ended variety of circumstances. Their behavior is brittle; it lacks the context sensitivity that is distinctive of intelligence. This problem is not merely a limitation of present implementations; attempts to improve matters by amplifying the knowledge base only generate combinatorial explosion. The nature and central role of suppleness in our mental capacities helps explain why the so-called ‘‘frame problem’’ of artificial intelligence is so important and so difficult to solve. (See, e.g., Dreyfus, 1979; Hofstadter, 1985; Holland, 1986; Langton, 1989a; Horgan & Tienson, 1989, 1990; Chalmers, French, & Hofstadter, 1992.) Although precise and feasible and perhaps principled, the expert-systems accounts of supple mental dynamics have always proved to be inaccurate and incomplete.
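A caricature in code makes the structural point vivid. The rule and its exception list below are invented for illustration; the difficulty is that the designer must anticipate every exception in advance, and the list never closes:

```python
def will_get_beer(state):
    """Hand-coded means-ends rule in the expert-system style.

    `state` maps boolean facts about the situation to True/False.
    The open-ended ceteris paribus clause is frozen into a finite,
    hand-enumerated exception list; any unanticipated circumstance
    is silently ignored, so the rule's behavior is brittle.
    """
    if not (state.get("wants_beer") and state.get("believes_beer_in_kitchen")):
        return False
    exceptions = [
        "fears_missing_conversation",
        "speaker_in_midsentence",
        "mother_in_law_present",
        "house_on_fire",
    ]
    return not any(state.get(e) for e in exceptions)
```

The rule fires correctly on the anticipated cases, but a novel circumstance (say, an earthquake) is not in the list, so the rule still predicts a trip to the kitchen; piling on further exceptions only feeds the combinatorial explosion just noted.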

A third strategy for accounting for supple mental dynamics is to devise emergent models analogous to the emergent artificial life models of flocking and evolution. After all, one of the hallmarks of emergent artificial life models is their strikingly good accounts of the supple dynamics found throughout living systems. An emergent model of the mind would construe supple mental dynamics as the emergent macro-level effect of an explicit local dynamic in a population of micro-level entities. The members of the micro-level population would in some way compete for influence in a context-dependent manner, and thus would create some sort of adaptive macro-level dynamic. If all went well, this macro-level dynamic would correspond well with the familiar supple dynamics of mental life.
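The flavor of this architecture can be suggested with a deliberately toy example, far simpler than anything that could model a mind; every name and parameter here is invented for illustration. Each micro-level agent on a ring adjusts its heading toward its neighbors; no rule mentions any population-wide pattern, yet a global regularity appears that can only be read off by running the dynamics:

```python
import random

def step(headings, noise, rng):
    """One parallel update: each agent adopts the average of its own
    heading and its two neighbors' headings, plus a small random
    perturbation. This is the explicit, purely local micro-level dynamic."""
    n = len(headings)
    return [
        (headings[(i - 1) % n] + headings[i] + headings[(i + 1) % n]) / 3
        + rng.uniform(-noise, noise)
        for i in range(n)
    ]

def spread(headings):
    """A macro-level observable: how dispersed the headings are."""
    return max(headings) - min(headings)
```

Starting from random headings, repeated steps shrink the spread; the macro-level alignment is wholly constituted by the micro-level updates and is visible only by simulating them.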

These remarks give no detailed account of the sort of emergent model that I have in mind, of course. Let there be no false advertising! My remarks at best just begin to suggest what an emergent model of supple mental dynamics would be like. Emergent models have some similarity with some existing models, such as those of Hofstadter and his students (Hofstadter, 1985; Mitchell, 1993; French, 1995), classifier systems (Holland, 1986), and


connectionist (neural network, parallel distributed processing) models (Rumelhart & McClelland, 1986; Anderson & Rosenfeld, 1988). Delineating the relevant similarities and differences must be left for another occasion. Still, briefly contrasting emergent models with the widely known connectionist models can highlight what I consider to be the important features of emergent models.

Emergent models of mental phenomena and connectionist models have some striking similarities. First, both tend to produce fluid macro-level dynamics as the implicit emergent effect of micro-level architecture. In addition, both employ the architecture of a parallel population of autonomous agents following simple local rules. For one thing, the agents in an emergent model bear some analogy to the units in a connectionist net. Furthermore, the agents in many artificial life models are themselves controlled by internal connectionist nets (e.g., Todd & Miller, 1991; Ackley & Littman, 1992; Belew, McInerney, & Schraudolph, 1992; Cliff, Harvey, & Husbands, 1993; Parisi et al., 1992; Werner & Dyer, 1992). In addition, for decades connectionism has explored recurrent architectures and unsupervised adaptive learning algorithms, both of which are echoed in a general manner in much artificial life modeling.

There are important differences between typical artificial life models and many of the connectionist models that have attracted the most attention, such as feed-forward networks which learn by the back-propagation algorithm. First, the micro-level architecture of artificial life models is much more general, not necessarily involving multiple layers of nodes with weighted connections adjusted by learning algorithms. Second, emergent models employ forms of learning and adaptation that are more general than supervised learning algorithms like backpropagation. This allows artificial life models to side-step certain common criticisms of connectionism, such as the unnaturalness of the distinction between training and application phases and the unnatural appeal to an omniscient teacher. Third, typical connectionist models passively receive prepackaged sensory information produced by a human designer. In addition, they typically produce output representations that have meaning only when properly interpreted by the human designer. The sort of emergent models characteristic of artificial life, by contrast, remove the human from the sensorimotor loop. A micro-level agent’s sensory input comes directly from the environment in which the agent lives, the agent’s output causes actions in that same environment, and those actions have an intrinsic meaning for the agent (e.g., its bearing on the agent’s survival) in the context of its life. Through their actions, the agents play an active role in controlling their own sensory input and reconstructing their own environment (Bedau, 1994, 1996b). Finally, the concern in the bulk of existing connectionist modeling is with equilibrium behavior that settles onto stable attractors. By contrast, partly because the micro-level entities are typically always reconstructing the environment to which they are adapting, the behavior of the


emergent models I have in mind would be characterized by a continual, open-ended evolutionary dynamic that never settles onto an attractor in any interesting sense.

Neuroscientists sometimes claim that macro-level mental phenomena cannot be understood without seeing them as emerging from micro-level activity. Churchland & Sejnowski (1992), for example, argue that the brain’s complexity forces us to study macro-level mental phenomena by means of manipulating micro-level brain activity. However, on this picture manipulating the mind’s underlying micro-level activity is merely a temporary practical expedient, a means for coming to grasp the mind’s macro-level dynamics. Once the micro-level tool has illuminated the macro-level patterns, it has outlived its usefulness and can be abandoned. No permanent, intrinsic connection binds our understanding of micro and macro. By contrast, my point of view is that the mind’s macro-level dynamics can be adequately described or explained only by making essential reference to the micro-level activity from which it emerges. The microdynamical model in a sense is a complete and compact description and explanation of the macro-level dynamics. Since these global patterns are supple, they inevitably have exceptions, and those exceptions (some of them) prove the rule in the sense that they reveal the global pattern’s central and underlying nature. Thus, to get a precise and detailed description of the macro-level patterns, there is no alternative to simulating the microdynamical model. In this way, the microdynamical model is ineliminably bound to our understanding of the emergent macro-level dynamics.

Since the ultimate plausibility of the emergent approach to mental dynamics depends, as one might say, on ‘‘putting your model where your mouth is,’’ one might in fairness demand that proponents of this approach start building models. However, producing models is not easy; a host of difficult issues must be faced. First, what is the micro-level population from which mental phenomena emerge, and what explicit micro-level dynamics govern it? Second, assuming we have settled on a micro-level population and dynamics, how can we identify the macro-level dynamics of interest? Emergent models generate copious quantities of micro-level information, and this saddles us with a formidable data-reduction problem. Where should we make a thin slice in these data? Finally, assuming we have a satisfactory solution to the data reduction problem, how can we recognize and interpret any patterns that might appear? We must distinguish real patterns from mere artifacts. The patterns will not come prelabeled; how are we to recognize any supple mental dynamics that might emerge?
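For Packard’s model, one concrete form the data-reduction problem takes is compressing the per-agent micro-data at each timestep into a macrostate such as the log-binned mutation-rate distribution of Figs. 1–3. A sketch of that reduction, with the binning scheme assumed from the figure descriptions and the function details invented for illustration:

```python
import math

def log_binned_distribution(mutation_rates, num_bins=6):
    """Reduce per-agent mutation rates to a macrostate: counts in bins
    that shrink by factors of 10. Bin b holds rates in [10^-(b+1), 10^-b);
    rates of 1 or more are clamped into the top bin, and rates below the
    smallest bin (or nonpositive rates) fall into the bottom bin."""
    counts = [0] * num_bins
    for m in mutation_rates:
        if m <= 0:
            counts[num_bins - 1] += 1
            continue
        b = max(0, min(num_bins - 1, -1 - math.floor(math.log10(m))))
        counts[b] += 1
    return counts
```

Tracking this vector over time yields the kind of gray-scale distribution plotted in Fig. 1: one thin, interpretable slice through the model’s copious micro-level data.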

The foregoing difficulties are worth confronting, though, for an emergent account of supple mental dynamics would have important virtues. The account would be precise, just as precise as the emergent model itself. Furthermore, the model would produce an increasingly complete description of a precise set of mental dynamics as it was simulated more and more. Though


the model might never completely fill out all the details of the supple pattern, additional simulation could generate as much detail as desired. In addition, the model’s account of the supple dynamics would be principled, since one and the same model would generate the supple pattern along with all the exceptions that prove the rule. As for accuracy, this could be discerned only through extensive empirical study (simulation). Still, the evident accuracy of the emergent models of supple flocking and supple evolution can give us some confidence that emergent models of supple mental dynamics are a promising avenue to explore.

5. EMERGENT FUNCTIONALISM ABOUT THE MIND

The mind’s supple dynamics has implications for some current philosophical controversies. I will focus in particular on some implications for contemporary functionalism in the philosophy of mind (Putnam, 1975; Fodor, 1981). Much contemporary debate over functionalism focuses on a certain collection of problems, such as whether functionalism can account for the consciousness of mental beings and the intentionality of their mental states, but I will not engage those debates here; see Block (1980) and Lycan (1990) for a representative range of reading about functionalism. My concern is only with the consequences for functionalism of the suppleness of mental dynamics.

Contemporary philosophical functionalism must be sharply distinguished from the traditional functionalism in psychology advocated by James and Dewey in the 1890s, which served as a precursor of the behaviorism of Watson in the 1920s. Contemporary functionalism grew up in the 1970s as a result of the problems with its two predecessors: behaviorism and the mind–brain identity theory. The lesson from the critics of behaviorism was that mental systems have internal mental states. In particular, a mental system’s action in a given environment is affected by a complex web of interactions among its internal (mental) states and its sensory stimuli. The lesson from the critics of the mind–brain identity theory was that a mental system’s internal states can be instantiated in an open-ended number of different kinds of physical states. No (first-order) physical similarity unifies all the possible physical instances of a given mental state. To meet these objections, early functionalists proposed that we view the mind as analogous with software. An indefinite number of different states in different kinds of hardware could realize a given software state; by analogy, an indefinite range of physical devices or processes could embody a mind. Functionalism’s slogan, then, could be ‘‘mind as software.’’ Just as computation theory studies classes of automata independently of their hardware implementation, functionalism’s guiding idea is that we can study the dynamics of mental states in abstraction from their implementation in the brain.

Functionalists view mental beings as a certain kind of input–output device


and hold that having a mind is no more and no less than having a set of internal states that causally interact (or function) with respect to each other and with respect to environmental inputs and behavioral outputs in a certain characteristic way; a mental system is any system whatsoever that is governed by a set of internal states that exhibit a dynamical behavior that is functionally isomorphic with the dynamic characteristic of human mental states. If the mental system is a physical entity, its characteristic internal states will always be realized in some kind of physical entity, but it makes no difference what kind of physical entity instantiates the functionally defined dynamical patterns. Human mental states happen to be embodied in patterns of neuronal activity, but if exactly the same dynamical patterns were found in a system composed of quite different materials—such as silicon circuitry—then, according to functionalism, that system would literally have a mind. So, functionalism’s central tenet is that mind is defined by form rather than matter; to have a mind is to embody a distinctive dynamical pattern, not to be composed out of a distinctive sort of substance.

As we saw in the preceding section, the dynamical patterns of the mind are characteristically supple. If functionalism is correct, certain supple patterns define what it is to have a mind. But how can functionalism specify which mental patterns define the mind? One approach to answering this question would simply employ our common-sense understanding of characteristic mental dynamics (e.g., Lewis, 1972). On this common-sense approach to functionalism, the mind is defined by a set of patterns that themselves are characterized with ceteris paribus clauses (or their equivalent). The inherent imprecision of these ceteris paribus clauses, then, carries over to common-sense functionalism itself. In other words, common-sense functionalism asserts something like this: ‘‘The pattern of states definitive of minds has roughly such-and-such form, but there are an indefinite number of exceptions to this pattern and we must be content to remain ignorant about when and why exceptions happen and what form they can take.’’ Even if true, this is a disappointingly imprecise assertion about what minds are.

Cognitive science and artificial intelligence do provide precise accounts of the mind. By delineating the exceptional cases precisely enough (perhaps through use of heuristics), the dynamical patterns somewhat like those in the mind can be directly represented in operating software. This strategy interprets the functionalist slogan ‘‘mind as software’’ literally and attempts to express precisely the supple dynamics of mind directly in an algorithm. If such an algorithm exists, then, according to the central thesis of functionalism, any implementation of the algorithm would literally have a mind. However, as we noted above, there has been a persistent pattern of failed attempts in artificial intelligence to capture the supple adaptability characteristic of mental dynamics. This history suggests the lesson that we cannot directly represent the supple patterns characteristic of the mind as algorithms—mind is not software—and thus artificial-intelligence functionalism is unsound.


We should not conclude, however, that functionalism has no precise and true formulation. I suggested in the previous section that an emergent model can account for supple mental dynamics. If so, then this emergent model can provide an indirect, emergent description of those dynamical patterns that functionalism uses to define the mind. This emergent functionalism would then inherit the virtues possessed by the emergent model on which it was based. Unlike common-sense functionalism, emergent functionalism can be quite precise about what a mind is. For one thing, emergent functionalism is relative to a specific microdynamical model of supple mental dynamics, and any given microdynamical model is a completely precise object. More to the point, the microdynamical model is an implicit but perfectly precise encapsulation of an exact macro-level dynamics in all its suppleness. Since the model is emergent, the full details of the description would be revealed only through observing simulations of the model, but repeated simulation could make the description as complete as desired. Emergent functionalism could adopt the slogan ‘‘mind as supple emergent dynamics.’’

One might worry that emergent functionalism amounts to nothing more than the trivial functionalist claim that mental phenomena must have some material embodiment. Or one might worry that emergent functionalism contravenes the functionalist’s guiding principle, that the mind’s definitive features abstract away from implementational details as much as possible. Both these worries are unfounded. On the one hand, emergent functionalism is not architecture independent; the central tenet of the view is that the mind’s supple adaptive dynamics essentially requires a certain kind of emergent architecture. Hence, the emergent functionalist view is no mere reiteration of the mind’s material embodiment. On the other hand, emergent functionalism admits the multiple realizability of the population of micro-level processes that underlie mental dynamics and thereby admits the multiple realizability of the macro-level mental dynamics that emerge from them. So emergent functionalism views the mind maximally abstractly, as the functionalist characteristically does. However, emergent functionalism is careful not to abstract away the crucial emergent architecture that apparently accounts for the mind’s supple dynamics.

REFERENCES

Ackley, D., & Littman, M. 1992. Interactions between evolution and learning. In C. Langton, C. Taylor, D. Farmer, & S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison–Wesley.

Anderson, J. A., & Rosenfeld, E. Eds. 1988. Neurocomputing: Foundations of research. Cambridge, MA: Bradford Books/MIT Press.

Bedau, M. A. 1992. Philosophical aspects of artificial life. In F. Varela & P. Bourgine (Eds.), Towards a practice of autonomous systems. Cambridge, MA: Bradford Books/MIT Press.

Bedau, M. A. 1994. The evolution of sensorimotor functionality. In P. Gaussier & J.-D. Nicoud (Eds.), From perception to action. Los Alamitos, CA: IEEE Computer Society Press.

Bedau, M. A. 1995. Three illustrations of artificial life’s working hypothesis. In W. Banzhaf & F. Eeckman (Eds.), Evolution and biocomputation—Computational models of evolution. Berlin: Springer.

Bedau, M. A. 1997. Weak emergence. In J. Tomberlin (Ed.), Philosophical perspectives: Mind, causation, and world, Vol. 11. New York: Blackwell.

Bedau, M. A. 1996a. The nature of life. In M. Boden (Ed.), The philosophy of artificial life. New York: Oxford University Press.

Bedau, M. A. 1996b. The extent to which organisms construct their environments. Adaptive Behavior, 4, 476–482.

Bedau, M. A., & Bahm, A. 1994. Bifurcation structure in diversity dynamics. In R. Brooks & P. Maes (Eds.), Artificial life IV. Cambridge, MA: Bradford Books/MIT Press.

Bedau, M. A., Giger, M., & Zwick, M. 1995. Adaptive diversity dynamics in static resource models. Advances in Systems Science and Applications, 1, 1–6.

Bedau, M. A., & Packard, N. 1992. Measurement of evolutionary activity, teleology, and life. In C. Langton, C. Taylor, D. Farmer, & S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison–Wesley.

Bedau, M. A., Ronneburg, F., & Zwick, M. 1992. Dynamics of diversity in a simple model of evolution. In R. Männer & B. Manderick (Eds.), Parallel problem solving from nature 2. Amsterdam: Elsevier.

Bedau, M. A., & Seymour, R. 1994. Adaptation of mutation rates in a simple model of evolution. In R. Stonier & X. H. Yu (Eds.), Complex systems—Mechanisms of adaptation. Amsterdam: IOS Press.

Beer, R. D. 1990. Intelligence as adaptive behavior: An experiment in computational neuroethology. Boston: Academic Press.

Belew, R. K., McInerney, J., & Schraudolph, N. N. 1992. Evolving networks: Using the genetic algorithm with connectionist learning. In C. Langton, C. Taylor, D. Farmer, & S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison–Wesley.

Block, N. Ed. 1980. Readings in philosophy of psychology, Vol. 1. Cambridge, MA: Harvard University Press.

Brooks, R., & Maes, P. Eds. 1994. Artificial life IV. Cambridge, MA: Bradford Books/MIT Press.

Cairns-Smith, A. G. 1985. Seven clues to the origin of life. Cambridge, England: Cambridge University Press.

Chalmers, D. J., French, R. M., & Hofstadter, D. R. 1992. High-level perception, representation, and analogy. Journal of Experimental and Theoretical Artificial Intelligence, 4, 185–211.

Churchland, P. M. 1981. Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78, 67–90.

Churchland, P. S., & Sejnowski, T. J. 1992. The computational brain. Cambridge, MA: Bradford Books/MIT Press.

Cliff, D., Harvey, I., & Husbands, P. 1993. Explorations in evolutionary robotics. Adaptive Behavior, 2, 73–110.

Dreyfus, H. 1979. What computers cannot do (2nd ed.). New York: Harper and Row.

Farmer, J. D., Lapedes, A., Packard, N., & Wendroff, B. Eds. 1986. Evolution, games, and learning: Models for adaptation in machines and nature. Amsterdam: North Holland.

Fodor, J. A. 1981. Special sciences. In J. A. Fodor (Ed.), Representations. Cambridge, MA: Bradford Books/MIT Press.

French, R. M. 1995. The subtlety of sameness: A theory and computer model of analogy-making. Cambridge, MA: Bradford Books/MIT Press.

Hofstadter, D. R. 1985. Waking up from the boolean dream, or, subcognition as computation. In D. R. Hofstadter (Ed.), Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books.

Holland, J. H. 1986. Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning II. Los Altos, CA: Morgan Kaufmann.


Horgan, T., & Tienson, J. 1989. Representation without rules. Philosophical Topics, 17, 147–174.

Horgan, T., & Tienson, J. 1990. Soft laws. Midwest Studies in Philosophy, 15, 256–279.

Langton, C. 1989a. Artificial life. In C. Langton (Ed.), Artificial life. Reading, MA: Addison–Wesley.

Langton, C. Ed. 1989b. Artificial life. Reading, MA: Addison–Wesley.

Langton, C., Taylor, C. E., Farmer, J. D., & Rasmussen, S. Eds. 1992. Artificial life II. Reading, MA: Addison–Wesley.

Lewis, D. 1972. Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50, 249–258.

Lycan, W. G. Ed. 1990. Mind and cognition: A reader. Cambridge, MA: Basil Blackwell.

Maynard Smith, J. 1975. The theory of evolution (3rd ed.). New York: Penguin.

Mitchell, M. 1993. Analogy-making as perception. Cambridge, MA: Bradford Books/MIT Press.

Packard, N. 1989. Intrinsic adaptation in a simple model of evolution. In C. Langton (Ed.), Artificial life. Reading, MA: Addison–Wesley.

Parisi, D., Nolfi, S., & Cecconi, F. 1992. Learning, behavior, and evolution. In F. Varela & P. Bourgine (Eds.), Towards a practice of autonomous systems. Cambridge, MA: Bradford Books/MIT Press.

Putnam, H. 1975. The nature of mental states. In H. Putnam (Ed.), Mind, language, and reality. Cambridge, England: Cambridge University Press.

Reynolds, C. W. 1987. Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21, 25–34.

Reynolds, C. W. 1992. Boids demos. In C. Langton (Ed.), Artificial life II video proceedings. Reading, MA: Addison–Wesley.

Rumelhart, D. E., & McClelland, J. L. 1986. Parallel distributed processing: Explorations in the microstructure of cognition, 2 vols. Cambridge, MA: Bradford Books/MIT Press.

Steels, L. 1994. The artificial life roots of artificial intelligence. Artificial Life, 1, 75–110.

Todd, P. M., & Miller, G. F. 1991. Exploring adaptive agency II: Simulating the evolution of associative learning. In J.-A. Meyer & S. W. Wilson (Eds.), From animals to animats: Proceedings of the first international conference on the simulation of adaptive behavior. Cambridge, MA: Bradford Books/MIT Press.

Varela, F., & Bourgine, P. Eds. 1992. Towards a practice of autonomous systems. Cambridge, MA: Bradford Books/MIT Press.

Varela, F. J., Thompson, E., & Rosch, E. 1991. The embodied mind. Cambridge, MA: Bradford Books/MIT Press.

Werner, G. M., & Dyer, M. G. 1992. Evolution of communication in artificial organisms. In C. Langton, C. Taylor, D. Farmer, & S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison–Wesley.