

AMERICAN PHILOSOPHICAL QUARTERLY

Volume 41, Number 3, July 2004


ON BEING SIMPLE MINDED

Peter Carruthers

The question ‘Do fishes think?’ does not exist among our applications of language, it is not raised.—Wittgenstein

1. ON HAVING A MIND

How simple minded can you be? Many philosophers would answer: no more simple than a language-using human being. Many other philosophers, and most cognitive scientists, would allow that mammals, and perhaps birds, possess minds. But few have gone to the extreme of believing that very simple organisms, such as insects, can be genuinely minded.1 This is the ground that the present paper proposes to occupy and defend. It will argue that ants and bees, in particular, possess minds. So it will be claiming that minds can be very simple indeed.

What does it take to be a minded organism? Davidson (1975) says: you need to be an interpreter of the speech and behavior of another minded organism. Only creatures that speak, and that both interpret and are subject to interpretation, count as genuinely thinking anything at all. McDowell (1994) says: you need to exist in a space of reasons. Only creatures capable of appreciating the normative force of a reason for belief, or a reason for action, can count as possessing beliefs or engaging in intentional action. And Searle (1992) says: you need consciousness. Only creatures that have conscious beliefs and conscious desires can count as having beliefs or desires at all.

Such views seem to the present author to be ill-motivated. Granted, humans speak and interpret the speech of others. And granted, humans weigh up and evaluate reasons for belief and for action. And granted, too, humans engage in forms of thinking that have all of the hallmarks of consciousness. But there are no good reasons for insisting that these features of the human mind are necessary conditions of mindedness as such. Or so, at least, this paper will briefly argue now, and then take for granted as an assumption in what follows.

Common sense has little difficulty with the idea that there can be beliefs and desires that fail to meet these demanding conditions. This suggests, at least, that those conditions are not conceptually necessary ones. Most people feel pretty comfortable in ascribing simple beliefs and desires to non-language-using creatures. They will say, for example, that a particular ape acts as she does because she believes that the mound contains termites and wants to eat them. And our willingness to entertain such thoughts seems unaffected by the extent to which we think that the animal in question can appreciate the normative implications of its own states of belief and desire. Moreover, most of us are now (post-Freud and the cognitive turn in cognitive science) entirely willing to countenance the existence of beliefs and desires that are not conscious ones.

It isn’t only ordinary folk who think that beliefs and desires can exist in the absence of the stringent requirements laid down by some philosophers. Many cognitive scientists and comparative psychologists would agree. (Although sometimes, admittedly, the language of ‘belief’ and ‘desire’ gets omitted in deference to the sensibilities of some philosophical audiences.) There is now a rich and extensive body of literature on the cognitive states and processes of non-human animals (e.g., Walker, 1983; Gallistel, 1990; Gould and Gould, 1994). And this literature is replete with talk of information-bearing conceptualized states that guide planning and action-selection (beliefs), as well as states that set the ends planned for and that motivate action (desires).

True enough, it can often be difficult to say quite what an animal believes or desires. And many of us can, on reflection, rightly be made to feel uncomfortable when using a that-clause constructed out of our human concepts to describe the thoughts of an animal. If we say of an ape that she believes that the mound contains termites, for example, then we can easily be made to feel awkward about so doing. For how likely is it that the ape will have the concept termite? Does the ape distinguish between termites and ants, for instance, while also believing that both kinds belong to the same super-ordinate category (insects)? Does the ape really believe that termites are living things which excrete and reproduce? And so on.

These considerations give rise to an argument against the very possibility of non-linguistic thought, which was initially presented by Davidson (1975). The argument claims, first, that beliefs and desires are content-bearing states whose contents must be expressible in a sentential complement (a that-clause). Then, second, the argument points out that it must always be inappropriate to use sentential complements that embed our concepts when describing the thoughts of an animal (given the absence of linguistic behavior of the appropriate sorts). In which case (putting these two premises together) it follows that animals cannot be said to have thoughts at all.

The error in this argument lies in its assumption that thought contents must be specifiable by means of that-clauses, however. For this amounts to the imposition of a co-thinking constraint on genuine thoughthood. In order for another creature (whether human or animal) to be thinking a particular thought, it would have to be the case that someone else should also be capable of entertaining that very thought, in such a way that it can be formulated into a that-clause. But why should we believe this? For we know that there are many thoughts—e.g., some of Einstein’s thoughts, or some of the thoughts of Chomsky—that we may be incapable of entertaining. And why should we nevertheless think that the real existence of those thoughts is contingent upon the capacity of someone else to co-think them? Perhaps Einstein had some thoughts so sophisticated that there is no one else who is capable of entertaining their content.

The common-sense position is that (in addition to being formulated and co-thought from the inside, through a that-clause) thoughts can equally well be characterized from the outside, by means of an indirect description. In the case of the ape dipping for termites, for example, most of us would, on reflection, say something like this: we do not know how much the ape knows about termites, nor how exactly she conceptualizes them, but we do know that she believes of the termites in that mound that they are there, and we know that she wants to eat them. And on this matter common sense and cognitive science agree. Through careful experimentation scientists can map the boundaries of a creature’s concepts, and can explore the extent of its knowledge of the things with which it deals (Bermúdez, 2003). These discoveries can then be used to provide an external characterization of the creature’s beliefs and goals, even if the concepts in question are so alien to us that we couldn’t co-think them with the creature in the content of a that-clause.

What does it take to be a minded organism, then? We should say instead: you need to possess a certain core cognitive architecture. Having a mind means being a subject of perceptual states, where those states are used to inform a set of belief states which guide behavior, and where the belief states in turn interact with a set of desire states in ways that depend upon their contents, to select from amongst an array of action schemata so as to determine the form of the behavior. This sort of belief/desire architecture is one that humans share. It is represented diagrammatically in Figure 1.

The crucial components of this account are the beliefs and desires. For it is unlikely that possession of perceptual states alone, where those states are used to guide a suite of innate behavioral programs or fixed action schemata, could be sufficient for a creature to count as possessing a mind. Consider the architecture represented in Figure 2. The innate behaviors of many mammals, birds, reptiles, and insects can be explained in such terms. The releasing factors for an innate behavior will often include both bodily states (e.g., pregnancy) and perceptual information, with the perceptual states serving to guide the detailed performance of the behavior in question, too.2 (Consider a butterfly landing neatly on a leaf to lay her eggs.) But engaging in a suite of innately coded action patterns isn’t enough to count as having a mind, even if the detailed performance of those patterns is guided by perceptual information. And nor, surely, is the situation any different if the action patterns are not innate ones, but are, rather, acquired habits, learned through some form of conditioning.

This paper will assume that it isn’t enough for an organism to count as having a mind, either, that the animal in question should be merely interpretable as possessing beliefs and desires. For as Dennett (1987) has taught us, we can adopt what he calls ‘the intentional stance’ in respect of even the most rigidly pre-programmed of behaviors. No, the architecture represented in Figure 1 needs to be construed realistically. There needs to be a real distinction between the belief-states and the desire-states, in virtue of which they possess their distinctive causal roles (guiding and motivating action, respectively). And these states must, in addition, be both discrete, and structured in a way that reflects their semantic contents. And their detailed causal roles, too (the ways in which particular belief states and particular desire states interact), must be sensitive to those structural features.

[Figure 1: The Core Architecture of a Mind. Components: Percept, Body states, Belief, Desire, Practical reason, Action schemata, Motor.]

[Figure 2: An Unminded Behavioral Architecture. Components: Percept, Body states, Action schemata, Motor.]

Dennett (1991) thinks that only language can provide these properties. It is only with the first appearance of the Joycean machine—the stream of inner verbalization that occupies so much of our waking lives—that humans come to have discrete structured semantically-evaluable states, where the interactions of those states are sensitive to their structures. So although Dennett might say that many animals possess simple minds, all he really means is that their behavior is rich enough to make it worth our while to adopt the intentional stance towards them. But he denies that non-human animals possess minds realistically construed, in the way that the present paper proposes to construe them.

To be minded means to be a thinker, then. And that means (it will hereafter be assumed) having distinct belief states and desire states that are discrete, structured, and causally efficacious in virtue of their structural properties. These are demanding conditions on mindedness. The question is: how simple can an organism be while still having states with these features?

2. MEANS/ENDS REASONING IN RATS

Dickinson and colleagues have conducted an elegant series of experiments, reported in a number of influential papers, providing evidence for the view that rats, at least, are genuine means/ends reasoners (Dickinson and Balleine, 1994; Dickinson and Shanks, 1995; Dickinson and Balleine, 2000). They argue that rats engage in goal-directed behavior, guided by beliefs about causal contingencies, and by goals that are only linked to basic forms of motivation via learning. (We, too, have to monitor our own reactions to learn what we want; Damasio, 1994.) These arguments are convincing. But at the same time Dickinson advances a particular conception of what it takes to be a truly minded creature. The latter will need to be resisted if the position that insects have minds is to be defensible.

Here are some of the data (Dickinson and Balleine, 2000). Rats do not press a lever for food any more when hungry than when nearly satiated, unless they have had experience of eating that food when hungry. While the reward-value of the food is something that they know, the increased value that attaches to food when hungry is something that they have to learn. Similarly, rats caused to have an aversion to one food rather than another (via the injection of an emetic shortly after eating the one food but not the other) will thereafter stop performing a trained action causally paired with just that food, in the absence of feedback resulting from their actions (i.e., without receiving any rewards). However, they will do so only if they have been re-presented with the relevant food in the interval. They have to learn that they now have an aversion to the food in question. But once learned, they make appropriate decisions about which of two previously learned actions to perform—they know which action will make them sick, and choose accordingly.

Dickinson thinks that the data warrant ascribing to rats a two-tier motivational system much like our own. There is a basic level of biological drives, which fixes the reward value of experiences received. And there is an intentional level of represented goals (e.g., to eat this stuff rather than that stuff), which has to be linked up to the basic level via learning to achieve its motivating status. Others have made similar proposals to explain aspects of the human motivational system (Damasio, 1994; Rolls, 1999).

In other experiments Dickinson and colleagues have shown that rats are sensitive to the degree of causal efficacy of their actions (Dickinson and Charnock, 1985; Dickinson and Shanks, 1995). In particular, the rat’s rate of action drops as the causal connection between act and outcome falls. It even turns out that rats display exactly the same illusions of causality as do humans. The set-up in these experiments is that the probability of an event occurring (e.g., a figure appearing on a TV monitor, for the humans) or a reward being delivered (for the rats) is actually made independent of the action to be performed (pressing the space-bar, pressing a lever), while sometimes occurring in a way that happens to be temporally paired with that action. If (but only if) the unpaired outcomes are signaled in some way (by a coincident sound, say), then both rats and humans continue to believe (and to behave) as if the connection between act and outcome were a causal one.

The conclusion drawn from these and similar studies, then, is that rats are genuine means/ends reasoners. They possess learned representations of the goals they are trying to achieve (e.g., to receive a particular type of food). And they have acquired representations of the relative causal efficacy of the actions open to them. Taken together, these representations will lead them (normally, when not fooled by cunning experimenters) to act appropriately. However, Dickinson and colleagues claim that only creatures with these abilities can count as being genuinely minded, or as possessing a belief/desire cognitive architecture of the sort depicted in Figure 1 (Heyes and Dickinson, 1990; Dickinson and Balleine, 2000). Their reasoning is that otherwise the animal’s behavior will be explicable in terms of mere innate motor programs or learned habits created through some form of associative conditioning. These further claims are unwarranted, however, as we shall see.

3. NON-ASSOCIATIVE LEARNING AND NON-CAUSAL INSTRUMENTAL REASONING

Dickinson assumes that if behavior isn’t caused by means/ends reasoning in the above sense, then it must either be innate or the product of associative conditioning. But this assumption is false. The animal world is rife with non-associative forms of learning (Gallistel, 1990). Many animals (including the Tunisian desert ant) can navigate by dead reckoning, for example. This requires the animal to compute the value of a variable each time it turns—integrating the direction in which it has just been traveling (as calculated from the polarization of the sun’s light in the sky; Wehner, 1994) with an estimate of the distance traveled in that direction, to produce a representation of current position in relation to a point of origin (home base, say). This plainly isn’t conditioned behavior of any sort; and nor can it be explained in terms of associative mechanisms, unless those mechanisms are organized into an architecture that is then tantamount to algorithmic symbol processing (Marcus, 2001).
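The computation just described is simple enough to sketch. The following is a minimal illustration of path integration, not a model of the ant itself; the function names and the toy outbound path are illustrative assumptions. Each leg’s heading and distance are summed into a running position vector, from which a direct bearing home can be read off.

```python
import math

def dead_reckon(legs):
    # Integrate a sequence of (heading, distance) legs into a position
    # estimate relative to the point of origin. Headings are in radians
    # (for the ant, derived from the polarization pattern of skylight).
    x = y = 0.0
    for heading, distance in legs:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

def home_vector(legs):
    # The bearing and distance that would take the animal straight home
    # from its current estimated position.
    x, y = dead_reckon(legs)
    return math.atan2(-y, -x), math.hypot(x, y)

# A toy outbound trip of three legs: east 10 units, north 10, west 10.
trip = [(0.0, 10.0), (math.pi / 2, 10.0), (math.pi, 10.0)]
heading, dist = home_vector(trip)
# The animal ends 10 units north of home, so the home vector points
# due south (heading -pi/2) with length 10.
```

The point of the sketch is that the animal must maintain and update the value of a variable (its position estimate) on every leg, which is exactly what a bare stimulus-response association cannot do.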

Similarly, many kinds of animal will construct mental maps of their environment which they use when navigating; and they update the properties of the map through observation without conditioning (Gould and Gould, 1994).3 Many animals can adopt the shortest route to a target (e.g., a source of food), guided by landmarks and covering ground never before traveled. This warrants ascribing to the animals a mental map of their environment. But they will also update the properties of the map on an on-going basis.

Food-caching birds, for example, can recall the positions of many hundreds or thousands of hidden seeds after some months; but they generally won’t return to a cache location that they have previously emptied. Similarly, rats can be allowed to explore a maze on one day, finding a food reward in both a small dark room and a large white one. Next day they are given food in a (distinct) large white room, and shocked in a small dark one. When replaced back in the maze a day later they go straight to the white room and avoid the dark one (Gould and Gould, 1994). Having learned that dark rooms might deliver a shock, they have updated their representation of the properties of the maze accordingly.4

Many animals make swift calculations of relative reward abundance, too, in a way that isn’t explicable via conditioning. For example, a flock of ducks will distribute themselves 1:1 or 1:2 in front of two feeders throwing food at rates of 1:1 or 1:2 within one minute of the onset of feeding, during which time many ducks get no food at all, and very few of them experience rewards from both sources (Harper, 1982). Similarly, both pigeons and rats on a variable reward schedule from two different alcoves will match their behavior to the changing rates of reward. They respond very rapidly, closely tracking random variations in the immediately preceding rates (Dreyfus, 1991; Mark and Gallistel, 1994). They certainly are not averaging over previous reinforcements, as associationist models would predict.

Gallistel and colleagues have argued, indeed, that even classical conditioning—the very heartland of associationist general-purpose learning models—is better explained in terms of the computational operations of a specialized rate-estimating foraging system (Gallistel, 1990, 2000; Gallistel and Gibbon, 2001). One simple point they make is that animals on a delayed reinforcement schedule, in which the rewards only become available once the conditioned stimulus (e.g., an illuminated panel) has been present for a certain amount of time, will only respond on each occasion after a fixed proportion of the interval has elapsed. This is hard to explain if the animals are merely building an association between the illuminated panel and the reward. It seems to require, in fact, that they should construct a representation of the reinforcement intervals, and act accordingly.

Moreover, there are many well-established facts about conditioning behaviors that are hard to explain on associationist models, but that are readily explicable within a computational framework. For example, delay of reinforcement has no effect on rate of acquisition so long as the intervals between trials are increased by the same proportions. And the number of reinforcements required for acquisition of a new behavior isn’t affected by interspersing a significant number of unreinforced trials. This is hard to explain if the animals are supposed to be building associations, since the unreinforced trials should surely weaken those associations. But it can be predicted if what the animals are doing is estimating relative rates of return. For the rate of reinforcement per stimulus presentation relative to the rate of reinforcement in background conditions remains the same, whether or not significant numbers of stimulus presentations remain unreinforced.
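The arithmetic behind this last point can be illustrated with a toy calculation (the session parameters below are invented for illustration, not taken from any actual experiment). Interspersing unreinforced trials scales the stimulus rate and the background rate down together, leaving their ratio, the quantity the rate-estimation account takes the animal to track, unchanged.

```python
def rates(reinforced_trials, unreinforced_trials, trial_time, intertrial_time):
    # Stimulus rate: reinforcements per unit of stimulus-present time.
    # Background rate: the same reinforcements per unit of session time.
    # Every trial lasts trial_time and is followed by intertrial_time of
    # background; only reinforced trials deliver a reward.
    n = reinforced_trials
    total_trials = reinforced_trials + unreinforced_trials
    stimulus_rate = n / (total_trials * trial_time)
    background_rate = n / (total_trials * (trial_time + intertrial_time))
    return stimulus_rate, background_rate

# Continuous reinforcement vs. 50% partial reinforcement,
# with 10 s trials and 50 s intertrial intervals.
s1, b1 = rates(20, 0, 10.0, 50.0)
s2, b2 = rates(20, 20, 10.0, 50.0)
# Both rates halve under partial reinforcement, so s1/b1 == s2/b2:
# the relative rate of return is identical in the two conditions.
```

On the rate-estimation account this is why acquisition requires the same number of reinforcements in both conditions, whereas an association-strengthening account predicts that the unreinforced trials should slow it down.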

There are many forms of learning in animals that are not simply learned associations, then. And consequently there are many animal behaviors that are not mere habits, but that do not involve representations of causality, either. A bird who navigates to a previously established food cache, using landmarks and a mental map on which the location of the cache is represented, certainly isn’t acting out of habit. But then nor does the action involve any explicit representation of the causality of the bird’s own behavior. The bird doesn’t have to think, ‘Flying in that direction will cause me to be in that place.’ It just has to integrate its perception of the relevant landmarks with the representations on its mental map, and then key into action the flying-in-that-direction action schema. The causation can be (and surely is) left implicit in the bird’s action schemata and behaviors, not explicitly represented in the bird’s reasoning.5

But for all that, why should such animals not count as exemplifying the belief/desire architecture depicted in Figure 1? If the animal can put together a variety of goals with the representations on a mental map, say, and act accordingly, then why should we not say that the animal behaves as it does because it wants something and believes that the desired thing can be found at a certain represented location on the map? There seems to be no good reason why we should not. (And nor is there any good reason to insist that an animal only has genuine desires if its goals are acquired through learning.) Dickinson is surely misled in thinking that means/ends reasoning has to involve representations of causal, rather than merely spatial, ‘means.’6

The difference between rats and many other animals is just that rats (like humans) are generalist foragers and problem solvers. For this reason they have to learn the value of different foods, and they are especially good at learning what acts will cause rewards to be delivered. But unlearned values can still steer intentional behavior, guided by maps and other learned representations of location and direction. And so there can, surely, be minds that are simpler than the mind of a rat.

4. INSECTS (1): INFLEXIBLE FIXED ACTION PATTERNS

How simple minded can you be? Do insects, in particular, have beliefs and desires? Few seem inclined to answer, ‘Yes.’ In part this derives from a tradition of thought (traceable back at least to Descartes) of doubting whether even higher mammals such as apes and monkeys have minds. And in part it derives from the manifest rigidity of much insect behavior. The tradition will here be ignored. (Some aspects of it have been discussed briefly in section 1 above.) But the rigidity requires some comment.

We are all familiar with examples of the behavioral rigidity of insects. Consider the tick, which sits immobile on its perch until it detects butyric acid vapor, whereupon it releases its hold (often enough falling onto the bodies of mammals passing below, whose skins emit such a vapor); and then, when it detects warmth, it burrows. Or there are the caterpillars who follow the light to climb trees to find food. The mechanism that enables them to do this is an extremely simple one: when more light enters one eye than the other, the legs on that side of the body move slower, causing the animal to turn towards the source of the light. When artificial lighting is provided at the bottom of the trees, the caterpillars climb downwards and subsequently starve to death. And when blinded in one eye, these animals will move constantly in circles. How dumb can you be! Right?

Even apparently sophisticated and intelligent sequences of behavior can turn out, on closer investigation, to be surprisingly rigid. There is the well-known example of the Sphex wasp, that leaves a paralyzed cricket in a burrow with its eggs, so that its offspring will have something to feed on when they hatch. When it captures a cricket, it drags it to the entrance of the burrow, then leaves it outside for a moment while it enters, seemingly to check for intruders. However, if an interfering experimenter moves the cricket back a few inches while the wasp is inside, she repeats the sequence: dragging the insect to the burrow’s entrance, then entering briefly once more alone. And this sequence can be made to ‘loop’ indefinitely many times over.

Or consider the Australian digger wasp, which builds an elaborate tower-and-bell structure over the entrance of the burrow in which she lays her eggs (Gould and Gould, 1994). (The purpose of the structure is to prevent a smaller species of parasitic wasp from laying her eggs in the same burrow. The bell is of such a size, and hung at such an angle, and worked so smooth on the inside, that the smaller wasp cannot either reach far enough in, or gain enough purchase, to enter.) She builds the tower three of her own body-lengths high. If the tower is progressively buried while she builds, she will keep on building. But once she has finished the tower and started on the bell, the tower can be buried without her noticing—with disastrous results, since the bell will then be half on the ground, and consequently quite useless. Similarly, if a small hole is drilled in the neck of the tower, she seems to lack the resources to cope with a minor repair. Instead she builds another tower and bell structure, constructed on top of the hole.

In order to explain such behaviors, we do not need to advance beyond the architecture of Figure 2. The digger wasp would seem to have an innately represented series of nested behavioral sub-routines, with the whole sequence being triggered by its own bodily state (pregnancy). Each sub-routine is guided by perceptual input, and is finished by a simple stopping-rule. But once any given stage is completed, there is no going back to make corrections or repairs. The wasp appears to have no conception of the overall goal of the sequence, nor any beliefs about the respective contributions made by the different elements. If this were the full extent of the flexibility of insect behaviors, then there would be no warrant for believing that insects have minds at all.

It turns out that even flexibility of behavioral strategy isn’t really sufficient for a creature to count as having a mind, indeed. For innate behavioral programs can have a conditional format. It used to be thought, for example, that all male crickets sing to attract mates. But this isn’t so; and for good reason. For singing exposes crickets to predation, and also makes them targets for parasitic flies who drop their eggs on them. Many male crickets adopt the alternative strategy of waiting silently as a satellite of a singing male, and intercepting and attempting to mate with any females attracted by the song. But the two different strategies are not fixed. A previously silent male may begin to sing if one or more of the singing males is removed (Gould and Gould, 1994).

Admittedly, such examples suggest that something like a decision process must be built into the structure of the behavioral program. There must be some mechanism that takes information about, for example, the cricket’s own size and condition, the ratio of singing to non-singing males in the vicinity, and the loudness and vigor of their songs, and then triggers into action one behavioral strategy or the other. But computational complexity of this sort, in the mechanism that triggers an innate behavior, isn’t the same as saying that the insect acts from its beliefs and desires. And the latter is what mindedness requires, we are assuming.

5. INSECTS (2): A SIMPLE BELIEF/DESIRE PSYCHOLOGY

From the fact that many insect behaviors result from the triggering of innately represented sequences of perceptually guided activity, however, it doesn’t follow that all do. And it is surely no requirement on mindedness that every behavior of a minded creature should result from interactions of belief states with desire states. Indeed, some of our own behaviors are triggered fixed action sequences—think of sneezing or coughing, for example, or of the universal human disgust reaction, which involves fixed movements of the mouth and tongue seemingly designed to expel noxious substances from the oral cavity. So it remains a possibility that insects might have simple minds as well as a set of triggerable innately represented action sequences.

In effect, it remains a possibility that insects might exemplify the cognitive architecture depicted in Figure 1, only with an arrow added between ‘body states’ and ‘action schemata’ to subserve a dual causal route to behavior (see Figure 3).7 In what follows it will be argued that this is indeed the case, focusing on the minds of honey bees in particular. While this paper won’t make any attempt to demonstrate this, it seems likely that the conclusion we reach in the case of honey bees will generalize to all navigating insects. In which case belief/desire cognitive architectures are of very ancient ancestry indeed.

Like many other insects, bees use a variety of navigation systems. One is dead reckoning (integrating a sequence of directions of motion with the velocity traveled in each direction, to produce a representation of one's current location in relation to the point of origin). This in turn requires that bees can learn the expected position of the sun in the sky at any given time of day, as measured by an internal clock of some sort. Another mechanism permits bees to recognize and navigate from landmarks, either distant or local (Collett and Collett, 2002). And some researchers have claimed that bees will, in addition, construct crude mental maps of their environment from which they can navigate. (The maps have to be crude because of the poor resolution of bee eyesight. But they may still contain the relative locations of salient landmarks, such as a large free-standing tree, a forest edge, or a lake shore.)
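Dead reckoning of the kind just described can be sketched in a few lines. The sketch below is purely illustrative: the numeric leg format (heading in degrees, speed, duration) and the function names are assumptions made for exposition, whereas real bees measure direction against the sun's azimuth and distance by visual flow.

```python
import math

def integrate_path(legs):
    """Return (east, north) displacement from the starting point.

    Each leg is an assumed (heading_deg, speed, duration) triple,
    with heading measured clockwise from north.
    """
    east = north = 0.0
    for heading_deg, speed, duration in legs:
        dist = speed * duration
        east += dist * math.sin(math.radians(heading_deg))
        north += dist * math.cos(math.radians(heading_deg))
    return east, north

def home_vector(legs):
    """Compass bearing (degrees from north) and distance back to origin."""
    east, north = integrate_path(legs)
    bearing = math.degrees(math.atan2(-east, -north)) % 360
    return bearing, math.hypot(east, north)
```

After legs of 100 m due north and then 100 m due east, `home_vector` reports a bearing of 225° (southwest) at roughly 141 m: the animal can head straight home without retracing its path.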

Gould (1986) reports, for example, that when trained to a particular food source, and then carried from the hive in a dark box to a new release point, the bees will fly directly to the food, but only if there is a significant landmark in their vicinity when they are released. (Otherwise they fly off on the compass bearing that would previously have led from the hive to the food.) While other scientists have been unable to replicate these experiments directly, Menzel et al. (2000) found that bees that had never foraged more than a few meters from the nest, but who were released at random points much further from it, were able to return home swiftly. They argue that this either indicates the existence of a map-like structure, built during the bees' initial orientation flights before they had begun foraging, or else the learned association of vectors-to-home with local landmarks. But

[Figure 3: A Mind with Dual Routes to Action. Boxes: percept, belief, desire, practical reason, motor, action schemata, body states.]


either way, they claim, the spatial representations in question are allocentric rather than egocentric in character.

As is well known, honey bees dance to communicate information of various sorts to other bees. The main elements of the code have now been uncovered through patient investigation (Gould and Gould, 1988). They generally dance in a figure-of-eight pattern on a vertical surface in the dark inside the hive. The angle of movement through the center of the figure of eight, as measured from the vertical, corresponds to the angle from the expected direction of the sun for the time of day. (E.g., a dance angled at 30° to the right of vertical at midday would represent 30° west of south, in the northern hemisphere.) And the number of 'waggles' made through the center of the figure of eight provides a measure of distance. (Different bee species use different innately fixed measures of waggles-to-distance.)
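The dance code just described can be rendered as a toy decoder. The function name and the waggle-to-distance calibration constant are invented for illustration (the text notes that the real calibration is innately fixed and differs between species):

```python
def decode_dance(angle_from_vertical_deg, waggle_count,
                 sun_azimuth_deg, meters_per_waggle=20.0):
    """Return (compass bearing, distance) of the advertised target.

    The dance angle, measured clockwise from vertical on the comb,
    stands in for the flight angle clockwise from the sun's current
    compass direction (azimuth). Waggle count encodes distance via a
    species-specific calibration (assumed here to be 20 m per waggle).
    """
    bearing = (sun_azimuth_deg + angle_from_vertical_deg) % 360
    distance = waggle_count * meters_per_waggle
    return bearing, distance
```

With the sun due south (azimuth 180°) at midday, a dance angled 30° right of vertical decodes to a bearing of 210°, i.e. 30° west of south, matching the example in the text.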

Honey bees have a number of innately structured learning mechanisms, in fact. They have one such mechanism for learning the position of the sun in the sky for the time of day. (This mechanism—like the human language faculty—appears to have an innate 'universal grammar.' All bees in the northern hemisphere are born knowing that the sun is in the east in the morning, and in the west in the afternoon; Dyer and Dickinson, 1994.) And they have another such mechanism for learning, by dead reckoning, where things are in relation to the hive. (Here it is 'visual flow' that seems to be used as the measure of distance traveled; Srinivasan et al., 2000.) They may have yet another mechanism for constructing a mental map from a combination of landmark and directional information. (At the very least they have the capacity for learning to associate landmarks with vectors pointing to the hive; Menzel et al., 2000.) And they have yet another mechanism again for decoding the dances of other bees, extracting a representation of the distance and direction of a target (generally nectar, but also pollen, water, tree sap, or the location of a potential nest site; Seeley, 1995).

Although basic bee motivations are, no doubt, innately fixed, the goals they adopt on particular occasions (e.g., whether or not to move from one foraging patch to another, whether to finish foraging and return to the hive, and whether or not to dance on reaching it) would appear to be influenced by a number of factors (Seeley, 1995). Bees are less likely to dance for dilute sources of food, for example; they are less likely to dance for the more distant of two sites of fixed value; and they are less likely to dance in the evening or when there is an approaching storm, when there is a significant chance that other bees might not be capable of completing a return trip. And careful experimentation has shown that bees scouting for a new nest site will weigh up a number of factors, including cavity volume, shape, size and direction of entrance, height above ground, dampness, draftiness, and distance away. Moreover, dancing scouts will sometimes take time out to observe the dances of others and check out their discoveries, making a comparative assessment and then dancing accordingly (Gould and Gould, 1988).

Bees do not just accept and act on any information they are offered, either. On the contrary, they evaluate it along a number of dimensions. They check the nature and quality of the goal being offered (normally by sampling it, in the case of food). And they factor in the distance to the indicated site before deciding whether or not to fly out to it. Most strikingly, indeed, it has been claimed that bees will also integrate communicated information with the representations on their mental map, rejecting even rich sources of food that are being indicated to exist in the middle of a lake, for example.8

How should these bee capacities be explained? Plainly the processes in question cannot be associative ones, and these forms of bee learning are not conditioned responses to stimuli. Might the bee behaviors be explained through the existence of some sort of 'subsumption architecture' (Brooks, 1986)? That is, instead of having a central belief/desire system of the sort depicted in figure 3, might bees have a suite of input-to-output modular systems, one for each different type of behavior? This suggestion is wildly implausible. For (depending on how one counts behaviors) there would have to be at least five of these input-to-output modules (perhaps dozens, if each different 'goal' amounts to a different behavior), each of which would have to duplicate many of the costly computational processes undertaken by the others. There would have to be a scouting-from-the-hive module, a returning-to-the-hive module, a deciding-to-dance-and-dancing module, a returning-to-food-source module, and a perception-of-dance-and-flying-to-food-source module. Within each of these systems essentially the same computations of direction and distance information would have to be undertaken.

The only remotely plausible interpretation of the data is that honey bees have a suite of information-generating systems that construct representations of the relative directions and distances between a variety of substances and properties and the hive,9 as well as a number of goal-generating systems taking as inputs body states and a variety of kinds of contextual information, and generating a current goal as output. Any one of these goal states can then in principle interact with any one of the information states to create a potentially unique behavior, never before seen in the life of that particular bee. It appears, indeed, that bees exemplify the architecture depicted in figure 3. In which case, there can be minds that are capable of just a few dozen types of desire, and that are capable of just a few thousand types of belief.10 How simple minded can you be? Pretty simple.

6. STRUCTURE-DEPENDENT INFERENCE

Recall, however, that the conditions on genuine mindedness that we laid down in section 1 included not just a distinction between information states and goal states, but also that these states should interact with one another to determine behavior in ways that are sensitive to their compositional structures. Now, on the face of it this condition is satisfied. For if one and the same item of directional information can be drawn on both to guide a bee in search of nectar and to guide the same bee returning to the hive, then it would seem that the bee must be capable of something resembling the following pair of practical inferences (using BEL to represent belief, DES to represent desire, MOVE to represent action—normally flight, but also walking for short distances—and square brackets to represent contents).

(1) BEL [nectar is 200 meters north of hive]
    BEL [here is at hive]
    DES [nectar]
    MOVE [200 meters north]

(2) BEL [nectar is 200 meters north of hive]
    BEL [here is at nectar]
    DES [hive]
    MOVE [200 meters south]

These are inferences in which the conclusions depend upon structural relations amongst the premises.11
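The structure-dependence at issue can be made vivid with a toy rendering of inferences (1) and (2): one and the same structured belief interacts with different desire and location states to yield different movements. The record format, dictionary keys, and function name below are illustrative assumptions, not a claim about bee implementation:

```python
OPPOSITE = {'north': 'south', 'south': 'north',
            'east': 'west', 'west': 'east'}

def plan_move(beliefs, desire):
    """Apply the practical inference from the text: if the desired item
    lies at a known offset from 'here', move along that vector; if 'here'
    is at the item and the origin is desired, move the reverse vector."""
    target, dist, direction, origin = beliefs['offset']
    here = beliefs['here']
    if desire == target and here == origin:
        return (dist, direction)            # inference (1)
    if desire == origin and here == target:
        return (dist, OPPOSITE[direction])  # inference (2)
    return None
```

The same belief record `('nectar', 200, 'north', 'hive')` drives flight north when nectar is desired from the hive, and flight south when the hive is desired from the nectar site, just as in (1) and (2).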


It might be suggested that we have moved too swiftly, however. For perhaps there needn't be a representation of the goal substance built explicitly into the structure of the directional information-state. To see why this might be so, notice that bees do not represent what it is that lies in the direction indicated as part of the content of their dance; and nor do observers acquire that information from the dance itself. Rather, dancing bees display the value on offer by carrying it; and observing bees know what is on offer by sampling some of what the dancing bee is carrying.

It might be claimed, then, that what really happens is this. An observing bee samples some of the dancing bee's load, and discovers that it is nectar, say. This keys the observer into its fly-in-the-direction-indicated sub-routine. The bee computes the necessary information from the details of the dance, and flies off towards the indicated spot. If it is lucky, it then discovers nectar-bearing flowers when it gets there and begins to forage. But at no point do the contents of goal-states and the contents of the information-states need to interact with one another.

This idea won't wash, however. Although the presence of nectar isn't explicitly represented in the content of the dance, it does need to be represented in the content of both the dancer's and the observer's belief-states. For recall that bees do not dance even for a rich source of nectar that is too far away (Gould and Gould, 1988). The distance-information therefore needs to be integrated with the substance-information in determining the decision to dance. Equally, observers ignore dances indicating even rich sources of nectar if the indicated distances are too great. So again, the distance information derived from the dance needs to be integrated with the value information before a decision can be reached. So we can conclude that not only do bees have distinct information states and goal states, but that such states interact with one another in ways that are sensitive to their contents in determining behavior. In which case bees really do exemplify the belief/desire architecture depicted in figure 3, construed realistically.

One final worry remains, however. Do the belief and desire states in question satisfy what Evans (1982) calls 'the Generality Constraint'? This is a very plausible constraint on genuine (i.e., compositionally structured) concept possession. It tells us that any concept possessed by a thinker must be capable of combining appropriately with any other concept possessed by the same thinker. If you can think that a is F and you can think that b is G, then you must also be capable of thinking that a is G and that b is F. For these latter thoughts are built out of the very same components as the former ones, only combined together with one another differently.

Now, bees can represent the spatial relationships between nectar and hive, and between pollen and hive; but are they capable of representing the spatial relationships between nectar and pollen? Are bees capable of thoughts of the following form?

BEL [nectar is 200 meters north of pollen]

If not, it may be said, then bees cannot be counted as genuine concept-users; and so they cannot count as genuine believers/desirers, either.

It is possible that this particular example isn't a problem. Foragers returning to a nectar site to find it almost depleted might fly directly to any previously discovered foraging site that is nearby; including one containing pollen rather than nectar, if pollen is in sufficient demand back at the hive. But there will, almost certainly, be other relationships that are never explicitly represented. It is doubtful, for example, that any scout will ever explicitly represent the relations between one potential nest site and another (as opposed to some such belief being implicit in the information contained in a mental map). So it is doubtful whether any bee will ever form an explicit thought of the form:

BEL [cavity A is 200 meters north of cavity B]

Rather, the bees will form beliefs about the relations between each site and the colony.

Such examples are not really a problem for the Generality Constraint, however. From the fact that bees never form beliefs of a certain kind, it doesn't follow that they cannot. (Or at least, this doesn't follow in such a way as to undermine the claim that their beliefs are compositionally structured.) Suppose, first, that bees do construct genuine mental maps of their environment. Then it might just be that bees are only ever interested in the relationships amongst potential new nest sites and the existing colony, and not between the nest sites themselves. But the same sort of thing is equally true of human beings. Just as there are some spatial relationships that might be implicit in a bee's mental map, but never explicitly believed; so there are some things implicit in our beliefs about the world, but never explicitly entertained, either, because they are of no interest. Our beliefs, for example, collectively entail that mountains are less easy to eat than rocks. (We could at least pound a rock up into powder, which we might have some chance of swallowing.) But until finding oneself in need of a philosophical example, this isn't something one would ever have bothered to think. Likewise with the bees. The difference is just that bees do not do philosophy.

Suppose, on the other hand, that bees do not construct mental maps; rather they learn a variety of kinds of vector information linking food sources (or nest sites) and the hive, and linking landmarks and the hive. Then the reason why no bee will ever come to believe that one nest-cavity stands in a certain relation to another will have nothing to do with the alleged absence of genuine compositional structure from the bees' belief states, but will rather result from the mechanisms that give rise to new bee beliefs, combined with the bees' limited inferential abilities.

For here once again, part of the explanation is just that bees are only interested in a small subset of the spatial relations available to them. They only ever compute and encode the spatial relationships between desired substances and the hive, or between landmarks and the hive, not amongst the locations of those substances or those landmarks themselves. And nor do they have the inferential abilities needed to work out the vector and distance relations amongst landmarks from their existing spatial beliefs. But these facts give us no reason to claim that bees do not really employ compositionally structured belief states, which they integrate with a variety of kinds of desire state in such a way as to select an appropriate behavior. And in particular, the fact that the bees lack the ability to draw inferences freely and promiscuously amongst their belief states should not exclude them from having any belief states at all. Which is to say: these facts about the bees' limitations give us no reason for denying that bees possess simple minds.

How simple minded can you be, then? Pretty simple; and a good deal more simple than most philosophers seem prepared to allow.12

University of Maryland


NOTES

1. One notable philosophical exception is Tye (1997). While the present paper shares some of Tye's conclusions (specifically, that honey bees have beliefs and desires) it offers different arguments. And the main focus of Tye's paper is on the question whether insects have phenomenally conscious experiences. This is quite another question from ours. Whether bees are belief/desire reasoners is one thing; whether they are phenomenally conscious is quite another. For a negative verdict on the latter issue, see Carruthers, 2000. It should also be stressed that the main question before us in this paper is quite different from the ones that formed the focus of Bennett's famous discussion of honey bee behavior (Bennett, 1964). Bennett argued that bee signaling systems are not a genuine language, and that honey bees are not genuinely rational in the fully-fledged sense of 'rationality' that is distinctive of human beings. (His purpose was to build towards an analysis of the latter notion.) Our concern, rather, is just with the question whether bees have beliefs and desires. (On this question, Bennett expressed no clear opinion.)

2. Do informational states that do not interact with beliefs and desires deserve to be counted as genuine perceptions? Common sense suggests that they do. Ask someone whether they think that the fish sees the net sweeping towards it through the water and they will answer, 'Of course, because it swims deftly in such a way as to avoid it.' But ask whether the fish has thoughts about the net or its own impending capture, and many will express skepticism. Moreover, the 'dual systems' theory of vision (Milner and Goodale, 1995; Clark, 2002) suggests that the perceptual states that guide our own movements on-line are not available for belief or desire formation, either. Yet we wouldn't deny, even so, that our detailed movements occur as they do because we see the shapes and orientations of the objects surrounding us.

3. Other animals navigate by the stars, or by using the Earth's magnetic field. Night-migrating birds study the sky at night when they are chicks in the nest, thereby extracting a representation of the center of rotation of the stars in the night sky. When they later leave the nest, they use this information to guide them when flying south (in fall in the northern hemisphere) and again when flying north (in spring in the northern hemisphere). The representations in question are learned, not innate, as can be demonstrated by rearing chicks in a planetarium where they observe an artificially generated center of night-sky rotation.

4. Similarly, western scrub jays will update the representations on their map of cached foods, and behave accordingly, once they learn independently of the different decay rates of different types of food. Thereafter they access the faster-decaying caches first. See Clayton et al., 2003.

5. The same is surely true of humans. When one wants a beer and recalls that there is a beer in the fridge, and then sets out for the kitchen, one doesn't explicitly represent one's walking as the cause of one getting the beer. Rather, once one knows the location, one just starts to walk.

6. Bermúdez (2003), too, claims that there can be no genuine decision-making (and so no real belief/desire psychology) in the absence of representations of instrumental causality. And this seems to be because he, too, assumes that there are no forms of learning between classical kinds of conditioning and genuine causal belief. But given the reality of a variety of forms of spatial learning (reviewed briefly above), it seems unmotivated to insist that sophisticated navigation behaviors are not really guided by decision-making, merely on the grounds that there are no causal beliefs involved.

7. Note that if the 'dual visual systems' hypothesis of Milner and Goodale (1995) generalizes to other perceptual modalities and to other species, then there should really be two distinct 'percept boxes' in figure 3, one feeding into conceptual thought and decision making, and another feeding into the action schemata so as to provide fine-grained on-line guidance of movement.


8. In these experiments two groups of bees were trained to fly to weak sugar solutions equidistant from the hive, one on a boat in the middle of a lake, and one on the lake shore. When both sugar solutions were increased dramatically, both sets of bees danced on returning to the hive. None of the receiving bees flew out across the lake. But this wasn't just a reluctance to fly over water. In experiments where the boat was moved progressively closer and closer to the far lake shore, more and more receiving bees were prepared to fly to it. See Gould and Gould, 1988.

9. Note that this satisfies the two criteria laid down by Bennett (1964, §4) for a languageless creature to possess beliefs. One is that the creature should be capable of learning. And the other is that the belief-states should be sensitive to a variety of different kinds of evidence.

10. The bee's capacity for representing spatial relations is by no means unlimited. There is probably an upper limit on distances that can be represented. And discriminations of direction are relatively crude (at least, by comparison with the almost pin-point accuracy of the Tunisian desert ant; see Wehner and Srinivasan, 1981). However, bees are also capable of forming a limited range of other sorts of belief, too. They come to believe that certain odors and colors signal nectar or pollen, for example (Gould and Gould, 1988).

11. Is there some way of specifying in general terms the practical inference rule that is at work here? Indeed there is. The rule might be something like the following: BEL [here is at x, F is m meters at n° from x], DES [F] → MOVE [m meters at n°]. This would require the insertion of an extra premise into argument (2) above, transforming the first premise into the form, BEL [hive is 200 meters south of nectar].

12. The author is grateful to his students from a recent graduate seminar on the architecture of the mind for their skeptical discussion of this material, and to Ken Cheng, Anthony Dickinson, Keith Frankish, Robert Kirk, and Mike Tetzlaff for comments on an earlier draft.

REFERENCES

Bennett, J. 1964. Rationality: An Essay Towards an Analysis. London: Routledge.
Bermúdez, J. 2003. Thinking without Words. Oxford: Oxford University Press.
Brooks, R. 1986. "A Robust Layered Control System for a Mobile Robot." IEEE Journal of Robotics and Automation, vol. RA-2, pp. 14–23.
Carruthers, P. 2000. Phenomenal Consciousness: A Naturalistic Theory. Cambridge: Cambridge University Press.
Clark, A. 2002. "Visual Experience and Motor Action: Are the Bonds Too Tight?" Philosophical Review, vol. 110, pp. 495–520.
Clayton, N., N. Emory, and A. Dickinson. 2003. "The Rationality of Animal Memory: The Cognition of Caching." In Animal Rationality, ed. S. Hurley. Oxford: Oxford University Press.
Collett, T., and M. Collett. 2002. "Memory Use in Insect Visual Navigation." Nature Reviews: Neuroscience, vol. 3, pp. 542–552.
Damasio, A. 1994. Descartes' Error. Picador Press.
Davidson, D. 1975. "Thought and Talk." In Mind and Language, ed. S. Guttenplan. Oxford: Oxford University Press.
Dennett, D. 1987. The Intentional Stance. Cambridge, Mass.: MIT Press.
———. 1991. Consciousness Explained. Allen Lane.
Dickinson, A., and B. Balleine. 1994. "Motivational Control of Goal-Directed Action." Animal Learning and Behavior, vol. 22, pp. 1–18.
———. 2000. "Causal Cognition and Goal-Directed Action." In The Evolution of Cognition, ed. C. Heyes and L. Huber. Cambridge, Mass.: MIT Press.
Dickinson, A., and D. Charnock. 1985. "Contingency Effects with Maintained Instrumental Reinforcement." Quarterly Journal of Experimental Psychology, vol. 37B, pp. 397–416.
Dickinson, A., and D. Shanks. 1995. "Instrumental Action and Causal Representation." In Causal Cognition, ed. D. Sperber, D. Premack, and A. Premack. Oxford: Oxford University Press.
Dreyfus, L. 1991. "Local Shifts in Relative Reinforcement Rate and Time Allocation on Concurrent Schedules." Journal of Experimental Psychology, vol. 17, pp. 486–502.
Dyer, F., and J. Dickinson. 1994. "Development of Sun Compensation by Honeybees." Proceedings of the National Academy of Science, vol. 91, pp. 4471–4474.
Evans, G. 1982. The Varieties of Reference. Oxford: Oxford University Press.
Gallistel, R. 1990. The Organization of Learning. Cambridge, Mass.: MIT Press.
———. 2000. "The Replacement of General-Purpose Learning Models with Adaptively Specialized Learning Modules." In The New Cognitive Neurosciences (second edition), ed. M. Gazzaniga. Cambridge, Mass.: MIT Press.
Gallistel, R., and J. Gibson. 2001. "Time, Rate and Conditioning." Psychological Review, vol. 108, pp. 289–344.
Gould, J. 1986. "The Locale Map of Bees: Do Insects Have Cognitive Maps?" Science, vol. 232, pp. 861–863.
Gould, J., and C. Gould. 1988. The Honey Bee. Scientific American Library.
———. 1994. The Animal Mind. Scientific American Library.
Harper, D. 1982. "Competitive Foraging in Mallards." Animal Behavior, vol. 30, pp. 575–584.
Heyes, C., and A. Dickinson. 1990. "The Intentionality of Animal Action." Mind and Language, vol. 5, pp. 87–104.
Marcus, G. 2001. The Algebraic Mind. Cambridge, Mass.: MIT Press.
Mark, T., and R. Gallistel. 1994. "The Kinetics of Matching." Journal of Experimental Psychology, vol. 20, pp. 1–17.
McDowell, J. 1994. Mind and World. Cambridge, Mass.: MIT Press.
Menzel, R., R. Brandt, A. Gumbert, B. Komischke, and J. Kunze. 2000. "Two Spatial Memories for Honeybee Navigation." Proceedings of the Royal Society: London B, vol. 267, pp. 961–966.
Milner, D., and M. Goodale. 1995. The Visual Brain in Action. Oxford: Oxford University Press.
Rolls, E. 1999. Emotion and the Brain. Oxford: Oxford University Press.
Searle, J. 1992. The Rediscovery of the Mind. Cambridge, Mass.: MIT Press.
Seeley, T. 1995. The Wisdom of the Hive: The Social Physiology of Honey Bee Colonies. Cambridge, Mass.: Harvard University Press.
Tye, M. 1997. "The Problem of Simple Minds." Philosophical Studies, vol. 88, pp. 289–317.
Walker, S. 1983. Animal Thought. London: Routledge.
Wehner, R. 1994. "The Polarization-Vision Project." In The Neural Basis of Behavioral Adaptations, ed. K. Schildberger and N. Elsner. Gustav Fischer.
Wehner, R., and M. Srinivasan. 1981. "Searching Behavior of Desert Ants." Journal of Comparative Physiology, vol. 142, pp. 315–338.