
Superstition and Belief as Inevitable By-products of an Adaptive Learning Strategy

Jan Beck
Theodor-Boveri-Institute for Biosciences, Würzburg

Wolfgang Forstmeier
Max Planck Institute for Ornithology

The existence of superstition and religious beliefs in most, if not all, human societies is puzzling for behavioral ecology. These phenomena bring about various fitness costs ranging from burial objects to celibacy, and these costs are not outweighed by any obvious benefits. In an attempt to resolve this problem, we present a verbal model describing how humans and other organisms learn from the observation of coincidence (associative learning). As in statistical analysis, learning organisms need rules to distinguish between real patterns and randomness. These rules, which we argue are equivalent to setting the level of α for rejection of the null hypothesis in statistics, are governed by risk management as well as by comparison to previous experiences. Risk management means that the cost of a possible type I error (superstition) has to be traded off against the cost of a possible type II error (ignorance). This trade-off implies that the occurrence of superstitious beliefs is an inevitable consequence of an organism's ability to learn from observation of coincidence. Comparison with previous experiences (as in Bayesian statistics) improves the chances of making the right decision. While this Bayesian approach is found in most learning organisms, humans have evolved a unique ability to judge from experiences whether a candidate subject has the power to mechanistically cause the observed effect. Such "strong" causal thinking evolved because it allowed humans to understand and manipulate their environment. Strong causal thinking, however, involves the generation of hypotheses about underlying mechanisms (i.e., beliefs). Assuming that natural selection has favored individuals that learn quicker and more successfully than others owing to (1) active search to detect patterns and (2) the desire to explain these patterns mechanistically, we suggest that superstition has evolved as a by-product of the first, and that belief has evolved as a by-product of the second.

Received June 9, 2005; accepted September 2, 2005; final version received December 5, 2005

Address all correspondence to Wolfgang Forstmeier, Max Planck Institute for Ornithology, Postfach 1564, 82305 Starnberg (Seewiesen), Germany. E-mail: [email protected]

Human Nature, Spring 2007, Vol. 18, No. 1, pp. 35-46. 1045-6767/98/$6.00 = .15

36 Human Nature / Spring 2007

KEY WORDS: Behavioral ecology; Causal thinking; Evolutionary psychology; Human behavior; Learning

For a behavioral ecologist it is puzzling why superstition and religious beliefs are found in probably all human societies (e.g., Frazer 1922). It is not clear how the costs that often result from such beliefs (e.g., burial objects, celibacy; see Vyse 1997:205 for more examples) are outweighed by benefits. Although religious systems are restricted to humans, superstition is a phenomenon that presumably exists in all kinds of organisms capable of associative learning. Skinner's work (Morse and Skinner 1957; Skinner 1948) gives a prominent example of superstition in animals. Pigeons that received food at random time intervals started to perform stereotyped movements, apparently expecting that their behavior would have an influence on the food-releasing mechanism. Examples of such movements were turning counterclockwise about the cage; thrusting the head into one of the upper corners of the cage; a "tossing" movement, as if placing the head beneath an invisible bar and lifting it repeatedly; and so on. Although some details of Skinner's interpretation have been criticized (reviewed by Vyse 1997), his general results were confirmed by studies on pigeons (e.g., Killeen 1978) as well as on humans (Catania and Cutts 1963; Ninness and Ninness 1999; Ono 1987; Rudski et al. 1999 and references therein; Wagner and Morris 1987).

As the example of Skinner's pigeons indicates, we define superstition as a wrong idea about external reality (see Dawkins 1998; Higgins et al. 1989; Rudski et al. 1999; Wagner and Morris 1987). This rather broad definition is widely accepted in the psychological literature because it applies irrespective of whether superstitious beliefs are self-created by an individual (e.g., Skinner's pigeons), transmitted culturally (e.g., avoiding the number 13), or even genetically inherited (e.g., fear of non-poisonous snakes).

Superstition can originate where organisms learn from observation of coincidence (associative learning). For example, serious stomachache and sickness following the consumption of a certain kind of food usually leads us to avoid that kind of food in the future (this was also observed in animal experiments: Garcia and Koelling 1966; Matsuzawa et al. 1983). Organisms learning from observation of coincidence inevitably face a trade-off between a failure to detect a pattern that actually exists (colloquially: ignorance) and concluding that there is a pattern when actually there is just randomness (colloquially: superstition). As first pointed out by Dawkins (1998:160-179), this trade-off can be compared to the type II and type I errors in statistics: making a type II error means being ignorant, while making a type I error means being superstitious. The same problem is fundamental to signal detection theory (Green and Swets 1974). This theory is concerned with the problem of how we use the input from our sensory organs to discriminate between pattern and background noise.

In statistics, the acceptable probability of making a type I error (conventionally known as α) has to be set a priori in order to make a decision in a given situation (i.e., reject the null hypothesis if p ≤ α). Individuals setting high thresholds for acceptance (= low probability of type I error, e.g., α = 0.01) mainly suffer costs from being ignorant, while others with a low threshold (e.g., α = 0.1) more often encounter the disadvantages of superstition. Reducing α from, say, 0.1 to 0.01 in order to judge a given set of data increases the risk of a type II error (known as β) while reducing the risk of a type I error. It is impossible to diminish one risk without raising the other. The same constraint that applies to statistics is present in the everyday life of any organism capable of learning: one always has to choose between taking the risk of being ignorant or taking the risk of being superstitious. In our example of food and sickness, it would be ignorant not to realize the association between a poisonous food item and sickness after several trials. Superstition, on the other hand, would be the avoidance of a harmless food item just because of a casual coincidence of its consumption and a sickness stemming from completely different reasons.
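The α/β trade-off can be made concrete with a small numerical example of our own (not from the paper; all numbers are invented for illustration): testing whether a coin is biased toward heads. Tightening α raises the evidence bar, which inflates β, the probability of missing a real bias.

```python
# Numerical illustration of the alpha/beta trade-off (our own example):
# H0: the coin is fair (p = 0.5); true state: biased (p = 0.7); n = 20 tosses.
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_value(n, p0, alpha):
    """Smallest number of heads k with P(X >= k | H0) <= alpha."""
    return next(k for k in range(n + 1) if binom_tail(n, p0, k) <= alpha)

n, p0, p1 = 20, 0.5, 0.7
for alpha in (0.1, 0.01):
    k = critical_value(n, p0, alpha)
    beta = 1 - binom_tail(n, p1, k)  # miss probability under the true bias
    print(f"alpha <= {alpha}: reject H0 if heads >= {k}, beta = {beta:.2f}")
```

With α ≤ 0.1 the learner rejects the null at 14 or more heads; demanding α ≤ 0.01 pushes the criterion to 16 heads and makes a real bias easier to miss, exactly the superstition-versus-ignorance trade-off described above.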

The aim of the present paper is to develop a verbal model of associative learning that may explain why superstition and belief have evolved, and why they still persist. The model is also used to explain in which situations superstition is most likely to occur and how this may change over historical periods of time.

A GENERAL MODEL OF LEARNING

Here we present a graphical model (Figure 1) that is applicable to learning processes in a wide range of organisms, including humans. Although our explanations and examples are focused mainly on humans, there are only a few aspects where a clear distinction between human and animal behavior has to be made.

P-values (the probability with which coincidences come about by chance) are compared with threshold values (α) in order to decide whether observed patterns are meaningful or random. It is not of major concern here how these error probabilities are assessed (e.g., Cheng 1997; McGonigle and Chalmers 1998). The pivotal point of the model is the mechanism for setting the threshold (α). Two factors calibrate the value of α: (1) a trade-off between costs of possible type I and type II errors, and (2) previous experiences in comparable situations. This model is novel compared with earlier treatments of the subject, as it allows α (how quickly we believe something) to vary in a context-dependent manner.
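The three steps of the model can be sketched in code. This is a minimal sketch of our own, not the authors' formal model; the calibration formula and all numbers are invented for illustration only.

```python
# Minimal sketch of the three-step decision model (illustrative only;
# the calibration formula and constants are our own assumptions).

def calibrate_alpha(base_alpha, cost_ratio, worldview_fit):
    """Step 2: set the threshold alpha.
    cost_ratio: cost of ignorance / cost of superstition (>1 favors leniency);
    worldview_fit in [0, 1]: prior plausibility of the candidate pattern."""
    alpha = base_alpha * cost_ratio * (0.5 + worldview_fit)
    return max(0.001, min(0.5, alpha))  # keep alpha in a sane range

def classify(p_value, base_alpha, cost_ratio, worldview_fit):
    """Steps 1 and 3: given the strength of an observed coincidence
    (its p-value), compare it against the calibrated threshold."""
    alpha = calibrate_alpha(base_alpha, cost_ratio, worldview_fit)
    return "meaningful" if p_value <= alpha else "random"

# Severe sickness after a novel food: ignorance is costly and the causal
# story is plausible -> lenient threshold, pattern accepted.
print(classify(0.15, 0.05, cost_ratio=10, worldview_fit=0.8))  # meaningful

# Same coincidence strength, but low stakes and an implausible cause
# (an unlucky number) -> strict threshold, pattern dismissed.
print(classify(0.15, 0.05, cost_ratio=1, worldview_fit=0.0))   # random
```

The point of the sketch is only the structure: the same observed p-value can be classified either way depending on how the two calibrating factors set α.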

Risk Assessment

Figure 1. A general model of learning from the observation of coincidence. White arrows indicate the temporal sequence within the decision process (Steps 1-3) and black arrows show direct and indirect influences on α calibration. The strength of a coincidence is measured in terms of a p-value (i.e., the probability that a random process will produce patterns as strong as or stronger than the one observed), which is compared with the threshold α. Worldviews depend on individual experiences, and these include both weak and strong causal beliefs. Strong causal beliefs (largely restricted to humans) require an understanding of underlying mechanisms. The model allows, in combination with cultural transmission, for a dynamic, ever-improving worldview over the course of history.

[Figure 1, reconstructed in outline:]
Step 1: Observation of coincidence.
Step 2: Calibration of α, influenced by (a) the costs of type I vs. type II errors and (b) the fit with the worldview (weak and strong causal beliefs).
Step 3: Coincidence classified as random (if p > α) or meaningful (if p ≤ α).

High potential costs of ignorance (type II error) should lead to quick learning (high accepted p-values), while high costs of superstition should lead to a critical, conservative attitude. Note that, throughout this paper, we refer to statistical conservatism (Killeen 1978) and not to political conservatism (see Boshier 1973). In our example of learning that the consumption of a certain food item causes stomach pain, the risk of poisoning has to be traded against the risk of starvation. If alternative food sources are abundant (low risk of starvation), and stomach pain was very strong (high risk of physical harm) when the unknown food item was first eaten, we should not give it a second try. It seems to be the best solution to take the risk of superstition (i.e., believing that this kind of food causes pain when it really might not). If, on the other hand, alternative food is scarce (increased risk of starvation), and the stomach pain following its initial consumption was only mild (low risk of physical harm), we should try once again before accepting the hypothesis that this kind of food leads to stomach pain. How strong the correlation of such observations has to be before we accept its existence (setting a threshold for the accepted probability of error) apparently depends on the risks of error. Experimental learning trials with humans support this idea (Rudski et al. 1999, and studies discussed therein), even though experimental reinforcers (reward and punishment) in studies on humans can hardly be of the same magnitude as those in real life (Anderson 1990). Thus, more dramatic effects than those discussed by Rudski et al. (1999) might have been found if stakes were higher. For our model it matters little whether the reaction norms of setting α depending on the perceived risk relations are genetically inherited or shaped by tradition and individual experience. An example of inborn components of the setting of α in humans may be the fear of snakes. We are quicker to learn that snakes are dangerous than to accept that they are not dangerous (Mineka and Cook 1988; Wilson 1998:78).
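The risk-assessment step can be read as cost minimization: pick the α that minimizes the expected cost of errors. The following sketch is our own illustration (the cost figures, the candidate α values, and the toy detection-power curve are invented), not a formalization from the paper.

```python
# Risk management as cost minimization (our own illustrative sketch).

def optimal_alpha(cost_type1, cost_type2, power, candidates=(0.01, 0.05, 0.1, 0.2)):
    """Pick the alpha minimizing expected cost.
    cost_type1: cost of superstition (believing a spurious pattern);
    cost_type2: cost of ignorance (missing a real pattern);
    power: rough probability of detecting a real pattern at a given alpha."""
    def expected_cost(alpha):
        p_type1 = alpha               # chance of a false alarm
        p_type2 = 1 - power(alpha)    # chance of a miss
        return cost_type1 * p_type1 + cost_type2 * p_type2
    return min(candidates, key=expected_cost)

# Toy detection-power curve: more lenient alpha -> fewer misses.
power = lambda a: min(1.0, 5 * a)

# Food is abundant and the sickness was severe: superstition is cheap,
# ignorance (eating the poison again) is costly -> lenient alpha.
print(optimal_alpha(cost_type1=1, cost_type2=100, power=power))   # -> 0.2

# Food is scarce and the sickness was mild: needless avoidance is costly
# -> conservative alpha.
print(optimal_alpha(cost_type1=100, cost_type2=1, power=power))   # -> 0.01
```

Under these invented costs the learner flips from the most lenient to the most conservative candidate threshold, mirroring the food-scarcity example in the text.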

Fit with Worldview

The second factor influencing the calibration of α is a "prejudice" based on earlier experiences in comparable situations. A similar approach is realized in Bayesian statistics, which is concerned with how a person or animal should update an existing belief when presented with additional evidence (e.g., Anderson 1990; Malakoff 1999; Valone 2006). In everyday life, a minimum of information (e.g., having heard of something) may be used to judge a given situation (Boyer 1994). We call the entirety of such information the worldview of an organism. In many cases the most important factor for whether or not we are ready to believe that an observed pattern is real is how likely we consider the existence of such a pattern (see also Glymour and Cheng 1998). This assumption of likelihood is based on earlier experiences in similar situations. An animal that has previously learned that pressing a switch allows it to open a door will probably learn more quickly that another switch allows it to control the light than would a naive individual (see Catania and Cutts 1963 for a similar record in humans). This prior information can be individual experience, but in humans it is also highly dependent on cultural tradition (the experiences of others). The worldview of most animals is largely based on experiences that can be gathered during an individual life span. In contrast, the worldview of humans (as well as that of some other species with cultural traditions) may develop over much longer times. Cultural transmission of knowledge allows individual humans to gather much more information about the world than could be learned from their own experience.
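The Bayesian flavor of the worldview effect can be sketched as repeated belief updating. This is a toy illustration of our own (the likelihoods and conviction threshold are invented): a learner with a stronger prior, like the switch-experienced animal above, needs fewer confirming observations to become convinced.

```python
# Toy sequential Bayesian updating (our own illustration of the
# worldview-as-prior idea; all probabilities are invented).

def update(prior, p_obs_if_real, p_obs_if_chance):
    """One Bayes step: posterior probability that the pattern is real,
    after one confirming observation."""
    num = prior * p_obs_if_real
    return num / (num + (1 - prior) * p_obs_if_chance)

def trials_to_conviction(prior, threshold=0.95,
                         p_obs_if_real=0.9, p_obs_if_chance=0.3):
    """Number of confirming observations until belief exceeds threshold."""
    belief, n = prior, 0
    while belief < threshold:
        belief = update(belief, p_obs_if_real, p_obs_if_chance)
        n += 1
    return n

# Experienced animal: switches have done things before -> strong prior.
print(trials_to_conviction(prior=0.5))   # -> 3

# Naive animal: weak prior, the same evidence takes longer to convince.
print(trials_to_conviction(prior=0.05))  # -> 6
```

Each confirming observation multiplies the odds by the same likelihood ratio (here 0.9/0.3 = 3), so the prior determines only the starting point of the climb, which is exactly why a fitting worldview speeds up learning.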

Up to this point, the proposed "test for consistency with our worldview" does not require any capacity for "strong" causal thinking (or deductive reasoning). Following Wolpert (2003), we speak of weak causality if it is inferred from coincidence without an understanding of the underlying mechanism. Strong causality, in contrast, involves having a concept of the underlying mechanism. Animals (including humans) can make use of regularities without ever wondering why they occur. Humans, however, have evolved a unique ability to understand causal relationships. This understanding is also used in setting the threshold in our model. An observed regularity is more easily accepted as real if we can think of a plausible mechanism that could have caused it (Fugelsang and Dunbar 2004). As part of our worldview we collect experiences regarding the power of subjects to cause effects, and we then use this experience to judge the plausibility of a hypothetical causal mechanism. Since many underlying mechanisms cannot be observed directly, the power that we ascribe to subjects often remains hypothetical, and this is what we call a belief: we believe that A has the power to cause B. Such beliefs make up a substantial part of our worldview.

Recent experiments that use neuroimaging techniques provide fascinating insights that seem to support our model. Fugelsang and Dunbar (2004) show how our brain responds differentially to evidence that is either consistent or inconsistent with our beliefs. When humans are presented with evidence consistent with their beliefs, brain regions are activated that function in learning and memory formation. Inconsistent evidence, however, activates brain regions that are associated with error detection and conflict resolution. These experiments demonstrate why we more easily accept information that fits into our worldview and why we tend to reject explanations that seem implausible to us. We set high thresholds (i.e., low α) for explanations that are inconsistent with our beliefs, such that we need a lot of evidence to be convinced of something that does not fit into our worldview. This Bayesian approach again helps us to select the most likely causal mechanism among the many potentially possible mechanisms. The advantage of such a system lies in the avoidance of wrong decisions, given a sufficiently good fit of the worldview with reality. In fact, this fit is the criterion for a worldview's quality (in an evolutionary, not necessarily ethical, sense). If we have to accept correlates that do not fit the worldview despite low α (low type I error), doubts should arise regarding the convergence of our worldview with reality. Eventually it will be supplanted by competing views with fewer such misfits. Therefore, worldviews should improve over time in human history in the sense that we make fewer and fewer unexpected discoveries (see also Popper and Eccles 1977:149). Both ignorance and superstition would diminish.

In humans the worldview influence on α (right side in Figure 1) is dominant in most cases. Risk management regarding the potential costs of type I vs. type II errors (left side in Figure 1) should only govern the setting of α if the worldview offers no clear predictions on a situation. In modern Western societies this results in a high incidence of superstitious ideas on topics like gambling, sports, or questions of personal fate. Here the costs of performing superstitiously motivated behaviors are typically low (such as avoiding the number 13), while the potential benefits of detecting a behavior that leads to success would be large. Even for an atheist it costs little more than pride to assume that praying helps when the airplane is about to crash, but the potential benefit of survival is considerable. These are situations in which our scientific worldview offers no prediction on the individual case.

Medicine is an example that illustrates the improvement of the worldview over time as proposed by our model. We might assume that superstition as well as ignorance (type I and type II errors) concerning matters of disease and healing were considerably more frequent in the past. With the rapid improvements in the field of medicine over the past two centuries much superstition was overcome, although some cultural institutions, such as Christian Science, have not overcome it. In most modern societies health-related superstition is still common in those areas where the worldview of modern medicine still has relatively little to offer (e.g., some forms of cancer or psychosomatic syndromes).


As Scheibe and Sarbin (1965) point out, superstitious ideas might also be acquired by verbal communication rather than by experience. Although this puts some types of superstition (avoiding the number 13, and the like) in a context of cultural transmission, which is additionally influenced by cognitive and social constraints (Boyer 1994), the statistical explanation outlined above remains valid: if α is high (i.e., non-conservative), hearing of something, being taught something, or observing someone else might serve as sufficient experience for the generation of a superstitious belief (see Higgins et al. 1989 for experimental support). Superstitious ideas can be self-maintaining owing to a positive psychological feedback until a better solution for a problem is found (Scheibe and Sarbin 1965).

IS SUPERSTITION ADAPTIVE?

The phenomenon of superstition is intrinsically linked to learning from the observation of coincidence. Any organism capable of this type of learning inevitably faces the risk of being superstitious. Avoiding superstition by setting a very conservative level of α carries the disadvantage that only very reliable relationships can be uncovered. Because the amount of information we can gather is restricted (limited sample size), conservative levels of α result in slow learning. One of the main reasons for the great evolutionary success of the human species was probably our ability to learn fast and to uncover even complicated relationships. Apparently, humans were strongly selected to search for and recognize patterns of regularity. This is suggested by the fact that we search for patterns even in situations where they certainly do not exist (e.g., patterns of good and bad luck in gambling). Superstition itself is not adaptive, but superstition seems an inevitable by-product of the ability to learn, and the latter is what has been favored by natural selection.

Dawkins (1998) suggested that the prevalence of superstition even in modern Western societies is a consequence of high accepted p-levels, which were adjusted to an archaic human environment where population size, and consequently access to experience, was much smaller than today. Limited access to experience on a certain topic might lead to a tendency toward high levels of α and hence to a high frequency of superstition. In principle this is a valid argument, but it is necessarily based on inherited levels of α in humans. Or, to be more precise, Dawkins's scenario requires that the cognitive algorithms dedicated to reasoning about correlation and causation are so inflexible (i.e., they do not respond to changes in information density) that they lead to levels of α that are no longer adaptive in the modern, information-rich environment.

In light of our model, however, it does not seem necessary to invoke an evolutionary lag. The relatively uncritical levels of α employed by most members of modern society are not necessarily "wrong" in comparison with the more conservative habits of scientists. Rather, the relationship of potential costs (of type I vs. type II errors) may differ depending on the situation. The fact that non-scientists tend to believe in causal relationships as soon as an incident occurs twice in a row, whereas scientists usually doubt results based on p > 0.05, is only understandable when the costs of type I vs. type II errors are considered. If you experience sickness shortly after an unusual experience (e.g., a new food item, a vaccination, or walking at high elevation), you would be better off avoiding that situation in the future and thus taking the risk of behaving superstitiously. If, on the other hand, you intend to publish the conclusion of a causal connection based on nothing but this single case in a scientific magazine, you might well ruin your career as a scientist; the costs of superstition would be high. The fact that young scientists are able to learn scientific rigor shows that the human mind is sufficiently flexible to adjust p-levels in a context-specific manner. It seems likely that individuals assess the optimal p-levels on a case-by-case basis in reaction to the perceived risk of type I and type II errors. The optimal learning strategy for a given type of situation can be learned (which is why we can be trained to improve our score on intelligence tests). We propose that people set their values of α close to an optimum (even if this allows for manifold superstitions), acknowledging that not truth but fitness is the ultimate goal of a learning process (see also Anderson 1990).

THE EVOLUTION OF CAUSAL THINKING AND BELIEFS

As indicated above, humans have evolved a unique ability for strong causal thinking (in addition to weak causal thinking, which is still very frequent in humans; see Gigerenzer et al. 1999). Strong causal thinking ranges widely, from understanding physical forces that act on objects with given mechanical properties to understanding the intentions of other individuals, both of which allow us to predict what is going to happen. Strong causal beliefs are probably largely restricted to humans (Kummer 1995; Tomasello 1999; Wolpert 2003), but their origins can be seen in some animals with highly developed brains. Whereas animals do not understand the world (or do so to a much lesser extent) in terms of causes or intentions (Wolpert 2003), humans have evolved a unique drive to explain the world we live in. This drive has enabled humans to understand mechanical relationships between inanimate objects and hence to excel in tool making (Wolpert 2003), as well as to understand the intentions of other individuals, resulting in superior competitive strategies. Causal thinking has enabled us to understand and manipulate our environment. Inventing new hunting strategies like fixing bait to a sharp hook on a fishing rod requires both a concept of mechanics and an understanding of the intentions of one's prey. These examples illustrate the great evolutionary advantage that has been brought about by the human drive to understand the reasons behind what we observe or experience.

We argue that humans have been selected to interpret the surrounding world in terms of causality and intentionality because this has given them superior competitive abilities. Yet seeing the world in terms of cause-and-effect relationships implies that assumptions about causal mechanisms have to be made, as the latter often cannot be observed directly. In order to close gaps in a mechanistic chain of events it is often necessary to create other, thus far unproven constructs (i.e., beliefs) that allow for a causal link between observations. Beliefs "are attempts to explain to ourselves theoretically the world we live in" (Popper and Eccles 1977:158; see also Camus 1942). They help to create consistency on the fringes of a deficient worldview by constructing reasons for otherwise inexplicable observations. Remarkably, not only religion but also science relies to a large extent on such constructs (Popper and Eccles 1977:123). From the viewpoint of a human lacking most of the scientific knowledge of today, the assumption of an Almighty God provides a very powerful explanation for phenomena that otherwise remain inexplicable. Interestingly, gods are typically regarded as acting with intention, and many religious acts aim at influencing these intentions (appeasing the gods). This is consistent with the idea that the development of human social intelligence (the understanding of intentionality) was fundamental to the origin of religious beliefs. Finally, in science as well as in individual learning, proof of the truth of these helpful constructs is furnished subsequently. The manifold attempts in the Christian tradition to prove the existence of God might be seen in this context.

We suggest that natural selection has favored individuals who apply the following learning strategy: (1) search for patterns of regularity, (2) try to understand the underlying causal mechanism, (3) if necessary, make additional assumptions that help to explain the observed patterns mechanistically, and (4) try to test whether these additional assumptions hold in other circumstances. We further suggest that this learning strategy would yield greater abilities to distinguish between real patterns and randomness than other options. Incidences of superstition are expected as a consequence of rule 1, whereas the belief in unproven assumptions (such as the existence of God), combined with the pursuit to prove them, may follow as a consequence of learning rules 2 to 4. It seems that the creation of beliefs is a necessary by-product of strong causal thinking, just as the occurrence of superstition is a necessary by-product of the ability to learn from the observation of coincidence.

CONCLUSIONS

Our model explains the existence of superstition and belief in human behavior from the interplay of the costs of superstition and ignorance on the one hand and a worldview based on individual experience, cultural transmission, and genetically fixed biases on the other. The latter is dynamic; in other words, it is influenced by the results of learning processes. We argue that human decision-making is probably performed optimally with respect to levels of α, and it is not necessary to invoke the existence of an evolutionary lag in order to explain the frequent occurrence of superstition.

The following predictions could possibly be tested: Worldviews should (on average) be supplanted by competing ones that allow for fewer unexpected discoveries. The calibration of α by means of a consistency test with the worldview should therefore lead to both less ignorance (type II errors) and less superstition (type I errors) over the course of human history. The model also predicts that it is still advantageous to take the risk of superstition where the worldview makes no predictions and where the cost-benefit relation is favorable (i.e., low costs for superstition, high benefit if a meaningful pattern is detected). Hence, we would predict that patterns with major relevance for fitness, like cues that indicate the presence of a predator, are learned more quickly and believed in more strongly than less-fitness-relevant patterns.

With the ideas outlined above we hope to demonstrate that superstition and belief can be treated as an interesting subject of science. Moreover, scientific approaches rely on the same mechanisms as are present in everyday life and therefore necessarily also include superstitious conclusions as well as belief in unproven assumptions. An empirical, scientific treatment of the matter might be a further step toward closing the gap between science and the humanities as demanded by E. O. Wilson (1998).

Our arguments were shaped by many discussions with friends and colleagues over the years. We are grateful to James Dale, Larry Fiddick, Konrad Fiedler, Ulmar Grafe, Dean Hashmi, Martin Heisenberg, Klaus Horstmann, Bart Kempenaers, and Wolfgang Wickler for their critical comments on earlier versions of the manuscript.

Jan Beck received his PhD at Würzburg University, Germany, and currently holds an assistant position at the University of Basel. He is interested in patterns and mechanisms of biodiversity, biogeography, and community ecology. His recent work is mainly focused on the macroecology, biodiversity, and behavior of Lepidoptera from the Indo-Australian tropics.

Wolfgang Forstmeier received his PhD at Würzburg University, Germany, and currently holds a position as junior research group leader at the Max Planck Institute for Ornithology, Seewiesen, Germany. His main interest is the evolutionary significance of individuality. Using zebra finches as a model species he studies the evolutionary genetics of individual differences in mating behavior.

REFERENCES

Anderson, J. R.
1990 The Adaptive Character of Thought. Hillsdale, NJ: Lawrence Erlbaum Associates.

Boshier, R.
1973 An Empirical Investigation of the Relationship between Conservatism and Superstition. British Journal of Social and Clinical Psychology 12:262-267.

Boyer, P.
1994 The Naturalness of Religious Ideas: A Cognitive Theory of Religion. Berkeley: University of California Press.

Camus, A.
1942 Le mythe de Sisyphe: essai sur l'absurde. Paris: Gallimard.

Catania, A. C., and D. Cutts
1963 Experimental Control of Superstitious Responding in Humans. Journal of the Experimental Analysis of Behavior 6:203-208.

Cheng, P. W.
1997 From Covariation to Causation: A Causal Power Theory. Psychological Review 104:367-405.

Dawkins, R.
1998 Unweaving the Rainbow. Boston: Mariner Books.

Frazer, J. G.
1922 The Golden Bough: A Study in Magic and Religion. New York: Macmillan.

Fugelsang, J. A., and K. N. Dunbar
2004 A Cognitive Neuroscience Framework for Understanding Causal Reasoning and the Law. Philosophical Transactions of the Royal Society of London B 359:1749-1754.

Garcia, J., and R. A. Koelling
1966 Relation of Cue to Consequence in Avoidance Learning. Psychonomic Science 4:123-124.

Gigerenzer, G., and P. M. Todd
1999 Simple Heuristics That Make Us Smart. New York: Oxford University Press.

Glymour, C., and P. W. Cheng
1998 Causal Mechanism and Probability: A Normative Approach. In Rational Models of Cognition, M. Oaksford and N. Chater, eds. Pp. 295-313. Oxford: Oxford University Press.

Green, D. M., and J. A. Swets
1974 Signal Detection Theory and Psychophysics. Huntington, NY: Robert E. Krieger.

Higgins, S. T., E. K. Morris, and L. M. Johnson
1989 Social Transmission of Superstitious Behavior in Preschool Children. Psychological Record 39:307-323.

Killeen, P. R.
1978 Superstition: A Matter of Bias, Not Detectability. Science 199:88-90.

Kummer, H.
1995 Causal Knowledge in Animals. In Causal Cognition, D. Sperber, D. Premack, and A. J. Premack, eds. Pp. 26-37. Oxford: Clarendon.

Malakoff, D.
1999 Bayes Offers a "New" Way to Make Sense of Numbers. Science 286:1460-1464.

Matsuzawa, T., Y. Hasegawa, S. Gotoh, and K. Wada
1983 One-Trial Long-Lasting Food-Aversion Learning in Wild Japanese Monkeys (Macaca fuscata). Behavioral and Neural Biology 39:155-159.

McGonigle, B., and M. Chalmers
1998 Rationality as Optimized Cognitive Self-regulation. In Rational Models of Cognition, M. Oaksford and N. Chater, eds. Pp. 501-534. Oxford: Oxford University Press.

Mineka, S., and M. Cook
1988 Social Learning and the Acquisition of Snake Fear in Monkeys. In Social Learning: Psychological and Biological Perspectives, T. R. Zentall and B. G. Galef Jr., eds. Pp. 51-73. Hillsdale, NJ: Lawrence Erlbaum Associates.

Morse, W. H., and B. F. Skinner
1957 A Second Type of Superstition in the Pigeon. American Journal of Psychology 70:308-311.

Ninness, H. A. C., and S. K. Ninness
1999 Contingencies of Superstition: Self-generated Rules and Responding during Second-Order Response-Independent Schedules. Psychological Record 49:211-243.

Ono, K.
1987 Superstitious Behavior in Humans. Journal of the Experimental Analysis of Behavior 47:261-271.

Popper, K., and J. Eccles
1977 The Self and Its Brain: An Argument for Interactionism. Berlin: Springer.

Rudski, J. M., M. I. Lischner, and L. M. Albert
1999 Superstitious Rule Generation Is Affected by Probability and Type of Outcome. Psychological Record 49:245-260.

Scheibe, K. E., and T. R. Sarbin
1965 Towards a Theoretical Conceptualization of Superstition. British Journal for the Philosophy of Science 16:143-158.

Skinner, B. F.
1948 "Superstition" in the Pigeon. Journal of Experimental Psychology 38:168-172.

Tomasello, M.
1999 The Cultural Origins of Human Cognition. Cambridge, MA: Harvard University Press.

Valone, T. J.
2006 Are Animals Capable of Bayesian Updating? An Empirical Review. Oikos 112:252-259.

Vyse, S. A.
1997 Believing in Magic: The Psychology of Superstition. New York: Oxford University Press.

Wagner, G. A., and E. K. Morris
1987 "Superstitious" Behavior in Children. Psychological Record 37:471-488.

Wilson, E. O.
1998 Consilience: The Unity of Knowledge. New York: Alfred A. Knopf.

Wolpert, L.
2003 Causal Belief and the Origins of Technology. Philosophical Transactions of the Royal Society of London A 361:1709-1719.
