

Journal of Economic Behavior & Organization 84 (2012) 182–192


Faith in intuition and behavioral biases

Carlos Alós-Ferrer a,∗, Sabine Hügelschäfer b

a Department of Economics, University of Konstanz, Box 150, D-78459 Konstanz, Germany
b Department of Psychology, University of Konstanz, Box 39, D-78459 Konstanz, Germany

Article info

Article history: Received 20 January 2012; Received in revised form 19 July 2012; Accepted 10 August 2012; Available online 20 August 2012

JEL classification: C91, D80, J24

Keywords: Behavioral biases; Bayesian updating; Intuition; Representativeness; Conservatism; Reinforcement

Abstract

We use a 15-item self-report questionnaire known as "Faith in Intuition" to measure reliance on intuitive decision making, and ask whether the latter correlates with behavioral biases involving a failure of Bayesian updating. In a first experiment, we find that higher report scores are associated with an increased use of the representativeness heuristic (overweighting sample information). We find no evidence of increased conservatism (overweighting prior information). The results of a second experiment show that more intuitive decision makers rely more often on the "reinforcement heuristic" where successful decisions are repeated even if correctly updating prior beliefs indicates otherwise. However, this effect depends on the magnitude of incentives.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Does reliance on intuition impair effective decision making? Some people are more intuitive than others,1 and hence the question arises whether reliance on intuition results in worse decisions. As a case in point, consider decisions under uncertainty. When confronted with uncertain outcomes, a rational decision maker will make use of all available information to update prior beliefs, which leads to an appropriate use of Bayes' rule. There are, however, a number of well-documented phenomena leading to systematic violations of Bayes' rule in conditional probability judgments, e.g. the representativeness heuristic (Kahneman and Tversky, 1972; Grether, 1980) and the more general phenomenon of base-rate neglect (Fiedler et al., 2000; Erev et al., 2008), the conjunction fallacy (Tversky and Kahneman, 1983; Zizzo et al., 2000), positive confirmation bias (Jones and Sugden, 2001), the reinforcement heuristic (Charness and Levin, 2005; Achtziger and Alós-Ferrer, 2010), and many others. Such systematic deviations from Bayesian updating lead to suboptimal behavior at the microeconomic level, with severe consequences for e.g. management and personal finance. Whether behavioral biases are selected against at a macroeconomic level remains an open question (see e.g. Camerer, 1987), but it has been argued (De Long et al., 1990; Barberis and Thaler, 2003) that limits to arbitrage prevent financial markets from fully eliminating them in the aggregate.

A number of recent papers have addressed the question of individual heterogeneity in behavioral biases, including several concerning probability judgments. Oechssler et al. (2009) and Hoppe and Kusterer (2011) find that higher test scores in the

∗ Corresponding author. Tel.: +49 7531 88 2340; fax: +49 7531 88 4119. E-mail address: [email protected] (C. Alós-Ferrer).

1 Heterogeneity in reliance on intuition is a well-established fact in psychology. See e.g. Epstein et al. (1996) and Betsch and Kunz (2008).

0167-2681/$ – see front matter © 2012 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.jebo.2012.08.004

three-item Cognitive Reflection Test (CRT; Frederick, 2005), a predictor of cognitive abilities, are correlated with lower incidences of certain biases, including the conjunction fallacy. Toplak et al. (2011) argue that low CRT scores indicate a tendency to behave as a "cognitive miser" (e.g. Tversky and Kahneman, 1974), giving the first response which comes to mind. In other words, such decision makers act on impulse.

Following one's impulses or "what feels right" is often identified with intuitive behavior. In this paper we explore a more general measure of intuitive behavior and consider experimental tests relating this measure to specific behavioral biases associated with failures of Bayesian updating. This measure is a questionnaire-based scale called Faith in Intuition (FI), originally developed by Epstein et al. (1996). It is based on the dual-process literature from psychology (e.g. Strack and Deutsch, 2004; see Evans, 2008 for a review), which also includes examples in economics (e.g. Thaler and Shefrin, 1981). Dual-process models of decision making distinguish an impulsive/experiential system from a reflective/deliberative one. A particular example is the Cognitive-Experiential Self-Theory (Epstein, 1994; Epstein and Pacini, 1999), according to which decision makers are influenced by two "thinking styles" corresponding to the two systems. The first one, called "rational" by Epstein (1994), operates in a conscious, deliberative, analytic, effortful, and slow way, requiring logical justification of beliefs. The second, "experiential" one, captures the idea of intuitive behavior: it is automatic, unconscious, holistic, effortless, and fast, and operates on the basis of beliefs derived from emotional experiences ("experiencing is believing"; Epstein et al., 1996, p. 391).

The Faith in Intuition scale reflects an individual's trust in his or her own intuition, i.e. reliance on the experiential system. It includes items such as "I believe in trusting my hunches", "I am quick to form impressions about people", and "The first idea is often the best one" (see Appendix A for the complete questionnaire). The FI scale is one of the most widely used measures of individual differences in the tendency to rely on intuitive information processing. Since its original introduction, it has undergone several changes (e.g. Epstein and Pacini, 1999; Norris and Epstein, 2011) and various versions have been used in the literature. We rely on a 15-item German version of the FI scale developed by Keller et al. (2000), which has been shown to have good item characteristics, high internal consistency, and high construct validity. Several studies have found that high scores in FI are associated with a reliance on heuristic reasoning (Epstein et al., 1996; Klaczynski et al., 1997; Shiloh et al., 2002). For instance, Danziger et al. (2006) found that a high score in FI was associated with increased use of a simple heuristic, namely using the subjective ease with which certain information comes to mind as a basis for judgment. Toyosawa and Karasawa (2004) found that a high score in FI was associated with a higher likelihood to give nonoptimal responses in the classical conjunction-fallacy "Linda problem" (Tversky and Kahneman, 1983). Recently, Mahoney et al. (2011) discovered a positive relationship between FI and framing effects.

We hypothesized that interindividual differences in the reliance on intuitive decision making could at least partly explain the large heterogeneity in individual behavior observed in studies of probability updating. Specifically, we expected that decision makers with a more experiential thinking style, as measured by FI, would make more decision errors when Bayesian updating conflicts with a well-defined, simple, "intuitive" heuristic. We conducted two experiments. In the first one, we considered a setting where both overweighting prior information (conservatism) and underweighting it following the representativeness heuristic are possible (Grether, 1980, 1992). In the second one, we considered a paradigm where reinforcement through payoffs might conflict with the optimal decisions prescribed by Bayesian updating (Charness and Levin, 2005; Achtziger and Alós-Ferrer, 2010).

2. Experiment I: representativeness and conservatism

A decision maker who deviates from Bayes' rule when updating his or her beliefs might make two kinds of mistakes: overweighting the prior (conservatism) and overweighting new information. If the decision maker systematically overweights new information, i.e. fails to properly account for prior information, he or she is said to exhibit base-rate neglect (Fiedler, 2000; Fiedler et al., 2000; Erev et al., 2008). A classical example of base-rate neglect is the representativeness heuristic (Kahneman and Tversky, 1972; Grether, 1980, 1992), which confounds the probability of an event with its similarity to a population and in which base rates are largely ignored. On the opposite extreme, a decision maker who systematically overweights prior information at the expense of new sample information is said to exhibit conservatism (Edwards, 1982; Erev et al., 1994; El-Gamal and Grether, 1995).

Our first experiment relied on a paradigm related to Grether (1980, 1992) in order to test whether FI correlates with an increased reliance on two heuristics: conservatism and the representativeness heuristic.

2.1. Experimental design

The essence of the experiment was as follows. Participants were informed that N balls would be drawn with replacement from one of two urns, A and B. The two urns were known to contain N black and white balls each, with fixed compositions, different among urns. The urn from which the balls were extracted was randomly determined with a fixed probability of p for urn A. The participant was shown how many black balls, m, had been extracted, and asked to guess which urn had been used. The procedure was repeated a large number of times (300 in our design), varying the prior p across rounds.

We conducted two experimental treatments which varied in the size of the urns, Treatment 1 with N = 6 (as in Grether, 1980, 1992), and Treatment 2 with N = 4. In Treatment 1, urn A contained four black balls and two white ones, while urn B


Table 1
Prescriptions of the different heuristics in Experiment I. The upper table corresponds to Treatment 1 (6-ball design), the lower one to Treatment 2 (4-ball design). k determines the prior for urn A (p = k/N, where N is the number of balls); m is the observed number of black balls. For each (k, m) combination, the upper-left-hand entry gives the posterior odds for urn A and the remaining three entries are the prescriptions for Bayesian updating (upper-right-hand), conservatism (lower-left-hand), and the representativeness heuristic (lower-right-hand). Light- and dark-shadowed entries indicate where Bayesian updating conflicts with the representativeness heuristic and conservatism, respectively.

contained three balls of each color. In Treatment 2, urn A contained three black balls and one white, while urn B contained two balls of each color.2

In each round, first the prior probabilities of the two urns were presented on the computer screen. Urn A was selected with probability k/6, where k varied from 2 to 4 (Treatment 1), respectively with probability k/4, where k varied randomly from 1 to 3 (Treatment 2). The participant was informed of k but not of the urn actually used. The parameter k was generated randomly to produce randomized priors p of 1/3, 1/2, and 2/3 (Treatment 1), respectively 1/4, 1/2, and 3/4 (Treatment 2) for urn A.

Subsequently N balls were randomly drawn by the computer out of the chosen urn (with replacement) and presented sequentially. Participants were then asked to guess which urn had been used by pressing a predetermined key for each urn. No feedback was given about the correctness of the choice.
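The drawing procedure can be sketched in a few lines of Python (an illustrative sketch, not the software used in the experiment; the default arguments encode the Treatment 2 parameters, and the function names are ours):

```python
import random

def play_round(N=4, urn_A=(3, 1), urn_B=(2, 2), k=2):
    """One round of Treatment 2: urn A (3 black, 1 white) is selected with
    prior probability p = k/N, then N balls are drawn with replacement from
    the selected urn and the number m of black balls is shown."""
    composition = urn_A if random.random() < k / N else urn_B
    p_black = composition[0] / N  # e.g. 3/4 for urn A in Treatment 2
    m = sum(random.random() < p_black for _ in range(N))
    return m
```

The participant only observes the prior k/N and the sample m, never the urn itself, and must guess which urn was used.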

Depending on the prior (k) and the number m of extracted black balls (which varied from 0 to N), the participants were endogenously presented with 21 (respectively 15) different possible decision situations in Treatment 1 (respectively Treatment 2). The prescriptions of Bayesian updating, the representativeness heuristic, and conservatism are given in Table 1. Two decision rules are aligned in a (k, m) situation if they prescribe the same option; they are in conflict if they prescribe different ones. Further, it is possible that a decision rule does not make a prescription for a given situation. Situations without a prescription are called neutral for the silent decision rule. That is, our prediction is that conservatism and the representativeness heuristic are only active for certain situations. In the case of conservatism, it should be clear that it only applies if k = 2 or k = 4 (Treatment 1), respectively if k = 1 or k = 3 (Treatment 2); else the prior is 50–50 and delivers no tendency towards either answer. For the representativeness heuristic, we adopt the strict view that a prescription is derived

2 We expected no differences between the treatments, which can be taken as a robustness test. We had planned a later experiment on representativeness in the EEG (Achtziger et al., 2012b). EEG paradigms need to minimize stimulus complexity, and hence we needed to make sure that representativeness effects, usually observed in 6-ball implementations following Grether (1980, 1992), were readily observed in our 4-ball variant.

only if the sample looks like one of the urns. This corresponds to m = 3 and m = 4 in Treatment 1, and to m = 2 and m = 3 in Treatment 2.3

Table 1 also reports the odds in favor of urn A, i.e. the quotient between the probabilities for urn A and B conditional on the observation of m black balls, given that the prior was k/N. Bayesian updating prescribes to choose A or B if these odds are larger or smaller than one, respectively. The paradigm provides an inverse measure of the difficulty of each (k, m) situation, in the form of the odds in favor of the most likely urn after observation of the sample, i.e. the odds for A if they are larger than one and their inverse otherwise. For example, in Treatment 2, the three situations with odds (for A) 1.69 have a comparable difficulty to the three situations with odds 0.56 (since this is equivalent to odds of 1.79 ≈ 1/0.56 for B), while other situations are simpler.

Since we focus on individual differences, we aimed at obtaining panel data with a large number of observations per participant. There were 300 rounds divided in three equal parts, with a break of one minute between parts. After the task had been completed, one trial from each part was selected randomly by the computer. The participant received €10 for every correct decision and €4 for every wrong decision in these three trials.

At the end of the computer experiment, participants filled in a questionnaire which comprised potential control variables and demographic information. Our main interest was in Faith in Intuition (Epstein et al., 1996), which was measured as explained in Appendix A.4

2.2. Procedural details

39 participants (16 male, 23 female), ranging in age from 19 to 30 years (M = 23.5, SD = 2.46), were recruited from the student population at the University of Konstanz, excluding students majoring in economics.5 The experiment took around 90 min. Average earnings were €24.46 per participant (SD = 5.04).

Participants were randomly assigned to one of the two experimental treatments and one of two counterbalance conditions (that is, the colors black and white were reversed for half of the participants). Data collection proceeded individually, i.e. each participant was cited, instructed, and participated in the experiment individually; there were no other participants present. Participants received detailed written instructions explaining the paradigm, and all clarifying questions were answered by the experimenter. Afterwards, the experimenter left the room for the duration of the experiment. A complete set of instructions is available upon request.

2.3. Results

Faith in Intuition varied from 2.39 to 8.11 in the sample, with an average of 5.35. Table 2 presents aggregate error frequencies conditional on every decision situation (k, m). Error frequencies are systematically larger in situations where Bayesian updating conflicts either with the representativeness heuristic or with conservatism. Further, a comparison with Table 1 shows that error rates are higher when the posterior odds are closer to one, i.e. when the decision is more difficult. Note also that the two situations with (k, m) = (3, 3) and (3, 4) in Treatment 1 are of comparable difficulty to those with (k, m) = (4, 2), (4, 3), (2, 4), and (2, 5); however, error rates are lower for the first two situations, where there is no decision conflict, than for the latter (conflict) situations. An analogous observation holds for Treatment 2.

Our experimental data form a strongly balanced panel, with 300 observations per participant. Due to the non-independence of observations within participants, data were analyzed using random-effects probit regressions (Baltagi, 2005) on decisions (error vs. correct decision).

The results are presented in Table 3. There was no significant effect of the number of balls in the two urns (6 or 4) on the likelihood of an error. As a validation of the basic design, the counterbalance dummy also showed no significant effect. Error rates did not decrease over the course of the experiment, as was to be expected since there was no feedback (the "round" variable was transformed to take values in the interval [0, 1]). Gender did not influence error rates.

In order to control for task difficulty, we included the "corrected odds", defined as the odds in favor of the most likely alternative minus one, as a regressor. As expected, the probability of an error decreased with increasing corrected odds (i.e. decisions were more accurate for simpler decisions). Marginal accuracy (measured through the squared corrected odds) decreased slightly in the corrected odds.

3 One could argue that representativeness also determines a choice if a sample "looks more" like an urn, e.g. it could prescribe urn A in Treatment 1 if the sample contains more than 4 black balls. In contrast, we assume that the representativeness heuristic is only triggered in case of an exact match regarding the number of black and white balls. This is consistent with Glöckner and Witteman's (2010) concept of "matching intuition", according to which intuitive answers arise when objects or situations are automatically compared with exemplars, prototypes, or schemes stored in memory (e.g. Dougherty et al., 1999). Matching intuition is highly sensitive to variations in the similarity of crucial features of the task, and should therefore only apply if an exact pattern in the number of black and white balls is recognized.

4 Questionnaires such as Faith in Intuition are typically filled in after any other decision-making task, since, if they were filled in before such tasks, priming effects might arise.

5 In both experiments, we excluded economics students because the economics curriculum at the time in Konstanz included a course on heuristics and biases where paradigms similar to the ones used in the experiments were discussed. One participant out of the original 40 was excluded from data analysis because, according to his statements in the final questionnaire, he simply pressed the keys at random during the task.


Table 2
Error frequencies in Experiment I. Total number of observations in brackets. The upper table corresponds to Treatment 1 (6-ball design), the lower one to Treatment 2 (4-ball design). k determines the prior for urn A (p = k/N, where N is the number of balls); m is the observed number of black balls. Light- and dark-shadowed entries indicate where Bayesian updating conflicts with the representativeness heuristic and conservatism, respectively (see Table 1).

Treatment 1 (6-ball design):

k \ m   0           1            2            3            4            5            6
4       13.3% (15)  26.7% (105)  34.4% (317)  30.4% (519)  2.8% (643)   1.0% (492)   1.3% (151)
3       5.9% (17)   4.5% (89)    1.6% (250)   9.8% (407)   11.8% (416)  3.1% (290)   2.5% (81)
2       0% (32)     2.4% (165)   2.1% (377)   3.5% (634)   21.9% (602)  37.3% (324)  24.3% (74)

Treatment 2 (4-ball design):

k \ m   0           1            2            3            4
3       10.3% (39)  11.8% (187)  23.1% (546)  1.9% (799)   2.3% (562)
2       0% (40)     1.9% (210)   7.3% (440)   10.7% (476)  8.4% (263)
1       0% (103)    0.5% (428)   1.1% (711)   15.0% (634)  38.9% (262)

The first regression model estimates basic behavioral biases. We find significant conservatism in the data, as captured by the dummy indicating a conflict between Bayesian updating and conservatism (i.e. situations in which correct decisions are in opposition to base rates); the corresponding coefficient is significant and positive, i.e. errors were more likely in these situations. Likewise, the dummy representing conflict with the representativeness heuristic was significant and positive as predicted. This means the likelihood of an error was also significantly higher when the heuristic was opposed to Bayesian updating, compared to neutral situations. We also included dummies for alignment of Bayesian updating with

Table 3
Random-effects probit regressions on decision errors (0 = correct choice, 1 = error) in Experiment I. Number of observations = 11,700 (39 × 300). Numbers in brackets are standard errors. (*) = significant at the 10% level. (**) = significant at the 5% level. (***) = significant at the 1% level.

Variable                                       Probit 1           Probit 2
Treatment (1 = 6 Balls)                        0.134 (0.165)      0.151 (0.168)
Counterbalance (1 = Majority of Black Balls)   −0.203 (0.165)     −0.269 (0.175)
Round (/300)                                   0.085 (0.063)      0.079 (0.064)
Gender (1 = Male)                              −0.130 (0.168)     −0.143 (0.168)
Corrected Odds                                 −0.138*** (0.019)  −0.148*** (0.019)
Corrected Odds (squared)                       0.002*** (0.000)   0.003*** (0.000)
Conflict with Representativeness (1 = Yes)     0.828*** (0.090)   −0.178 (0.209)
Alignment with Representativeness (1 = Yes)    0.013 (0.083)      −0.003 (0.084)
Conflict with Conservatism (1 = Yes)           1.133*** (0.090)   3.460*** (0.234)
Alignment with Conservatism (1 = Yes)          0.087 (0.123)      0.116 (0.124)
FI                                             –                  0.093 (0.077)
FI × Conflict with Representativeness          –                  0.180*** (0.036)
FI × Conflict with Conservatism                –                  −0.449*** (0.042)
Constant                                       −1.553*** (0.189)  −1.986*** (0.446)
Log likelihood                                 −3017.449          −2904.849
Wald test                                      1061.36***         1239.79***

representativeness (i.e. situations in which the representativeness heuristic leads to the correct choice) or conservatism, but they did not affect the probability of an error.6

The second behavioral model incorporates Faith in Intuition. The basic prediction is that more intuitive decision makers will commit more errors when a heuristic is available but conflicts with Bayesian updating. Hence, we include the interactions of Faith in Intuition with the two conflict dummies, which are both highly significant. FI is included as a separate regressor but it is not significant.

The effects are as follows. First, when controlling for Faith in Intuition, the dummy for conflict with the representativeness heuristic is not significant anymore, but the interaction term is significantly positive. This indicates that more intuitive decision makers (as measured by FI) commit more errors which can be attributed to the behavioral bias encompassed in the representativeness heuristic. Second, concerning conservatism, the conflict dummy remains significant when controlling for Faith in Intuition, but, surprisingly, the interaction term is significant and negative, implying that more intuitive decision makers commit fewer conservative errors.

2.4. Discussion

Since FI corresponds to heuristic processing as described by Kahneman et al. (1982), we expected that participants with high scores in Faith in Intuition would be more prone to errors in situations where either the representativeness heuristic or conservatism was opposed to Bayesian updating. The regression analyses confirmed that intuitive decision makers relied more strongly on the representativeness heuristic, but, unexpectedly, a high level of FI reduced the likelihood of errors resulting from conservatism.

The last observation, however, is in agreement with the views of Keller et al. (2000), who argue that Faith in Intuition should not be equated with heuristic processing, since intuitive people seem to be susceptible to particular heuristic cues and not to others. A possible explanation for why more intuitive people rely on the representativeness heuristic (where sample information is overweighted) but not on conservatism (where the prior is overweighted) might be that more intuitive participants are particularly sensitive towards information which they receive immediately before making a decision. This would be consistent with a dual-process approach, since the experiential system is oriented toward immediate action (Epstein et al., 1996).

Our results may seem inconsistent with the findings reported by Hoppe and Kusterer (2011). These authors found that participants' score on the Cognitive Reflection Test (Frederick, 2005) was negatively related to the use of the representativeness heuristic and conservatism. When comparing these results one has to keep in mind that the CRT is a three-item test explicitly defined as a measure of cognitive ability, namely the ability to override a prepotent but incorrect response and to engage in further reflection. In contrast, FI is a self-report scale which reflects engagement and trust in one's own intuition, thereby measuring cognitive style as opposed to ability. Moreover, it has been shown (Epstein et al., 1996; Keller et al., 2000) that FI is independent of the engagement in cognitive activity as measured by a different scale known as "Need for Cognition" (Cacioppo and Petty, 1982). Therefore, the CRT cannot be taken to be the exact opposite of FI.7

3. Experiment II: reinforcement

Reinforcement is a basic feature of human learning, where the propensity to adopt an action increases when it leads to a success and decreases when it leads to a failure (Thorndike, 1911; Sutton and Barto, 1998). Reinforcement learning has been extensively studied in psychology and has also received a great deal of attention in microeconomics and game theory. For instance, Roth and Erev (1995) and Erev and Roth (1998) used reinforcement learning to study experimental data from game-theoretic settings.

Indeed, oftentimes, repeating choices which have worked well in the past is a good idea. However, this is not universally so. It is a simple matter to conceive of situations where the information conveyed by a successful decision should actually lead to changing the decision afterwards. Our second experiment used a paradigm encompassing this possible conflict which was introduced by Charness and Levin (2005) and was further studied in Achtziger and Alós-Ferrer (2010).

6 The conflict dummies take the value 1 for the situations marked as such in Table 1. The dummy for alignment with representativeness takes the value 1 for (k, m) = (4, 4), (3, 3), (3, 4), and (2, 3) in Treatment 1, and (k, m) = (3, 3), (2, 2), (2, 3), and (1, 2) in Treatment 2. To avoid interactions, we defined the dummy for alignment with conservatism excluding situations already captured by other dummies, i.e. the dummy takes the value 1 when Bayesian updating and conservatism were aligned and representativeness was neutral. This corresponds to (k, m) = (4, 5), (4, 6), (2, 0), (2, 1), and (2, 2) for Treatment 1, and (k, m) = (3, 4), (1, 0), and (1, 1) for Treatment 2.

7 Additionally, the design and implementation of the study by Hoppe and Kusterer (2011) differed considerably from the present study. In Hoppe and Kusterer (2011), the conservatism problem was presented to the participants as a word and number problem. In the present study, the task was computer-implemented and only the priors were presented in words and numbers, whereas the sample was graphically presented as black and white balls. It is conceivable that the sample information was less salient in Hoppe and Kusterer (2011) compared to our study. As those authors argue, decision makers with low CRT scores seem to focus heavily on salient information. The results of both studies might point to a relation between conservatism and the salience of sample information.


Table 4
Urn compositions depending on the state of the world in Experiment II. Black and white correspond to blue and green, respectively; participants were paid for blue balls.

State (Prob)   Left Urn        Right Urn

Up (1/2)       ● ● ● ● ○ ○     ● ● ● ● ● ●

Down (1/2)     ● ● ○ ○ ○ ○     ○ ○ ○ ○ ○ ○

3.1. Experimental design

There were two urns, the left urn and the right urn, and both contained N = 6 balls, which could be blue or green. The urns were presented on the computer screen, with masked colors for the balls. The experimental task consisted of choosing which urn a single ball should randomly be extracted from (with replacement) by pressing one of two keys on the keyboard. Participants were only paid for drawing balls of a pre-specified color, say blue. After the result of this first draw was revealed, they were asked to choose the left or right urn a second time, a new ball was randomly extracted, and they were paid again if the ball was of the appropriate color.

The distributions of balls in the two urns varied depending on the state of the world (Up or Down), which was not revealed to the participant. The (known) prior probability of both states was 1/2 and the state of the world was constant across the two draws. This means that while the first draw was uninformed, the second one should be based on the posterior probability updated from the prior after observing the color of the first ball. For the second draw, a fully rational decision maker will choose the urn with the highest associated expected payoff, given the posterior probability of the state of the world as updated through Bayes' rule.

Charness and Levin (2005) documented a systematic deviation from Bayes' rule which was later referred to as the reinforcement heuristic by Achtziger and Alós-Ferrer (2010). A decision maker following this heuristic will base his or her behavior on a simple "win-stay, lose-shift" rule of thumb: stay with the same urn as in the first round if the choice was successful (blue ball), switch otherwise.
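The rule of thumb can be written down in a few lines. This is our own minimal sketch, not code from the study; the urn labels and the function name `reinforcement_choice` are illustrative only:

```python
# Sketch of the "win-stay, lose-shift" rule described in the text
# (our own minimal encoding; urn labels are arbitrary strings).
def reinforcement_choice(previous_urn, success):
    """Repeat a successful choice, switch after a failure."""
    other = {"left": "right", "right": "left"}
    return previous_urn if success else other[previous_urn]
```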

The composition of the urns is given in Table 4. The right urn always contained balls of one color only, different for each state of the world. Hence, choosing this urn revealed the state of the world. After starting with the right urn, the second decision was straightforward (stay with the right urn after drawing a blue ball, switch to the left urn otherwise; i.e. Bayesian updating and the reinforcement heuristic were aligned). In contrast, the left urn contained balls of both colors in both states, therefore feedback did not fully reveal the present state of the world but could be used to update prior beliefs about this state (prior probabilities). By design, the composition of the urns was such that, if this urn was chosen first, the reinforcement heuristic and Bayesian updating always delivered opposite decisions (decision conflict). Last, a simple computation shows that a Bayesian decision maker would maximize the joint expected payoff for the two draws by starting with the right urn, hence learning the state of the world for sure.
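The decision conflict can be checked directly from the urn compositions in Table 4. The following sketch is our own illustration (the helper names `posterior_up` and `p_blue_second` are invented for this example); it confirms that after a first draw from the left urn, Bayes' rule always prescribes the opposite of "win-stay, lose-shift":

```python
# Sketch (not from the paper): verifying the decision conflict implied by
# the Table 4 urn compositions. Numbers of blue balls (out of N = 6) per
# urn and state are taken from the design; function names are our own.
from fractions import Fraction

N = 6
blue = {  # blue (paid) balls per urn and state
    ("left", "Up"): 4, ("left", "Down"): 2,
    ("right", "Up"): 6, ("right", "Down"): 0,
}
prior_up = Fraction(1, 2)

def posterior_up(first_urn, success):
    """P(Up | color of first draw from first_urn), via Bayes' rule."""
    def likelihood(state):
        p_blue = Fraction(blue[(first_urn, state)], N)
        return p_blue if success else 1 - p_blue
    num = prior_up * likelihood("Up")
    den = num + (1 - prior_up) * likelihood("Down")
    return num / den

def p_blue_second(urn, p_up):
    """Probability of a blue (paid) ball on the second draw from urn."""
    return p_up * Fraction(blue[(urn, "Up")], N) \
        + (1 - p_up) * Fraction(blue[(urn, "Down")], N)

# After a win on the left urn, Bayes says switch to the right urn...
p_up = posterior_up("left", success=True)   # posterior P(Up) = 2/3
assert p_blue_second("right", p_up) > p_blue_second("left", p_up)
# ...and after a loss on the left urn, Bayes says stay with the left urn.
p_up = posterior_up("left", success=False)  # posterior P(Up) = 1/3
assert p_blue_second("left", p_up) > p_blue_second("right", p_up)
```

In both cases the Bayesian prescription is the reverse of the reinforcement heuristic, which is exactly the conflict the design exploits.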

Each participant completed 60 rounds of the decision task. In order to guarantee a reasonable number of observations following left-urn choices (since a Bayesian decision maker would always start with a draw from the right urn if allowed to do so), and following Charness and Levin (2005) and Achtziger and Alós-Ferrer (2010), during the first 30 rounds the first draws were forced and alternated, i.e. they were required to be made from the left urn in every odd-numbered trial and from the right urn in every even-numbered trial. For the remaining 30 rounds, there were no constraints on first draws.

Bayesian updating tasks often show little or no effect of increased financial incentives, presumably due to ceiling effects (as mentioned e.g. by Camerer and Hogarth, 1999). In order to test for the effect of incentives in our framework, and possibly their interaction with an intuitive orientation in decision making, we included two experimental treatments which varied the monetary incentives. In Treatment 1, participants earned 9 cents for every successful draw. In Treatment 2, this payment was increased to 18 cents.

Following the computer experiment, participants filled in a questionnaire which comprised the Faith in Intuition questionnaire, demographic information, and other potential control variables.

3.2. Procedural details

Participants were 46 students (24 male, 22 female) from the University of Konstanz (excluding students majoring in economics), ranging in age from 19 to 29 years (M = 22.1, SD = 2.37).8 The experiment lasted around 90 minutes and average earnings were €8.56 (SD = 3.08), in addition to a show-up fee of €5.

The experiment was part of a series of EEG measurements whose neuroscientific analysis is reported in Achtziger et al. (2012a). Faith in Intuition was included in the final questionnaire in order to conduct the research which is reported here. Participants were randomly assigned to one of the two experimental treatments (magnitude of incentives) and one of two counterbalance conditions (that is, for half of the participants, the colors of blue and green balls were exchanged and

8 Two participants out of the original 48 had to be excluded from data analysis. One did not follow the instructions and thus did not properly accomplish the task. The other exhibited an extremely poor understanding of the rules of the decision task (as assessed by the experimenter) and reported having a very low desire to perform well.


Table 5
Error frequencies in Experiment II. Total number of observations in brackets. First-draw errors are shown as percentage of the number of free choices.

                          2nd Draw: Conflict                 2nd Draw: Alignment
Incentives  1st Draw      After Left-Win  After Left-Lose    After Right-Win  After Right-Lose

High        34.8% (660)   65.1% (275)     47.4% (285)        3.9% (381)       12.4% (379)

Low         26.9% (720)   64.6% (280)     56.6% (274)        5.3% (435)       10.2% (451)

Table 6
Random-effects probit regressions on second-draw errors (0 = correct choice, 1 = error) in Experiment II. Number of observations = 2760 (46 × 60). Numbers in brackets are standard errors. (*) = significant at the 10% level. (**) = significant at the 5% level. (***) = significant at the 1% level.

Variable                                       Probit 1            Probit 2

Treatment (1 = High Incentives)                −0.088 (0.287)      −0.016 (1.611)

Counterbalance (1 = Payment for Blue Balls)    0.327 (0.274)       0.284 (0.269)

Round (/60)                                    −0.595** (0.230)    −0.606*** (0.231)

Gender (1 = Male)                              −0.549** (0.274)    −0.562** (0.281)

Forced (1 = Yes)                               −0.142 (0.134)      −0.154 (0.135)

Result of 1st Draw (1 = Success)               0.008 (0.067)       0.004 (0.067)

Conflict (1 = Yes)                             1.881*** (0.099)    0.412 (0.538)

Conflict * High Incentives                     0.016 (0.144)       1.939** (0.846)

FI                                                                 0.127 (0.172)

FI * High Incentives                                               −0.023 (0.274)

FI * Conflict                                                      0.256*** (0.093)

FI * Conflict * High Incentives                                    −0.335** (0.143)

Constant                                       −1.262*** (0.322)   −1.899* (1.006)

Log likelihood                                 −981.021            −975.497
Wald test                                      701.99***           708.26***

successful draws were those resulting in a green ball). As in the previous experiment, data collection proceeded individually. Instructions were as in Achtziger and Alós-Ferrer (2010); a complete set of instructions is available upon request.

3.3. Results

Faith in Intuition varied from 3.57 to 8.15 in the sample, with an average of 5.69. Table 5 presents aggregate error frequencies in the experiment. As in Charness and Levin (2005) and Achtziger and Alós-Ferrer (2010), we observe that errors in case of conflict (after a first draw from left) are far more frequent than errors in case of alignment (after a first draw from right). First-draw errors (starting with the left urn when given the choice) are not infrequent, but their magnitude is comparable to those found in Charness and Levin (2005) and Achtziger and Alós-Ferrer (2010).

The experimental data form a strongly balanced panel, with 60 second-draw observations per participant. Table 6 presents the results of random-effects probit regressions on second-draw decisions (error vs. correct decision).

The first regression model estimates behavioral biases without controlling for Faith in Intuition. As expected (see also Achtziger and Alós-Ferrer, 2010), the existence of a decision conflict between Bayesian updating and the reinforcement heuristic (i.e. a first draw from the left urn) made the occurrence of an error much more likely. The counterbalance dummy was not significant, which can be regarded as a validation of the basic design. Neither the dummy variable for a forced vs. a free first draw, nor the dummy capturing success vs. failure in the first draw were significant. Round number had a significant negative effect on the probability of an error, meaning that errors became less likely over the course of the experiment (the "round" variable was transformed to take values in the interval [0, 1]). This is natural in this experiment, since, by design, every single decision results in win/lose feedback. Concerning personal characteristics, we found a gender effect implying that errors were more likely for female participants.


The manipulation of incentives (as captured by the treatment dummy) did not significantly affect the likelihood of an error's occurrence. The interaction of the treatment dummy with conflict situations was also not significant, i.e. errors were more likely in case of conflict regardless of whether incentives were high or low (we confirmed this via postestimation).

In order to test for the implications of Faith in Intuition, our second regression model incorporated this variable and the interactions with the relevant variables. FI in itself did not have a significant effect.

As expected, we found a significantly positive effect of the interaction term for FI and conflict. That is, a high level of FI made errors more likely in situations in which Bayesian updating and the reinforcement heuristic were opposed. Surprisingly, however, we found a significant and negative effect of the three-way interaction between FI, conflict, and high incentives. A postestimation test shows that the total effect of the interaction term for FI and conflict for the high-incentive treatment (i.e. the sum of the interaction term and the three-way interaction) is not significantly different from zero. In other words, higher scores in FI led to more errors in conflict situations in the low-incentive case but not in the high-incentive case.9

3.4. Discussion

Reinforcement is probably one elementary component of what is usually termed "intuitive behavior". What can be more intuitive than repeating successful decisions? Unfortunately, this intuition is sometimes wrong. Decision makers with high scores in FI are more likely to overlook this possibility, hence falling prey to the reinforcement heuristic.

However, it is interesting to observe that this effect depends on the magnitude of incentives. The effect is as expected for our low-incentives treatment, but not for the high-incentives one. One might be tempted to conclude that, in the presence of high incentives, higher scores in FI do not result in a higher reliance on the reinforcement heuristic in case of conflict. However, this would imply lower error rates in the case of high incentives, while in our experiment error rates in both treatments are comparable (see Table 5; a t-test on individual error rates in case of conflict does not yield significant differences). At this point, we can only hypothesize that several different effects might play a role. For instance, it might well be that higher incentives induce many decision makers to control their impulsive reactions more, counteracting intuitive behavior. On the other hand, in the high-incentives case feedback is based on larger monetary payoffs, and hence the cue on which the reinforcement heuristic acts is stronger. This might produce an activation of the heuristic for lower scores of FI in the high-incentives case than in the low-incentives case. Of course, these arguments remain speculative at this point and should be further explored in future studies.

4. Conclusion

In two separate experiments with a large number of decisions per participant, we found evidence of a relationship between a measure of intuitive decision making and economically relevant, well-known behavioral biases. Decision makers with high scores in the Faith in Intuition scale are more prone to follow the representativeness heuristic (hence relying excessively on sample information and ignoring base rates) and to rely on reinforcement-based rules of thumb (win-stay, lose-shift) even when rational behavior would prescribe otherwise (although the latter effect interacts with the magnitude of incentives).

We found no evidence that more intuitive decision makers follow a conservatism heuristic (ignoring new information). This is interesting since studies using the Cognitive Reflection Test find an increased reliance on this heuristic with lower CRT scores. However, one needs to remember that Faith in Intuition measures a tendency to favor a certain "thinking style", while the CRT aims at measuring cognitive abilities.

Taken together, studies such as ours, Oechssler et al. (2009), and Hoppe and Kusterer (2011) pave the way for a practical research program analyzing the influence of individual differences in behavioral biases. Both the Cognitive Reflection Test and Faith in Intuition are quickly administered and easily computed. The former has the advantage of being extremely short, and the disadvantage that it is not possible to reliably administer it twice to the same participant and that it cannot be used if it becomes common knowledge in the subject pool. Faith in Intuition has the disadvantage of being somewhat more involved (a 15-item questionnaire), but the advantage of being stable in the face of repeated measurement. We are not advocating one over the other. Rather, we are advocating incorporating several standardized measures of individual heterogeneity in the behavioral economist's toolbox. A richer toolbox will allow for a better and swifter understanding of the extent and economic relevance of the various behavioral biases exhibited by human decision makers.

Acknowledgements

We thank Alexander Jaudas for excellent research assistance and programming the experiments, and Anja Achtziger, Georg Granic, Johannes Kern, Fei Shi, and Alexander K. Wagner for helpful comments. Financial support by the Center for Psychoeconomics at the University of Konstanz is gratefully acknowledged.

9 When controlling for FI, the conflict dummy itself was significant for high incentives but not for low incentives. The latter lack of significance should not be interpreted, since in a sense it compares error rates in case of conflict with those in case of alignment for a hypothetical participant with a zero score in FI, but FI in the sample took values from 3.57 to 8.15. The dummy is positive and highly significant if one e.g. rescales FI to take values from −5 to 5.


Appendix A. The Faith in Intuition Questionnaire

We used the FI questionnaire as presented by Keller et al. (2000). Questions 3, 5, 6, 8–11, 13, and 15 are translations of questions in Epstein et al. (1996); we present here the original English version. The remaining questions were developed by Keller et al. (2000) and we give here our own translations into English.

Participants were asked to score their responses to these questions by placing a mark on an analog scale (length: 10 cm), continuously ranging from zero ("completely false") to ten ("completely true"). To create individual FI scores, the participants' responses across the items were measured, added up, and divided by 15 to produce a total score ranging from zero to ten. The average FI score in our data was 5.35 (SD = 1.19) in Experiment I, respectively 5.69 (SD = 1.11) in Experiment II. Internal consistency as measured by Cronbach's alpha was 0.78, respectively 0.74.

1 When I need to form an opinion about an issue, I completely rely on my intuition.
2 For most decisions it is reasonable to rely on one's hunches.
3 I am a very intuitive person.
4 When it comes to people, I can trust my first impressions.
5 I trust my initial feelings about people.
6 I believe in trusting my hunches.
7 The first idea is often the best one.
8 When it comes to trusting people, I usually rely on my gut feelings.
9 I can usually feel when a person is right or wrong even if I can't explain how I know.
10 My initial impressions of people are almost always right.
11 I am quick to form impressions about people.
12 When it comes to buying decisions, I often follow my gut feelings.
13 I can typically sense right away when a person is lying.
14 If I get lost while driving or cycling, I typically decide spontaneously which direction to take.
15 I believe I can judge character pretty well from a person's appearance.
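The scoring procedure described above (item mean on a 0–10 scale) and the Cronbach's alpha reliability statistic are straightforward to reproduce. The following is our own sketch using fabricated responses; it does not reproduce the paper's data, and all names are illustrative:

```python
# Sketch (our own illustration): FI scoring and Cronbach's alpha for a
# fabricated response matrix (participants x 15 items, each item 0-10
# as on the paper's analog scale).
import random

random.seed(1)
n_items = 15
responses = [[random.uniform(0, 10) for _ in range(n_items)]
             for _ in range(40)]  # 40 fabricated participants

def fi_score(items):
    """Individual FI score: mean of the 15 item responses (range 0-10)."""
    return sum(items) / len(items)

def variance(xs):
    """Sample variance with the (n - 1) denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(data[0])
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [fi_score(r) for r in responses]
alpha = cronbach_alpha(responses)
```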

References

Achtziger, A., Alós-Ferrer, C., 2010. Fast or rational? A response-times study of Bayesian updating. University of Konstanz, Mimeo.
Achtziger, A., Alós-Ferrer, C., Hügelschäfer, S., Steinhauser, M., 2012a. Reinforcement vs. rationality: evidence from electrocortical brain activity. University of Konstanz, Mimeo.
Achtziger, A., Alós-Ferrer, C., Hügelschäfer, S., Steinhauser, M., 2012b. The neural basis of belief updating and rational decision making. Social Cognitive and Affective Neuroscience, forthcoming.
Baltagi, B., 2005. Econometric Analysis of Panel Data, 3rd ed. Wiley, Chichester, UK.
Barberis, N., Thaler, R., 2003. A survey of behavioral finance. In: Constantinides, H., Stulz, R. (Eds.), Handbook of the Economics of Finance. Elsevier, Amsterdam, The Netherlands, pp. 1051–1121.
Betsch, C., Kunz, J.J., 2008. Individual strategy preferences and decisional fit. Journal of Behavioral Decision Making 21, 532–555.
Cacioppo, J.T., Petty, R.E., 1982. The need for cognition. Journal of Personality and Social Psychology 42, 116–131.
Camerer, C.F., 1987. Do biases in probability judgment matter in markets? Experimental evidence. American Economic Review 77, 981–997.
Camerer, C.F., Hogarth, R.M., 1999. The effects of financial incentives in experiments: a review and capital-labor-production framework. Journal of Risk and Uncertainty 19, 7–42.
Charness, G., Levin, D., 2005. When optimal choices feel wrong: a laboratory study of Bayesian updating, complexity, and affect. American Economic Review 95, 1300–1309.
Danziger, S., Moran, S., Rafaely, V., 2006. The influence of ease of retrieval on judgment as a function of attention to subjective experience. Journal of Consumer Psychology 16, 191–195.
De Long, J., Shleifer, A., Summers, L., Waldman, R., 1990. Noise trader risk in financial markets. Journal of Political Economy 98, 703–738.
Dougherty, M.R.P., Gettys, C.F., Ogden, E.E., 1999. MINERVA-DM: a memory processes model for judgments of likelihood. Psychological Review 106, 180–209.
Edwards, W., 1982. Conservatism in human information processing. In: Kleinmuntz, B. (Ed.), Formal Representation of Human Judgment. Wiley, New York, NY, pp. 17–52.
El-Gamal, M.A., Grether, D.M., 1995. Are people Bayesian? Uncovering behavioral strategies. Journal of the American Statistical Association 90, 1137–1145.
Epstein, S., 1994. Integration of the cognitive and the psychodynamic unconscious. American Psychologist 49, 709–724.
Epstein, S., Pacini, R., 1999. Some basic issues regarding dual-process theories from the perspective of cognitive-experiential theory. In: Chaiken, S., Trope, Y. (Eds.), Dual-Process Theories in Social Psychology. Guilford Publications, New York–London, pp. 462–482.
Epstein, S., Pacini, R., Denes-Raj, V., Heier, H., 1996. Individual differences in intuitive-experiential and analytical-rational thinking styles. Journal of Personality and Social Psychology 71, 390–405.
Erev, I., Roth, A.E., 1998. Predicting how people play games: reinforcement learning in experimental games with unique, mixed strategy equilibria. American Economic Review 88, 848–881.
Erev, I., Shimonowitch, D., Schurr, A., Hertwig, R., 2008. Base rates: how to make the intuitive mind appreciate or neglect them. In: Plessner, H., Betsch, C., Betsch, T. (Eds.), Intuition in Judgment and Decision Making. Lawrence Erlbaum, Mahwah, NJ, pp. 135–148.
Erev, I., Wallsten, T.S., Budescu, D.V., 1994. Simultaneous over- and underconfidence: the role of error in judgment processes. Psychological Review 101, 519–527.
Evans, J.S.B.T., 2008. Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology 59, 255–278.
Fiedler, K., 2000. Beware of samples! A cognitive-ecological sampling approach to judgment biases. Psychological Review 107, 659–676.
Fiedler, K., Brinkmann, B., Betsch, T., Wild, B., 2000. A sampling approach to biases in conditional probability judgments: beyond base rate neglect and statistical format. Journal of Experimental Psychology: General 129, 399–418.
Frederick, S., 2005. Cognitive reflection and decision making. Journal of Economic Perspectives 19, 25–42.
Glöckner, A., Witteman, C., 2010. Beyond dual-process models: a categorisation of processes underlying intuitive judgement and decision making. Thinking and Reasoning 16, 1–25.
Grether, D.M., 1980. Bayes rule as a descriptive model: the representativeness heuristic. Quarterly Journal of Economics 95, 537–557.


Grether, D.M., 1992. Testing Bayes rule and the representativeness heuristic: some experimental evidence. Journal of Economic Behavior and Organization 17, 31–57.
Hoppe, E.I., Kusterer, D.J., 2011. Behavioral biases and cognitive reflection. Economics Letters 110, 97–100.
Jones, M., Sugden, R., 2001. Positive confirmation bias in the acquisition of information. Theory and Decision 50, 59–99.
Kahneman, D., Slovic, P., Tversky, A., 1982. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press, UK.
Kahneman, D., Tversky, A., 1972. Subjective probability: a judgment of representativeness. Cognitive Psychology 3, 430–454.
Keller, J., Bohner, G., Erb, H.P., 2000. Intuitive und heuristische Urteilsbildung – verschiedene Prozesse? Präsentation einer deutschen Fassung des 'Rational-Experiential Inventory' sowie neuer Selbstberichtskalen zur Heuristiknutzung. Zeitschrift für Sozialpsychologie 31, 87–101.
Klaczynski, P.A., Gordon, D.H., Fauth, J., 1997. Goal-oriented critical reasoning and individual differences in critical reasoning biases. Journal of Educational Psychology 89, 470–485.
Mahoney, K.T., Buboltz, W., Levin, I.P., Doverspike, D., Svyantek, D.J., 2011. Individual differences in a within-subjects risky-choice framing study. Personality and Individual Differences 51, 248–257.
Norris, P., Epstein, S., 2011. An experiential thinking style: its facets and relations with objective and subjective criterion measures. Journal of Personality 79, 1043–1079.
Oechssler, J., Roider, A., Schmitz, P.W., 2009. Cognitive abilities and behavioral biases. Journal of Economic Behavior and Organization 72, 147–152.
Roth, A.E., Erev, I., 1995. Learning in extensive-form games: experimental data and simple dynamic models in the intermediate term. Games and Economic Behavior 8, 164–212.
Shiloh, S., Salton, E., Sharabi, D., 2002. Individual differences in rational and intuitive thinking styles as predictors of heuristic responses and framing effects. Personality and Individual Differences 32, 415–429.
Strack, F., Deutsch, R., 2004. Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review 8, 220–247.
Sutton, R.S., Barto, A.G., 1998. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA.
Thaler, R.H., Shefrin, H.M., 1981. An economic theory of self-control. Journal of Political Economy 89, 392–406.
Thorndike, E.L., 1911. Animal Intelligence: Experimental Studies. MacMillan, NY (see also Hafner Publishing Co., 1970).
Toplak, M.E., West, R.F., Stanovich, K.E., 2011. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory and Cognition 39, 1275–1289.
Toyosawa, J., Karasawa, K., 2004. Individual differences on judgment using the ratio-bias and the Linda problem: adopting CEST and Japanese version of REI. The Japanese Journal of Social Psychology 20, 85–92.
Tversky, A., Kahneman, D., 1974. Judgment under uncertainty: heuristics and biases. Science 185, 1124–1131.
Tversky, A., Kahneman, D., 1983. Extensional versus intuitive reasoning: the conjunction fallacy in probability judgment. Psychological Review 90, 293–315.
Zizzo, D.J., Stolarz-Fantino, S., Wen, J., Fantino, E., 2000. A violation of the monotonicity axiom: experimental evidence on the conjunction fallacy. Journal of Economic Behavior and Organization 41, 263–276.