An Offer You Can’t Refuse? Incentives Change How We Think

JOB MARKET PAPER

For the most recent version, see

http://web.stanford.edu/~sambuehl/

Sandro Ambuehl

Stanford University∗

November 28, 2015

Abstract

Around the world there are laws that severely restrict incentives for many transactions, such as living kidney donation, even though altruistic participation is applauded. Proponents of such legislation fear that undue inducements would be coercive; opponents maintain that it merely prevents mutually beneficial transactions. Despite the substantial economic consequences of such laws, empirical evidence on the proponents’ argument is scarce. I present a simple model of costly information acquisition in which incentives skew information demand and thus expectations about the consequences of participation. In a laboratory experiment, I test whether monetary incentives can alter subjects’ expectations about a highly visceral aversive experience (eating whole insects). Indeed, higher incentives make subjects more willing to participate in this experience at any price. A second experiment explicitly shows in a more stylized setting that incentives cause subjects to perceive the same information differently. They make subjects systematically more optimistic about the consequences of the transaction in a way that is inconsistent with Bayesian rationality. Broadly, I show that important concerns by proponents of the current legislation can be understood using the toolkit of economics, and thus can be included in cost-benefit analysis. My work helps bridge a gap between economists on the one hand, and policy makers and ethicists on the other.

Keywords: Incentives, Repugnant Transactions, Information Acquisition, Inattention, Experiment

JEL Codes: D03, D04, D84

∗Department of Economics, 579 Serra Mall, Stanford, CA, 94305, [email protected]. I am deeply indebted to my advisors Muriel Niederle, B. Douglas Bernheim, and Alvin E. Roth. I am extremely grateful to P.J. Healy and to Yan Chen for letting me use their experimental economics laboratories. This work has been approved by the Stanford IRB in protocols 29615 (online experiments) and 34001 (laboratory experiments). Funding from the Stanford Department of Economics is gratefully acknowledged.


[B]enefits ... raise ... the danger that ... prospective participants enroll ... when it might be against their better judgment and when otherwise they would not do so.

- National Bioethics Advisory Commission (2001)

Russ Roberts: “It’s an interesting issue ... that somehow, when money gets involved, the whole transaction becomes tainted.” Richard Epstein: “I agree ... a lot of this misapprehension comes from the fact that people don’t understand the way markets work.”

- EconTalk with guest Richard Epstein (2006)

1 Introduction

Around the world, there are laws that severely limit material incentives for many transactions, most notably organ donation, surrogate motherhood, human egg donation, and participation in medical trials. The goal is not to discourage these activities per se; on the contrary, altruistic participation is often applauded. These laws have led to intense public discourse (Open Letter To President Obama (2014), Vatican Radio (2014)). Opponents maintain that they prevent voluntary contracts and thus impede efficiency (Emanuel (2005), Becker and Elias (2007), Satel and Cronin (2015)). Indeed, the economic consequences are substantial. For instance, Held, McCormick, Ojo and Roberts (2015) estimate that the ban on incentives for kidney donation is responsible for the premature death of 5,000-10,000 Americans on the waiting list each year, and place the net welfare gains from introducing a regulated market for kidneys at $46 billion per year. By contrast, proponents and makers of such laws (including prominent ethicists) worry that undue inducements may be coercive. They argue that incentives can harm people by distorting decision making (National Bioethics Advisory Commission (2001), Kanbur (2004), Satz (2010), Grant (2011), Sandel (2012)). Opponents have sometimes been puzzled by these arguments, and dismissive of these concerns (see the quotes above). In addition to a lack of a common language, there is a dearth of empirical evidence on the proponents’ claim.

In this paper I study the proponents’ concern that incentives interfere with sound decision making. I develop a formal model based on costly information acquisition, and provide empirical evidence that incentives can make individuals systematically more optimistic about the consequences of a transaction. I do not (and cannot) take a stance on whether incentives for particular transactions should be limited. Instead, I aim to bridge a gap between disciplines by using standard economic methodology to address influential existing concerns about the effects of incentives that are widely held by non-economists.1

1I focus on a single mechanism. I do not attempt to exhaustively address all arguments that have been made against certain markets. I also do not attempt to explain why limits on incentives are present in some markets but not in


I start by recognizing that many of the transactions for which incentives are limited have uncertain consequences. For instance, it is hard to know what life with a single kidney is like. Previous work has (implicitly) assumed that subjects’ beliefs about these consequences are independent of incentives (Becker and Elias, 2007). In this case, individuals participate in a transaction if the incentive amount is sufficient compensation for the expected disutility of participation, and do not participate otherwise. My work, by contrast, considers the case in which acquiring information about the consequences of a transaction costs time and effort. In this case, incentives have a second effect. They change individuals’ expectations of the disutility from participation. Specifically, higher incentives decrease the costs of ex post mistaken participation (a false positive error), as they partially insure against adverse outcomes. Simultaneously, they increase the opportunity costs of abstention, and thus the cost of false negative errors. Hence, they affect optimal information acquisition and attention allocation, and thus expectations. For instance, a person contemplating kidney donation in exchange for an increased payment may inform herself about the transaction by talking to more previous donors that she expects are happy with the outcome, and to fewer previous donors that she expects are unhappy.
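The mechanism just described can be illustrated with a small numerical sketch. Anticipating the model of Section 3, the snippet below (all parameter values hypothetical, using a Shannon mutual information cost as one admissible cost-of-information function) grid-searches the state-contingent participation probabilities (pG, pB) and shows that a higher incentive m raises the tolerated false positive rate pB.

```python
import math

def mutual_info_cost(pG, pB, mu):
    """Shannon mutual information (nats) between the state and the accept/reject decision."""
    p = mu * pG + (1 - mu) * pB  # ex ante participation probability
    cost = 0.0
    for q, w in ((pG, mu), (pB, 1 - mu)):        # condition on state G, then state B
        for a, b in ((q, p), (1 - q, 1 - p)):    # accept branch, reject branch
            if a > 0 and b > 0:
                cost += w * a * math.log(a / b)
    return cost

def optimal_choice_probs(m, mu=0.5, piG=1.0, piB=-3.0, lam=0.5, grid=101):
    """Grid-search the (pG, pB) maximizing expected utility net of information costs."""
    best_u, best_pair = -float("inf"), (0.0, 0.0)
    for i in range(grid):
        pG = i / (grid - 1)
        for j in range(i + 1):  # restrict attention to pB <= pG
            pB = j / (grid - 1)
            u = (mu * pG * (piG + m)              # expected gain in the good state
                 + (1 - mu) * pB * (piB + m)      # expected loss in the bad state
                 - lam * mutual_info_cost(pG, pB, mu))
            if u > best_u:
                best_u, best_pair = u, (pG, pB)
    return best_pair

pG_low, pB_low = optimal_choice_probs(m=0.5)
pG_high, pB_high = optimal_choice_probs(m=2.0)
# A higher incentive weakly raises both participation probabilities,
# and in particular the false positive rate pB:
assert pB_high >= pB_low
assert pG_high >= pG_low
```

The grid search stands in for the agent's first-order conditions; any convex cost increasing in pG and decreasing in pB yields the same comparative static.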

Does this mean that incentives can harm people? Clearly, regarding the ex ante welfare of Bayes-rational decision makers, the answer is no.2 Even in this case, however, incentives alter the distribution of ex post welfare. This is relevant for two reasons. First, both citizens and commentators often care about ex post outcomes regardless of the ex ante choices that lead to them (Satz (2010), Kanbur (2004), Andreoni, Aydin, Barton, Bernheim and Naecker (2015)).3 This is evidenced in policies as diverse as veteran service, personal bankruptcy laws, and emergency medical services, which mitigate adverse outcomes even if they result from a voluntary decision. Second, a policy that causes widespread ex post regret may be politically unsustainable no matter whether or not it is ex ante beneficial.4 Regarding anonymous kidney donation, for instance, Massey et al. (2010) find that 96% of donors report they would make the same decision again. According to the mechanism I document, introducing incentives may decrease this percentage (under a condition on the cost-of-information function).

If decision makers are not Bayesian, incentives may have a third effect. They may make individuals systematically more optimistic, and thus possibly cause them to participate when a decision maker with unbiased beliefs would have refused the transaction. Hence, increased incentives can make such agents ex ante worse off.

In order to empirically study these mechanisms, I conduct two experiments. In the first experiment, subjects trade off a highly visceral aversive experience against money. It allows me to study whether higher incentives can indeed make people more optimistic about how unpleasant they will find a subjective experience in an arguably naturalistic setting. The second experiment studies behavior in a more stylized setting. It allows me to control subjects’ objective function and prior beliefs and hence allows for an explicit test of Bayesian rationality.

1(cont.) others, even if they share some characteristics. My evidence derives from laboratory experiments in two highly specific decision problems. By theoretically modeling the mechanism I document, I argue that it is present more generally. The experiments are not informative, however, about its magnitude in different settings.

2I use the term Bayes-rational to refer to a subjective expected utility maximizer who updates beliefs according to Bayes’ law.

3Kanbur (2004) is explicit: “Extreme negative situations for individuals that leave them destitute attract our sympathy, no matter that the actions which led to them were freely undertaken.”

4See the movie Eggsploitation (2011) for a recent push along these lines.

Broadly, the experiments show that higher incentives make subjects systematically more optimistic about the consequences of the transaction than lower incentives, in a way that is inconsistent with Bayesian rationality. They do so through affecting subjects’ information demand and attention allocation in the direction predicted by the model.

In the first experiment I use cash to induce subjects to eat whole insects, including silkworm pupae, mealworms, and various species of crickets. This transaction is both aversive and perfectly safe, and thus feasible in the laboratory. It is also unfamiliar, so that expectations are potentially malleable. The experiment follows a two-by-two design. One dimension varies the incentive amount offered, the other varies access to information. Subjects in the treatment group choose to watch one of two videos about insect-eating, one highlighting the upsides, the other emphasizing the downsides. Subjects in the control group cannot access any such information. I observe the fraction of subjects who are willing to eat the insect for the incentive offered. The experiment yields three main results.

First, incentives increase participation in either group, but significantly more so in the treatment group. Hence, as the model predicts, information gathering amplifies the effect of incentives on uptake. A corroborating finding is that subjects offered the high incentive significantly more often chose the video emphasizing the upsides of eating insects than those offered the low incentive.

Second, when offered the high incentive, subjects in the video treatment persuade themselves that eating insects is less aversive than they would otherwise have thought. To show this, I elicit the least amount of money for which a subject is willing to eat insects (her willingness-to-accept, WTA). Incentives significantly decrease WTA in the treatment condition compared to subjects in the control condition, and thus make them more willing to eat insects at any price when the video option is available.

Third, subjects do not predict the self-persuasion effect in others, in spite of material incentives for accurate predictions. Hence, they seem unaware of it. This finding calls into question whether the effects of incentives result from rational information processing.5

The second experiment allows me to explicitly test for Bayesian rationality. It also replicates the main qualitative findings of the first experiment in a different environment with a different subject pool, and thus demonstrates their robustness. In this experiment, I incentivize subjects to risk the loss of a fixed amount of money. Subjects know the ex ante probability of a loss, and thus have correct prior beliefs. Before deciding whether or not to take that gamble, they scrutinize a picture. That picture perfectly reveals whether taking the gamble and receiving the incentive amount will lead to a net gain or a net loss, but only at considerable attentional costs. I measure (a monotonic function of) subjects’ posterior beliefs about the winning probability.

5Lack of awareness does not disprove the rationality hypothesis, however, as Friedman and Savage’s (1948) famous example illustrates: An expert billiard player may be unaware of the laws of Newtonian mechanics, but he still strikes balls as if he were.

I find that subjects’ behavior is inconsistent with Bayesian information processing. An increase in incentives leads to a first-order stochastic dominant shift in subjects’ posterior belief, making them more optimistic. This violates the law of iterated expectations, the hallmark of Bayesian rationality.
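The benchmark invoked here is easy to state concretely: under Bayes’ rule, posterior beliefs must average back to the prior across signal realizations, so no information structure can make a Bayesian more optimistic on average. A minimal check with hypothetical numbers:

```python
# For any information structure, the accept/reject posteriors about s = G,
# weighted by the signal probabilities, must average back to the prior mu.
mu = 0.3           # prior probability of the good state (hypothetical)
pG, pB = 0.8, 0.2  # state-contingent probabilities of an "accept" signal

p_accept = mu * pG + (1 - mu) * pB            # ex ante accept probability
post_accept = mu * pG / p_accept              # P(G | accept)
post_reject = mu * (1 - pG) / (1 - p_accept)  # P(G | reject)

mean_posterior = p_accept * post_accept + (1 - p_accept) * post_reject
assert abs(mean_posterior - mu) < 1e-12  # law of iterated expectations
```

Whatever values of pG and pB the incentive induces, the identity holds; a mean shift in posteriors therefore cannot come from Bayesian updating.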

The experiment also allows me to study the effects of incentives separately on false positive and false negative errors. This is relevant since proponents of limits on incentives are primarily concerned with mistaken participation (false positives).

Specifically, I vary whether subjects can skew their attention allocation in response to incentives, by telling subjects the incentive they will be given for taking the gamble either before or only after they examine the picture. I find that subjects in the high incentive treatment who can skew their attention allocation in response to incentives have a substantially higher false positive rate than those who cannot, but not a significantly different false negative rate. Hence, knowledge of the incentive amount must have caused them to search for information that seemingly justified taking the bet, even when they ultimately lost from taking the gamble.

Overall, my work shows that some of the central concerns voiced by proponents of the current legislation can be understood and measured using the toolkit of economics, and it presents directional evidence for the behavioral hypotheses on which these concerns are based. Incentives cause subjects to seek out different information about the consequences of a transaction, and to perceive the same information differently. They can make subjects systematically more optimistic in a way that is not consistent with Bayesian rationality. To reiterate, the paper does not take a stance on normative questions, but instead aims to foster a common understanding between disciplines to advance debates about the appropriateness of material incentives.

My research is related to a small literature on repugnant transactions which examines when unaffected third parties approve of selected transactions (Kahneman, Knetsch and Thaler (1986), Roth (2007), Leider and Roth (2010), Niederle and Roth (2014), Ambuehl, Niederle and Roth (2015), Elias, Lacetera and Macis (2015a), Elias et al. (2015b)). It differs from that literature as it studies the effect of incentives on those whom they target. This paper focuses on the internalities caused by incentives, and thus also differs from a burgeoning literature on morals and markets. That literature studies how market institutions cause externalities, and how they affect participants’ willingness to impose them (Basu (2003), Basu (2007), Falk and Szech (2013), Malmendier and Schmidt (2014), Bartling, Weber and Yao (2015)). My research examines how incentives shape individuals’ acquisition and interpretation of additional information about the transaction rather than the inferences individuals draw from the incentive amount per se. It thus complements the study by Cryder, London, Volpp and Loewenstein (2010) which shows that prospective medical trial participants interpret the incentive offered as compensation for greater risk, even though institutional guidelines explicitly discourage such compensation.6 It similarly complements the literature on anchoring which shows that even transparently arbitrary prices can per se influence individuals’ valuations (Ariely, Loewenstein and Prelec (2003), Alevy, Landry and List (2010), Beggs and Graddy (2009), Bergman, Ellingsen, Johannesson and Svensson (2010), Maniadis, Tufano and List (2014), Simonson and Drolet (2004), Sunstein, Kahneman, Schkade and Ritov (2002), Fudenberg, Levine and Maniadis (2012)). This study also relates to work by Babcock and Loewenstein (1997) and Gneezy, Saccardo, Serra-Garcia and van Veldhuizen (2015) who find that strategic reasons can bias subjects’ interpretation of information. In the current paper, by contrast, subjects’ choices affect only their own well-being.

More generally, my research relates to three broad literatures. Both its empirical and theoretical results derive from mechanisms of information demand and attention allocation that are often studied under the label of rational inattention (Sims (2003), Sims (2006), Woodford (2012), Martin (2012), Caplin and Dean (2013a), Caplin and Martin (2014), Woodford (2014), Yang (2014), Matejka and McKay (2015), Caplin and Dean (2015); see Caplin (forthcoming) for a review). As it studies how incentives affect expectations, it relates to a vast literature in psychology and economics on motivated reasoning (Lord, Ross and Lepper (1979), Kunda (1990), Rabin and Schrag (1999), Caplin and Leahy (2001), Benabou and Tirole (2002), Brunnermeier and Parker (2005), Massey and Wu (2005), Koszegi (2006), Holt and Smith (2009), Eil and Rao (2011), Moebius, Niederle, Niehaus and Rosenblat (2013), Gottlieb (2014), DiTella, Perez-Truglia, Babino and Sigman (2015), Huck, Szech and Wenner (2015)). Finally, by studying how incentives affect the quality of decision making, it contributes to a burgeoning literature on behavioral welfare economics (Koszegi and Rabin (2008), Bernheim and Rangel (2009), Ambuehl, Bernheim and Lusardi (2014), Bernheim, Fradkin and Popov (2015); see Bernheim (2009) for a review).

The remainder of this paper proceeds as follows. Section 2 provides a brief overview of the legislation that caps incentives. Section 3 presents the conceptual framework. The two experiments are in Sections 4 and 5, respectively. Section 6 discusses policy implications and applications of my findings to other domains, and Section 7 concludes.

2 Capped Incentives

Here, I briefly review legislation and guidelines that limit or prohibit incentive payments. For all of the laws reviewed, protecting the person whom incentives would target is an important motivation. Limits on incentives are not intended to discourage these activities per se. On the contrary, altruistic participation is often applauded, especially with living kidney donation and participation in medical trials.

• Human research participants. Preventing coercion is a central objective of the pertinent legislation (Nuremberg Code, 1949). Incentives are explicitly considered a form of coercion since the Belmont Report (1978), the basis of many current laws on medical research ethics. The report states that “undue influence ... occurs through an offer of an excessive ... reward or other overture in order to obtain compliance.” The concern is that an offer may be “so excessively desirable that it compromises judgment” (Emanuel, 2004). By contrast, the literature applauds an altruistic desire to contribute to the progress of research (Macklin, 1981).

6That paper thus shows that low incentives can make subjects overly optimistic.

• Human egg donation. The majority of countries surveyed by the Council of Europe in 1998 prevented egg donations for commercial gain (Council of Europe, 1998).7 Donors undergo substantial hormonal treatment, and protecting them is a critical aim. The U.S. permits commercial human egg donations, but the Ethics Committee of the American Society for Reproductive Medicine (2007) recommends that “payments to women providing oocytes should be fair and not so substantial that they ... lead donors to discount risks”. The committee concludes that “sums of $5,000 or more require justification and sums above $10,000 are not appropriate”.

• Surrogate motherhood. Many U.S. states strictly limit material benefits for surrogate mothers. Nevada, New Hampshire and Washington, for instance, prohibit payments to surrogate mothers except for reimbursement of certain expenses that are explicitly listed in the states’ statutes.8 Protection of the surrogate mother is an important motive. The prospective surrogate mother’s ability to predict her preferences plays a central role. This is particularly salient for the legislation in Russia, which permits commercial surrogacy, but only for women who already have a child of their own. Arguably, they are better able to assess the costs and benefits of the transaction than other women (Svitnev, 2010).

• Living kidney donation. Paid living kidney donation is outlawed in every country of the world, except for the Islamic Republic of Iran (Rosenberg, 2015b). A frequent argument holds that incentives would coerce some individuals to sell their kidneys (Choi, Gulati and Posner, 2014) and that they would distort prospective participants’ assessment of the costs and benefits of the transaction, possibly to their detriment (Satz (2010), Grant (2011), Kanbur (2004)).

Concerns about incentives are particularly prevalent for transactions involving bodily products and parts. But they are neither limited to that domain, nor do they fully encompass it. On the one hand, a variety of laws limit incentives in domains that do not concern the body. For instance, the U.S. outlaws selling oneself into voluntary slavery (42 U.S. Code §1994), and “excessive payments” are also prohibited for participants in non-medical experiments. Inducements are outlawed in student athlete recruiting on the grounds that they would constitute undue influence (National Collegiate Athletic Association (2015), Alabama HSAA (2015), Indiana HSAA (2015), Kentucky HSAA (2014), Michigan HSAA (2015)). And the World Council of Churches compels its members not to use material incentives to induce individuals to change their confession, arguing that incentives would be coercive and impair religious freedom (Clark, 1996). On the other hand, there are transactions with bodily products for which no comparable laws apply. Many U.S. states, for instance, explicitly exclude the trade with human hair from statutory regulations on trade with bodily products and parts.9 Relatedly, no concerns about paying donors of human feces have surfaced, although donors can earn up to $13,000 a year (Feltman, 2015).10

7Ten of these had regulations allowing some reimbursement of expenses. Others outlawed human ovum donations entirely (Bundesrepublik Deutschland, 1990).

8Legislation varies widely within the U.S. On one extreme, California and Illinois support commercial surrogacy. On the other extreme, Michigan declares any participation in a surrogacy agreement a gross misdemeanor punishable with jail.

3 Conceptual framework

In this section, I present a purposefully simple model to understand the effects of incentives when the consequences of participation in a transaction are uncertain, and when information acquisition is costly.

The aim of this section is twofold. First, I show how incentives affect information acquisition and how they change the nature of erroneous decisions (relative to perfect foresight) for rational decision makers. The model yields comparative statics which, at first sight, might appear like an indication of non-rational behavior. For instance, the model predicts that higher incentives may increase rational subjects’ demand for information they expect to encourage participation.11

Second, I explicitly highlight which behaviors are consistent with Bayesian rationality and which are not; and I derive a criterion to empirically distinguish between them. The distinction is important. An increase in incentives affects the ex post distribution of welfare in either case, but it can only possibly ex ante hurt decision makers who deviate from Bayesian rationality.

I first introduce the setup and then discuss its interpretation.

3.1 Setup

An agent decides whether or not to participate in a transaction in exchange for a material incentive m. The agent is uncertain about the (utility) consequences of participation, which depend on an unknown state of the world s ∈ {G, B}. The state is good (s = G) with prior probability µ. If so, the agent obtains utility πG from participation such that net utility is positive, πG + m > 0. Otherwise, the state is bad (s = B). In that case, participation leads to utility πB such that net utility is negative, πB + m < 0.12 If the agent does not participate, he receives utility 0. The agent observes the state only after he has decided whether or not to participate.

9For instance Connecticut, the District of Columbia, Illinois, Michigan, and Texas. Trade in human hair is a multi-million dollar industry (Khaleeli, 2012).

10There are other transactions that have raised concerns about the effects of incentives on the provider, but they differ to the extent that voluntary participation would not necessarily be applauded. An important example is prostitution. While some commentators are concerned with the well-being of sex workers, the idea of a woman sleeping with a large number of anonymous men has historically met disapproval even if no material incentives are involved. Less common transactions include forehead advertising, in which an individual is paid a significant amount of money to have an advertisement tattooed on their forehead or other body part, and virginity auctions (Sandel, 2012). Incentives for these latter transactions are not regulated, possibly because of their infrequent occurrence.

11There are other domains in which behavior at first sight appears non-rational, even though it is consistent with Bayesian updating. An example in which 60% of all drivers rationally consider themselves as above-average drivers is in Benoît and Dubra (2011).
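To fix ideas, a minimal numerical sketch of this setup (all parameter values hypothetical): an agent who acquires no information participates exactly when the incentive m covers the expected disutility at the prior µ.

```python
mu, piG, piB = 0.5, 1.0, -3.0  # prior and state-contingent utilities (hypothetical)

def participates_without_info(m):
    """Expected net utility of participating at the prior belief exceeds 0."""
    return mu * (piG + m) + (1 - mu) * (piB + m) > 0

assert not participates_without_info(m=0.5)  # 0.5*1.5 + 0.5*(-2.5) = -0.5
assert participates_without_info(m=1.5)      # 0.5*2.5 + 0.5*(-1.5) = 0.5
```

This is the benchmark in which beliefs are independent of incentives; the rest of the section adds costly information on top of it.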

Before the agent decides whether or not to participate, he can acquire noisy information about the state at a cost. Formally, the agent selects an information structure I from a menu I. An information structure I is a pair of probability distributions (ν^I_G, ν^I_B) over some set S^I of signal realizations. If the agent chooses information structure I and the state is good, then the signal is distributed according to ν^I_G; if the state is bad, it is distributed according to ν^I_B. From observing a signal realization, the agent can infer a posterior probability about the state by applying Bayes’ rule. He then decides whether or not to participate in the transaction. Because the agent can only decide between accepting and rejecting the offer, I assume that all information structures the agent has access to have two possible signal realizations, S^I = {accept, reject} for all I ∈ I. For an optimizing agent, this assumption is without loss, as long as strictly more informative information structures are strictly more costly (Matejka and McKay, 2015).13

Hence, the agent can be modeled as choosing a probability pG of agreeing to participate in the transaction if the state of the world is good, and a probability pB of participating if it is bad, with pG ≥ pB.14 pB is the probability of false positives (the agent participates even though under perfect foresight he would have abstained), and (1 − pG) is the probability of false negatives (the agent abstains, even though under perfect foresight he would have participated). To illustrate, consider an agent who chooses choice probabilities (pG, pB) = (0.9, 0.1). This agent has better information about the state of the world than an agent who chooses (pG, pB) = (0.8, 0.2); both his false positive and false negative probabilities are lower. An agent who chooses (pG, pB) = (1, 0) is perfectly informed about the state.
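The error-rate reading of (pG, pB) can be written out directly (values hypothetical):

```python
def error_rates(pG, pB):
    """False-positive and false-negative probabilities for choice probabilities (pG, pB)."""
    return pB, 1 - pG  # participate in the bad state; abstain in the good state

fp1, fn1 = error_rates(0.9, 0.1)
fp2, fn2 = error_rates(0.8, 0.2)
# (0.9, 0.1) has strictly lower error rates of both kinds than (0.8, 0.2):
assert fp1 < fp2 and fn1 < fn2
# (1, 0) corresponds to perfect information about the state:
assert error_rates(1.0, 0.0) == (0.0, 0.0)
```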

Information acquisition is costly. Specifically, the cost of the information associated with a pair of state-contingent choice probabilities (pG, pB) is given by the convex, real-valued, differentiable function λ · c(pG, pB), where λ > 0 is a constant. It is increasing in the first argument, and decreasing in the second. These conditions encompass Shannon mutual information costs, a standard assumption in the literature on rational inattention (Sims (2003), Sims (2006), Caplin and Dean (2013b), Matejka and McKay (2015)).15
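As a concrete instance, the snippet below sketches the Shannon mutual information cost for this binary-state, binary-action setting (the λ scaling is applied outside the function; all numbers hypothetical) and checks the stated monotonicity properties.

```python
import math

def shannon_cost(pG, pB, mu=0.5):
    """Mutual information (nats) between the state s in {G, B} and the accept/reject action."""
    p = mu * pG + (1 - mu) * pB  # unconditional accept probability
    cost = 0.0
    for q, w in ((pG, mu), (pB, 1 - mu)):        # condition on each state
        for a, b in ((q, p), (1 - q, 1 - p)):    # accept and reject branches
            if a > 0 and b > 0:
                cost += w * a * math.log(a / b)
    return cost

# Uninformative choices cost nothing; perfect information costs the prior entropy:
assert abs(shannon_cost(0.5, 0.5)) < 1e-12
assert abs(shannon_cost(1.0, 0.0) - math.log(2)) < 1e-12
# For pG > pB, the cost rises in pG and falls in pB, as the text requires:
assert shannon_cost(0.9, 0.1) > shannon_cost(0.8, 0.1) > shannon_cost(0.8, 0.2)
```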

12Throughout I assume quasilinear preferences. Appendix E considers the more general case in which the state s canhave any distribution on the real line, and ex-post utility is given by u(s) +m.

13Here, “more informative” refers to the Blackwell (1953) ordering. To see the argument, suppose that the agentchooses action a at two different signal realizations σa1 and σa2 . We can then garble this information structure in a waythat prevents the agent from distinguishing these two signal realizations. This makes the information structure strictlyless informative, and thus strictly less costly. Hence, the initial information structure cannot have been cost-minimizing.

14 A pair (pG, pB) with pG < pB cannot be optimal. The agent first observes a signal, and then chooses which action to take. If an information structure allows the agent to implement (pG, pB) = (x, y) with x < y, he can costlessly improve his expected utility by instead implementing (pG, pB) = (y, x). To do so, he simply interprets each signal realization in the opposite way. Formally, this is the no improving action switches condition of Caplin and Dean (2015).

15 In this setting, the Shannon mutual information cost function takes the following form. For state-dependent participation probabilities pG, pB, let p = µpG + (1 − µ)pB denote the ex ante participation probability, and let γG = µpG/p and γB = µ(1 − pG)/(1 − p) denote the agent’s posterior belief about the event s = G if he has observed a signal that makes him


3.2 Discussion

Interpretation of “information”. An individual deciding whether to participate in a transaction

may be unsure about the utility from participation. Thus, there are two aspects of information

acquisition. First, an individual can acquire objective information, for instance on the medical risks

associated with participation. Second, he can also acquire information on how participation affects

his subjective well-being, both directly, and via the implications of medical consequences. He may

consider, for instance, how he will feel after having saved someone’s life by donating a kidney, or how

a particular medical consequence will restrain the activities he may want to pursue.

How to choose kind and amount of information in practice. There are many ways to select

between different kinds of information in practice. An individual contemplating whether to donate

a kidney, for instance, can select between different amounts of information by deciding how many

previous donors to consult, and she may select between different kinds of information by focusing

her efforts on previous donors whom she believes are happy with their decision, or on those who she

thinks are not.

More formally, we can imagine a decision maker who gathers information sequentially, and has

the following decision rule. He decides to participate as soon as his posterior that the state is good

is sufficiently high, and he decides to abstain as soon as the posterior is sufficiently low. Otherwise,

he continues searching for information. By choosing these bounds, the agent implicitly chooses the

state-contingent participation probabilities pG and pB . (This corresponds to a Wald (1947) sequential

probability ratio test, or to its continuous cousin, the drift-diffusion model (Ratcliff (1978), see Fehr

and Rangel (2011) and Bogacz, Brown, Moehlis, Holmes and Cohen (2006) for reviews).)

3.3 Analysis

3.3.1 Bayesian Decision Makers

If the decision maker selects state-dependent choice probabilities (pG, pB), he obtains the upside payoff

(πG+m) > 0 with probability µ·pG, and the downside payoff (πB+m) < 0 with probability (1−µ)·pB .

With the remaining probability he does not participate in the transaction and obtains 0. Hence, his ex

ante expected utility, excluding costs of information, is U(pG, pB ;m) = µpG(πG+m)+(1−µ)pB(πB+

m). The decision maker thus chooses the pair of probabilities (pG, pB) to solve the following problem.

max_{pG, pB}  U(pG, pB; m) − λc(pG, pB)        (1)

participate and abstain, respectively. Let h denote the binary entropy function, h(x) = −x log(x) − (1 − x) log(1 − x). Then, Shannon mutual information costs are given by c(pG, pB) = h(µ) − E[h(γs)].
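For concreteness, this cost function can be evaluated numerically. The following sketch is my own illustration, not part of the paper’s formal apparatus; the prior µ = 1/2 and the example structures are arbitrary, and the entropy function uses the standard nonnegative convention with natural logarithms.

```python
import math

def binary_entropy(x):
    """h(x) = -x*log(x) - (1-x)*log(1-x), with the convention h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def shannon_cost(pG, pB, mu):
    """Mutual information between the state and the binary accept/reject signal:
    c(pG, pB) = h(mu) - E[h(gamma_s)]."""
    p_bar = mu * pG + (1 - mu) * pB          # ex ante participation probability
    expected_posterior_entropy = 0.0
    if p_bar > 0:                            # posterior after the accept-signal
        expected_posterior_entropy += p_bar * binary_entropy(mu * pG / p_bar)
    if p_bar < 1:                            # posterior after the reject-signal
        expected_posterior_entropy += (1 - p_bar) * binary_entropy(mu * (1 - pG) / (1 - p_bar))
    return binary_entropy(mu) - expected_posterior_entropy

# A perfectly informative structure costs h(mu); an uninformative one costs 0,
# and (0.9, 0.1) is costlier than the less informative (0.8, 0.2).
costs = [shannon_cost(*pair, mu=0.5) for pair in [(1, 0), (0.9, 0.1), (0.8, 0.2), (0.5, 0.5)]]
```

The ordering of the computed costs mirrors the Blackwell ranking described in the text: the more informative the pair (pG, pB), the higher its mutual-information cost.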


How does the solution to this problem depend on the monetary incentive m? The answer is

most easily seen graphically. Figure 1 depicts (a part of) the agent’s choice set.16 The vertical axis

depicts pG, the probability of accepting if the state is good; the horizontal axis depicts (1 − pB),

the probability of rejecting if the state is bad. Hence, the further to the north the bundle the agent

chooses, the smaller is the probability of false negatives. The further to the east the bundle he chooses,

the smaller is the probability of false positives. I separately plot the level curves of U and those of

the cost of information function c on this space. The level curves of U are straight (and parallel)

lines, since U is a linear combination of pG, (1− pB) and a constant.17 U increases towards the upper

right of the graph. The level curves of c are concave, since c is convex. For an initial level of material

compensation m, the agent’s optimal choice may be a bundle such as point A in this figure.

The total effect of an increase in m derives from a substitution effect and a stakes effect. We

obtain the former by temporarily interpreting problem (1) as the Lagrangian to the maximization

of U subject to a constraint on the costs of information acquisition, c(pG, pB) = c for some fixed

c. An increase in m raises the weight on pG in the utility function U and lowers that of

(1− pB). Intuitively, the increase in the weight on pG reflects the increased opportunity cost of non-

participation, whereas the decrease in the weight on (1− pB) captures the idea that higher incentives

partially insure against adverse outcomes. Hence, the indifference curves tilt to the left,18 and the

constrained optimum shifts to the northwest; for instance to a bundle such as point B in figure 1. The

agent now takes greater care to avoid false negatives and is more tolerant of false positives. He acquires

a different kind of information.

An increase in m not only changes the relative cost of false negatives and false positives, it also

changes the total stakes of this decision. Hence, the agent may choose to spend a different amount

of resources on information acquisition. If the increase in m increases the stakes of the decision, the

agent will acquire a larger amount of information, and his optimal bundle will move to the northeast,

for instance to a bundle such as point C in figure 1.19

Distribution of ex post outcomes. There are ethicists and policy makers who worry that in-

centives may cause individuals to participate who ex post regret this decision (false positives); they

are typically less concerned about individuals who do not participate (false negatives). Whether an

increase in the incentive increases the false positive rate depends on whether the substitution effect

exceeds the stakes effect.

Formally, an increase in the incentive payment m increases the false positive rate pB if and only if

c has decreasing differences (a sufficient condition is that the cross-derivative of c is negative globally),

as follows directly from Topkis’ theorem (Milgrom and Shannon, 1994). This condition means that

16 The agent’s choice set is {(pG, pB) : 1 ≥ pG ≥ pB ≥ 0}. For ease of exposition only a subset is depicted.

17 U can be written as U = µpG(πG + m) − (1 − µ)(1 − pB)(πB + m) + (1 − µ)(πB + m).

18 The slope of an indifference curve is given by dpG/d(1 − pB) = [(1 − µ)/µ] · [(πB + m)/(πG + m)] < 0. Because πG > πB, this increases in m.

19 Whether the agent demands more or less information with an increase in incentives depends on parameters.


[Figure 1 about here]

Figure 1: Effects of an increase in the incentive amount m. The horizontal axis plots 1 − pB, the probability that the agent rejects if the state is bad, or one minus the false positive probability. The vertical axis plots pG, the probability that the agent accepts if the state is good, or one minus the false negative probability. The choice set is {(pG, pB) ∈ [0, 1]² : pG ≥ pB}. For better visibility only a part of this choice set is plotted here. Straight lines represent indifference curves of a Bayesian decision maker for different incentive amounts. Curved lines are iso-cost functions. The black arrows indicate the substitution effect. The red (dashed) arrows indicate the stakes effect.


a marginal decrease in the false positive rate is costlier the lower the false negative rate, and vice

versa.20

Both policy makers concerned with political sustainability and ethicists may be interested in the

effects of incentives if a stronger condition holds; namely that amongst those who decided to participate,

higher incentives increase the fraction of subjects who ex post regret their decision. An equivalent

statement is that higher incentives decrease the posterior belief that the state is good, conditional on

participating. It necessitates that the substitution effect outweighs the stakes effect by a sufficient

amount. Graphically, the isoquants of the cost of information function cannot be too curved. A

sufficient condition is that the costs of information are proportional to Shannon mutual information,

the workhorse model of rational inattention theory.21

The following proposition summarizes this discussion. The proof is in appendix F.

Proposition 1. An increase in the material incentive m

(i) increases the false positive rate P(participate | s = B) = pB if ∂²c/∂pG∂pB (pG, pB) < 0 for all pG, pB.

(ii) decreases the posterior probability that the state is good conditional on participating, P(s = G | participate) = µpG/(µpG + (1 − µ)pB), if c is given by Shannon mutual information.
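The comparative statics in Proposition 1 can be checked numerically. The sketch below is my own illustration; the parameter values µ = 1/2, πG = 10, πB = −10, λ = 5, the incentive levels, and the grid resolution are all arbitrary choices. It solves problem (1) by grid search under the Shannon cost and compares a low and a high incentive.

```python
import math

def binary_entropy(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def shannon_cost(pG, pB, mu):
    # c(pG, pB) = h(mu) - E[h(posterior)], the mutual information cost.
    p_bar = mu * pG + (1 - mu) * pB
    e = 0.0
    if p_bar > 0:
        e += p_bar * binary_entropy(mu * pG / p_bar)
    if p_bar < 1:
        e += (1 - p_bar) * binary_entropy(mu * (1 - pG) / (1 - p_bar))
    return binary_entropy(mu) - e

def solve(m, mu=0.5, pi_G=10.0, pi_B=-10.0, lam=5.0, step=0.005):
    """Grid-search problem (1): max over pG >= pB of U(pG, pB; m) - lam*c(pG, pB)."""
    best, best_val = (0.0, 0.0), -float("inf")
    n = int(round(1 / step))
    for i in range(n + 1):
        pG = i * step
        for j in range(i + 1):               # the choice set requires pB <= pG
            pB = j * step
            U = mu * pG * (pi_G + m) + (1 - mu) * pB * (pi_B + m)
            val = U - lam * shannon_cost(pG, pB, mu)
            if val > best_val:
                best_val, best = val, (pG, pB)
    return best

pG_lo, pB_lo = solve(m=0.0)
pG_hi, pB_hi = solve(m=4.0)
# Part (i): the false positive rate pB rises with m.
# Part (ii): P(s = G | participate) falls with m.
posterior_lo = 0.5 * pG_lo / (0.5 * pG_lo + 0.5 * pB_lo)
posterior_hi = 0.5 * pG_hi / (0.5 * pG_hi + 0.5 * pB_hi)
```

For these parameter values, the higher incentive raises the optimal false positive rate and lowers the posterior conditional on participating, in line with both parts of the proposition.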

Experimental predictions. The experiment in section 5 directly reveals how an increase in incentives affects the false positive and false negative rates. This model also has testable implications when false positives and false negatives cannot be observed directly. The experiment in section 4 tests these.

First, it predicts that an increase in incentives has a stronger effect if subjects can adjust their

information demand and attention allocation in response to incentives than if they cannot. To see

this, note that in the model as outlined here, a change in incentives increases participation only if

subjects can adjust their information demand (unless the change in incentives is so large that it affects

whether or not the subject’s participation decision depends on the signal realization). In an empirical

setting, incentives may change participation for other reasons (such as heterogeneous preferences), but

the effect will be larger if subjects can also adjust the false positive and false negative probabilities.

Prediction 1. An increase in incentives has a larger effect on the rate of participation if subjects can

adjust their information demand in response to incentives than if they cannot.

Second, the model predicts how information demand depends on incentives. Someone who faces a

higher incentive for participation will demand information which she ex ante believes is more likely to

make her participate. The reason is that such a person optimally chooses a higher false positive prob-

ability, and a lower false negative probability, and thus a higher ex ante probability of participating.

20 Appendix E shows that the cross-derivative of c is negative globally if c is posterior-separable in the sense of Caplin and Dean (2013b).

21 Caplin and Dean (2013b) provide a behavioral characterization of mutual information costs. First, optimal posteriors are independent of priors. (This can be viewed as a dynamic consistency condition; the posterior beliefs a sequential information gatherer optimally attains before making a decision should be independent of his current posterior.) Second, the relation of different realizations of optimal posterior beliefs is independent of certain transformations of utilities.


If subjects are presented with an explicit selection of pieces of information, their choice should reflect

this.

Prediction 2. With higher incentives subjects demand more information they expect to encourage

participation, and less they expect to discourage it.

3.3.2 Non-Bayesian Decision Makers

The model so far has focused on Bayes-rational agents. Clearly, they cannot be made ex ante worse off

by an increase in material incentives. Non-Bayesian decision makers, by contrast, can ex ante suffer

from an increase in incentives, even if they voluntarily decide about participation. This happens if

the increase leads individuals to participate who abstained before the increase, and who would still

abstain after the increase if they held unbiased beliefs. The latter means that participation is worse

than non-participation. It requires that the individual is overly optimistic, either because she was so

from the start,22 or because incentives made her so.23

I first develop a criterion to test whether incentives make subjects more optimistic in a non-

Bayesian way. I then consider subjects who exhibit a belief updating bias documented in existing

research in behavioral economics and psychology. I show that higher incentives make such subjects

systematically more optimistic if their demand for information directionally follows the prediction of

the rational model.

Testing Bayesian rationality. A test of whether incentives cause deviations from Bayesian updat-

ing faces two challenges. First, it should be applicable in settings in which the demand for information

and the allocation of attention are endogenous. Second, neither of these may be observed directly.

My solution is to test an implication of the law of iterated expectations, the hallmark of Bayesian

rationality. That law demands that a subject’s expected posterior equals the prior. Hence, no treatment variation may induce a first order stochastic dominance relation between posterior beliefs, even

if it leads the agent to select different information structures. Since first order stochastic dominance

is preserved under monotonic transformations, this further implies that no strictly monotonic func-

tion of posterior beliefs (such as the least amount of money for which one would participate in the

transaction) may vary with incentives in the first order.24

22 van den Steen (2004) argues that when people can choose from a range of possible actions and have heterogeneous priors, they will rationally tend towards selecting those actions for which their prior is larger. Therefore, conditional on having chosen a given action, subjects are overconfident about how good that action is.

23 Increasing incentives for a voluntary decision cannot make overly pessimistic agents worse off. Such agents may merely fail to take advantage of an opportunity that might increase their ex ante expected utility. Moreover, in domains such as kidney donation or surrogate motherhood, the consequences of participation are typically irreversible. By contrast, opportunities to participate usually do not disappear, so that mistaken abstention is reversible.

24 A merely weakly monotonic function of posteriors may vary in the first order. An example is the decision whether or not to participate in the transaction, which is weakly monotonic in posterior beliefs. If the treatment variation consists in giving some decision makers the opportunity to skew their information demand, but not others, then the participation rate may differ across these two groups even if all decision makers are Bayes-rational.


Proposition 2. For Bayesian decision makers, no treatment variation may lead to a first-order

stochastic dominance ordering in any strictly monotonic function of posterior beliefs.
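Proposition 2 rests on the martingale property of Bayesian beliefs. The following minimal sketch, my own illustration with arbitrary priors and information structures, verifies that the expected posterior equals the prior for any binary structure (pG, pB), so no choice of structure can shift posteriors in the first-order sense.

```python
def expected_posterior(pG, pB, mu):
    """Expectation over signal realizations of the Bayesian posterior P(s = G | signal)."""
    p_bar = mu * pG + (1 - mu) * pB          # probability of the accept-signal
    post_accept = mu * pG / p_bar if p_bar > 0 else mu
    post_reject = mu * (1 - pG) / (1 - p_bar) if p_bar < 1 else mu
    return p_bar * post_accept + (1 - p_bar) * post_reject

# Whatever structure the agent selects, the expectation of his posterior is mu:
# p_bar * (mu*pG/p_bar) + (1 - p_bar) * (mu*(1-pG)/(1-p_bar)) = mu*pG + mu*(1-pG) = mu.
priors = (0.2, 0.5, 0.8)
structures = [(1.0, 0.0), (0.9, 0.4), (0.7, 0.7)]
checks = [(mu, expected_posterior(pG, pB, mu)) for mu in priors for (pG, pB) in structures]
```

The algebra in the comment is exactly the law of iterated expectations the test in the text exploits: different information structures spread posteriors out differently, but never move their mean.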

Alternative hypothesis. Some proponents of limits on incentives are concerned that incentives

cause overly optimistic beliefs. By incorporating a bias documented in the existing literature into the

costly information acquisition model, I provide a theoretical foundation for these concerns.25

The model predicts that higher incentives will lead subjects to acquire more highly skewed informa-

tion. Less than fully rational subjects may skew their information demand in the predicted direction

even if they fail to choose the predicted magnitude.26 Correctly interpreting skewed information is

difficult, however, and requires that the subject correctly accounts for magnitudes. Previous literature

shows that many people are unable to do this; they typically interpret skewed information as if it

were less skewed than it is (Slowiaczek, Klayman, Sherman and Skov (1992), Brenner, Koehler and

Tversky (1996), McKenzie (2006)).27

If subjects skew information demand in the predicted direction, but fail to fully account for this

when interpreting the information obtained, higher incentives lead to systematically more optimistic

beliefs. The reason is the following. Under the condition in proposition 1 (i), higher incentives lead

rational subjects to choose an information structure with an increased likelihood of producing the

accept signal in either state of the world. Such an information structure is more highly left-skewed.

Because the agent now expects to observe that signal more often, he should, upon observing it, be less

confident that the state is good. If he interprets information as if it were less skewed than it actually

is, this decrease in confidence will be attenuated compared to the Bayesian. Hence, his confidence

after observing a good signal from a more left-skewed information structure will drop by less than the

increased skew would warrant. Similarly, his confidence after observing a bad signal will increase by

less than the increased left-skew would warrant, so his optimism will be higher than the Bayesian’s

also in that case.

Formally, suppose a decision maker perceives information structure (pG, pB) as (p̂G, p̂B) = (βpG + (1 − β)p, βpB + (1 − β)(1 − p)), with p = (pG + (1 − pB))/2, and β ∈ (0, 1). Let γG = µpG/p̄ and γB = µ(1 − pG)/(1 − p̄), where p̄ = µpG + (1 − µ)pB, be a rational agent’s posterior belief about the event s = G after observing a good and bad signal, respectively. Let γ̂G and γ̂B denote the respective posteriors for the behavioral agent. Fix some

25 Incentives can affect beliefs through other mechanisms. For instance, subjects may choose beliefs by optimally trading off gains from anticipatory utility against losses from suboptimal decision making (Brunnermeier and Parker, 2005).

26 Existing evidence in psychology demonstrates that subjects tend to seek information to confirm rather than disconfirm maintained hypotheses (see Nickerson (1998) for a review). If high (low) incentives cause subjects to start their inquiry from the hypothesis that the utility maximizing action is to participate (abstain) and demand information to confirm this hypothesis, their information demand will be skewed as in the rational model.

27 Within a sequential information acquisition framework, this is also predicted if subjects overinfer from weak signals, and underinfer from strong signals, as has been demonstrated in Ambuehl and Li (2015), Griffin and Tversky (1992), Massey and Wu (2005), Hoffman (2014). The reason is the following. With higher incentives the subject is willing to accept already at a less extreme posterior (that the state is good), but asks for an even more extreme posterior (that the state is bad) before he is willing to reject. Therefore, the subject will see longer sequences of information (stronger signals) before he rejects, and shorter sequences of information (weaker signals) before he accepts when incentives are higher. If he overinfers from weak signals, he will accept even sooner than he would like to; if he underinfers from strong signals, he will reject even later than he would like to.


information structure (p′G, p′B) and let information structure (pG, pB) = (p′G + δ, p′B + δ) for some δ ∈ [−1, 1] such that (pG, pB) ∈ [0, 1]². An increase in δ increases the left-skew of (pG, pB). We then have the following proposition.

Proposition 3. The difference between the non-Bayesian and the Bayesian posterior conditional on accepting the transaction, P̂(s = G | accept) − P(s = G | accept) = γ̂G − γG, is increasing in the left-skew δ of the information structure. The same is true of the difference between the non-Bayesian and the Bayesian posterior conditional on rejecting the transaction, P̂(s = G | reject) − P(s = G | reject) = γ̂B − γB.

In words, the proposition shows that if a behavioral agent chooses an information structure with

a more pronounced left-skew δ, his posterior after observing a good signal will exceed that of the

rational decision maker by a larger amount. Hence, if an increase in the incentive amount m causes

a behavioral decision maker to choose an information structure with a more pronounced left-skew, he

will be systematically more optimistic than he would have been with a lower incentive amount.28
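The mechanism behind Proposition 3 can be simulated directly. The sketch below is my own illustration; the baseline structure (0.6, 0.2), the prior µ = 1/2, the misperception weight β = 1/2, and the helper names are all arbitrary choices. A skew-neglecting agent perceives each structure as a mixture of itself and its symmetrized, unskewed counterpart, and his excess optimism after the accept-signal grows with the left-skew δ.

```python
def perceived(pG, pB, beta):
    """A skew-neglecting agent perceives (pG, pB) as a beta-mixture of the true
    structure and the symmetric structure (p, 1 - p), p = (pG + (1 - pB)) / 2."""
    p = (pG + (1 - pB)) / 2
    return beta * pG + (1 - beta) * p, beta * pB + (1 - beta) * (1 - p)

def posterior_accept(pG, pB, mu):
    """Bayesian posterior P(s = G | accept-signal)."""
    return mu * pG / (mu * pG + (1 - mu) * pB)

def optimism_gap(delta, pG0=0.6, pB0=0.2, mu=0.5, beta=0.5):
    """Behavioral minus Bayesian posterior after accepting, for (pG0 + delta, pB0 + delta)."""
    pG, pB = pG0 + delta, pB0 + delta
    hat_pG, hat_pB = perceived(pG, pB, beta)
    return posterior_accept(hat_pG, hat_pB, mu) - posterior_accept(pG, pB, mu)

# The gap rises monotonically as delta shifts the structure toward more left-skew.
gaps = [optimism_gap(d) for d in (-0.1, 0.0, 0.1, 0.2)]
```

Note that adding δ to both pG and pB leaves the symmetrized midpoint p unchanged, so the entire effect runs through the increased skew, just as in the text’s argument.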

4 Experiment: An Aversive, Visceral Experience

Proponents of limits on incentives worry that incentives may interfere with sound decision making. In

this section, I conduct a laboratory experiment to test whether incentives can indeed affect subjects’

expectations about how unpleasant they will find a highly visceral, aversive experience.

I incentivize participants to eat whole insects. This is a suitable transaction for this study, for

several reasons. First, it is unfamiliar as well as rather intense to most subjects.29 This ensures that

their expectations are potentially malleable, and that they have an incentive to inform themselves

about the transaction. Second, there are no externalities. Hence, subjects’ expectations about the

outcome of the transaction exclusively concern their own experience; there are no social concerns that

could act as a confound. Third, it satisfies the constraints of a laboratory experiment. Insects are

commercially processed for human consumption in certified facilities, eating them is harmless, and

paying for people to eat them is legal.

28 This is not the only model through which incentives can affect optimism; the model by Brunnermeier and Parker (2005) is another. In that model, agents choose prior beliefs by optimally trading off anticipatory utility against the costs of possibly making suboptimal decisions due to distorted beliefs. Because higher incentives increase the marginal anticipatory utility the agent can gain from increasing his prior, they may, in some specifications, lead to more optimistic beliefs. The many other ways in which subjects could deviate from Bayesian rationality do not have direct implications for how incentives affect optimism (Peterson and Beach (1967), Tversky and Kahneman (1974), Grether (1980), Grether (1992), Rabin (1994), El-Gamal and Grether (1995), Rabin and Schrag (1999), Massey and Wu (2005), Holt and Smith (2009), Ambuehl and Li (2015)). A closely related theoretical literature considers how subjects manipulate their beliefs, but imposes Bayesian rationality (Benabou and Tirole (2002), Koszegi (2006), Caplin and Leahy (2001), Gottlieb (2014)). Manipulation is achieved, for instance, by choosing to forget undesired signal realizations. Formally, this is similar to the choice of an information structure in the present setting.

29 Subjects frequently reported that this was one of the most interesting experiments they had participated in. Occasionally, subjects reported that the experiment was “stressful” or that the “insects were scary”. Even in countries such as China, Thailand, and Mexico, insect-eating is associated with particular regions and/or communities, rather than frequently practiced by a wide majority. In my data, Asians and Hispanics are neither more nor less willing to eat insects than Caucasians.


4.1 Design

Structure. The experiment follows a 2×2 across-subjects design. The first dimension varies whether

a subject is offered a $3 or $30 incentive for eating an insect (the low and high incentive conditions,

respectively). The second dimension varies whether or not a subject selects and watches a video about

insects as food (the video and no video conditions, respectively). This allows me to test the prediction

that endogenous information acquisition amplifies the effect of incentives on takeup (prediction 1 in

section 3). Subjects in the video condition choose between two videos entitled “Why you may want

to eat insects” (the pro-video) and “Why you may not want to eat insects” (the con-video).30 I can

therefore test whether higher incentives skew subjects’ demand towards information that encourages

rather than discourages participation (prediction 2 in section 3).

This study focuses on the effects of information acquired due to incentives, rather than on informa-

tion conveyed by incentives. Hence, the design minimizes contextual inference. Specifically, subjects

are aware of both payment amounts, and that they are randomly assigned to one of them. Therefore, they cannot rationally interpret the incentive they are given as a signal about the experience of

eating an insect. (By contrast, subjects who watch a video are not aware that others do not, and vice

versa.)

To study subjects’ expectations about the unpleasantness of eating insects, I measure the least

amount of money for which they are willing to eat one (their WTA). Additional decisions are necessary

to measure this, since the effect of incentives on expectations cannot be determined by comparing the

decisions some subjects make at a low incentive to those others make at a high incentive. Specifically,

subjects reveal their WTA by filling in multiple decision lists. In each of them they decide, for a

variety of compensation amounts, whether or not to eat an insect in exchange for that amount. All

choices are payoff relevant; one of them is randomly selected for implementation at the end of the

experiment (see below for details).

Because subjects reveal their WTA after having been promised either a low or a high incentive

payment, their choices may not only be affected by the information acquired due to that incentive, but

also by anchoring (see section 1 for references). The anchoring hypothesis predicts that subjects in the

high incentive condition will reveal a higher WTA than those in the low incentive condition. Such an

effect may mask subjects’ attempts to persuade themselves that consuming insects is tolerable when

incentives are high. The experimental design allows me to difference out anchoring. Self-persuasion

30 The pro and con videos each list reasons for and against eating insects. The reasons stated in the pro-video are 1. Insects can be yummy, 2. Insects are a highly nutritious protein source, 3. Our objection to eating insects is arbitrary, 4. Insects are more sustainable than chicken, pork, or beef, 5. We already eat insects all the time. The reasons stated in the con-video are 1. Some cultures eat insects. But to those of us who are not used to it ... see for yourself, 2. Insects have many body parts. Most of those we do not usually eat in other animals, 3. When you eat an insect, you eat all of it. In particular its digestive system, including its stomach, intestine, rectum, anus and whatever partially digested food is still in there, 4. We tend to associate insects with death and disease. Even if we know that eating some insects is harmless, this association is difficult to overcome. The videos are available on https://youtu.be/HiNnbYuuRcA (“Why you may want to eat insects”) and https://youtu.be/ii4YSGOEcRY (“Why you may not want to eat insects”). Transcriptions are in Appendix A.


effects should be more pronounced when subjects can acquire information about the transaction (in

the video condition) than when they cannot (in the no video condition). If anchoring effects are

unchanged across the video and no-video conditions, the difference-in-difference in WTA identifies the

self-persuasion effect.

Timeline. The experiment proceeds in 7 steps, which are summarized in Table 1.

In the first step, subjects learn the incentive amount that they are assigned to, and that they will

decide, for each of five food items, whether to eat the item in exchange for that amount. They then

learn that all of the food items are whole insects that are either baked, or cooked and dehydrated, and

produced for human consumption. They do not actually make these decisions until the fourth step of

the experiment. But the knowledge of the incentive they will be offered may affect their behavior in

steps 2 and 3.

Only subjects in the video condition participate in the second step. They each select one of the two

videos, and watch it. The titles and the approximate length of 6 minutes are all the information subjects

have about them. Each subject in the video condition must view one of the videos, and nobody can

watch both. Because the videos are relatively long and contain significant detail, incentives may also

affect which parts of the video the subjects pay attention to.

Subjects in the video condition also select at least four out of a selection of 9 video clips. This

reveals whether incentives affect the amount of information demanded. The clips are grouped in bins

of three named “Reasons for eating insects”, “Reasons against eating insects”, “Other information

about eating insects”. Subjects know that they will either watch the selected 6 minute video, or all

the clips they selected, but not both. They also know that the chance of the former is 97%.31

In the third step, subjects fill in five multiple decision lists, one for each of the five insect species

in column 1 of Table 2. The compensation amount p ranges from $0 to $60 in 21 increasingly larger

steps.32 Subjects select the least amount for which they are willing to eat the food item by clicking

on the respective line; the remaining choices are filled in automatically. These choices reveal subjects’

willingness-to-accept (WTA) to eat each of the insects.
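The price-list mechanics can be sketched in a few lines. This is my own illustration; the helper names are hypothetical, and the amounts are those listed in footnote 32. A single click pins down all 22 rows and brackets the subject’s WTA between the last refused and the first accepted amount.

```python
# Compensation amounts from the decision lists (see footnote 32).
AMOUNTS = [0, 1, 2, 3, 4, 6, 8, 10, 12.5, 15, 17.5, 20, 22.5, 25,
           27.5, 30, 33, 36, 39, 44, 50, 60]

def choices_from_switch_point(switch_index):
    """Clicking row `switch_index` fills in the list monotonically:
    refuse every amount below it, accept every amount from it on."""
    return [i >= switch_index for i in range(len(AMOUNTS))]

def wta_bracket(switch_index):
    """The revealed WTA lies between the last refused and first accepted amount."""
    lower = AMOUNTS[switch_index - 1] if switch_index > 0 else None
    return lower, AMOUNTS[switch_index]

# A subject who first accepts at $12.5 refuses $10, revealing a WTA in ($10, $12.5].
choices = choices_from_switch_point(8)
bracket = wta_bracket(8)
```

Enforcing a single switch point in this way rules out non-monotonic response patterns, which is why the design fills in the remaining rows automatically once the subject clicks one line.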

In the fourth step, subjects make the decisions they were promised in step 1 (the treatment deci-

sions). Subjects in the high (low) incentive condition decide, for each of the insects, whether or not

to eat that insect in exchange for $30 ($3).

31 The probability that the choice of clips will be implemented is low for two reasons. First, the focus of this experiment is on the kind of information, rather than on the amount. A low implementation probability of the clips minimizes the chance that behavioral effects are driven by decisions concerning the amount rather than kind of information. Second, there are many possible selections of video clips, each of which could potentially differently affect behavior. By contrast, there are only two selections of 6-minute videos. Hence, I expected that a low implementation probability of the clips would lead to higher statistical power. To incentivize the decision, the implementation probability is nonetheless positive.

32 The amounts are 0, 1, 2, 3, 4, 6, 8, 10, 12.5, 15, 17.5, 20, 22.5, 25, 27.5, 30, 33, 36, 39, 44, 50, 60. Resolution is finer at lower levels since the distribution of WTA is positively skewed. The amount $3 was not included in the decision lists for the first 78 Stanford subjects.


                                                                      Implementation probability
1. Learn incentive that will be offered in step 4 to eat an insect: $3 or $30
2. Video treatment only: Select video to watch and watch it.
3. Fill in 5 multiple price lists, one for each of 5 species.                7%
4. Make the decision announced in step 1, for each species of insect.       80%
5. Insects handed out. Filler task: CRT, Raven's matrices.
6. Fill in 5 multiple price lists, one for each species.                     7%
7. Predict others' WTA for each species, and each payment condition.         6%

Table 1: Experiment timeline. Instructions for steps 1 through 6 are read out aloud in the beginning of the experiment. Instructions for step 7 are displayed on the subjects' screens right before it starts.

Subjects have no information about the food items up to and including step 4, except for a

verbal description and, potentially, whatever information was contained in the video they had chosen.

This motivates subjects in the video treatment to carefully decide which video to watch, and to pay

attention.

I test the persistence of the effects of incentives by measuring subjects’ WTA both before they

have seen the insects (in step 4), and after. Thus, in the fifth step, all subjects receive five containers.

Each is filled with insects and a folded piece of paper with a code. Subjects must enter all codes into

the computer. This forces them to open each container, remove the label from within and thus view

and smell each of the food items. The subjects are encouraged to closely examine each specimen. (See

appendix A for pictures of the insects.) As a filler task during the handing out of the insects, subjects

complete an extended version of the Cognitive Reflection Test (Toplak, West and Stanovich, 2014),

and 24 of Raven’s (1960) standard progressive matrices.33

In order to measure subjects’ awareness of the effects of incentives on others, the seventh step asks

them to predict the WTA of previous participants for each of the insects. Subjects make separate

predictions for others in the $3 and in the $30 incentive conditions.34 They only predict the WTA of

previous participants in the same video condition as themselves. Subjects only learn that they will

make these predictions right before they start with this step, so their own choices in previous steps

are not affected by considerations of how others would decide.

Payment and execution of consumption decisions. The incentive scheme is chosen such that

subjects find it optimal to reveal their genuine preferences in each decision they make. Specifically, at

the very end of the experiment, one of each subject's decisions is randomly chosen for implementation.

That decision entirely determines her payment and consumption of insects. With 80% probability one

of the five treatment decisions (which are based on the promised incentive amount) is selected. This

33 Subjects complete sets D and E.

34 Each participant first makes a prediction for an average participant. They then separately predict the WTA of those who were offered $3 and of those who were offered $30, respectively. Subjects always predict the average participant first, but then answer first about either the participants in the $3 condition or about the participants in the $30 condition, with equal probability.



probability is high so that subjects’ thinking about the experience of eating insects is influenced

primarily by the incentive amount they are offered for those decisions, rather than by those in the

multiple price lists. With the remaining 20% probability one of the remaining decisions is selected for

implementation, as detailed in Table 1.
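The random-lottery mechanism described above can be sketched in a few lines. The step labels and probabilities are taken from Table 1; the code itself is an illustrative reconstruction, not the experiment's software.

```python
import random

# Step labels and implementation probabilities from Table 1.
STEPS = [
    ("treatment decisions (step 4)", 0.80),
    ("price lists before insects (step 3)", 0.07),
    ("price lists after insects (step 6)", 0.07),
    ("predictions of others' WTA (step 7)", 0.06),
]

def draw_step(rng: random.Random) -> str:
    """Draw the step whose decision will determine the subject's payment."""
    labels, weights = zip(*STEPS)
    return rng.choices(labels, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [draw_step(rng) for _ in range(100_000)]
share = draws.count("treatment decisions (step 4)") / len(draws)
print(f"share of draws implementing step 4: {share:.3f}")  # close to 0.80
```

Because only one randomly drawn decision counts, each decision is payoff-relevant in isolation, which is what makes truthful revelation optimal in every list.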

Subjects make some consumption decisions before having seen the actual insects, and thus may

be unpleasantly surprised. Participants cannot be forced to ingest insects. To nevertheless incentivize

them to reveal their genuine preferences during the experiment, reneging on a decision made during the

experiment is costly. Each subject receives a completion payment of $35. But someone who reneges

not only forfeits whatever she would have received for eating the insect, she also sees her completion

payment reduced by $20. Hence, somebody who accepts an offer and subsequently reneges would have

been better off if she had never encountered the offer. (Subjects for whom a decision is selected in

which they have chosen not to eat the insect cannot change their decision.) Subjects know this from

the outset of the experiment.
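The payoff logic of the reneging penalty can be made explicit with the dollar amounts from the text; the function below is an illustrative sketch, not the experiment's implementation.

```python
COMPLETION = 35.0  # completion payment, $
PENALTY = 20.0     # deduction for reneging on an accepted offer, $

def total_payment(accepted: bool, reneged: bool, incentive: float) -> float:
    """Payment when an eating decision is drawn for implementation."""
    if not accepted:
        return COMPLETION                # declined: completion payment only
    if reneged:
        return COMPLETION - PENALTY     # forfeit the incentive and pay $20
    return COMPLETION + incentive       # ate the insect: keep everything

# Accepting $30 and then reneging yields $15, strictly worse than the $35
# from never accepting the offer in the first place.
print(total_payment(True, True, 30), total_payment(False, False, 30))  # 15.0 35.0
```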

A subject's payment from the experiment may also be determined by one of her predictions. If so,

she will not consume any insects, and her completion payment is reduced by $0.50 for each $1 from

which the prediction differs from the truth.35
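Because the penalty is linear in the absolute prediction error, the expected penalty is minimized at the median of the subject's belief distribution (footnote 35). A small numerical check with a made-up belief distribution:

```python
import statistics

# Hypothetical, equally likely beliefs about others' WTA (illustration only).
belief = [5, 8, 10, 14, 30, 45, 60]

def expected_penalty(prediction: float, rate: float = 0.5) -> float:
    """Expected reduction of the completion payment: $0.50 per $1 of error."""
    return rate * sum(abs(prediction - w) for w in belief) / len(belief)

median = statistics.median(belief)  # 14
# The median weakly beats every other integer prediction on the price grid.
assert all(expected_penalty(median) <= expected_penalty(g) for g in range(0, 61))
print(f"optimal prediction: {median}, expected penalty: ${expected_penalty(median):.2f}")
```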

All subjects know that consumption of food items occurs in a visually secluded space in the presence

of only the experimenter who ensures that the participant completely consumes the assigned insect.

Hence, social considerations are absent.

4.2 Implementation and Preliminary Analysis

A total of 671 subjects participated in one of 39 sessions in May, June, and July 2015, each of which lasted about 2.5 hours. In each session both payment conditions were present, but either all

or no subjects in a session were in the video condition. A large number of subjects is required

since individuals' willingness to eat insects is highly heterogeneous. I ran all treatments at Stanford

University (110 subjects), the Ohio State University (499 subjects), and at the University of Michigan

(62 subjects) using an interface coded in Qualtrics.36

I recruited subjects using the universities’ experimental economics participant databases.37 The

invitation emails informed prospective participants that the experiment would involve the consumption

of food items on the spot, and asked recipients not to participate if they have food allergies, are

vegetarian or vegan, or eat kosher or halal. The recruiting emails did not mention insects.38

35 A subject thus maximizes her expected payoff by stating the median of her belief distribution of the WTA of previous participants.

36 I had expected to enroll several hundred subjects at the University of Michigan, but only 62 were available.

37 The 78 Stanford students who first participated in this experiment were not given any decisions to eat field crickets. The first 48 Stanford students also did not make any predictions about other participants. In addition, 68 Stanford students participated in an exploratory treatment. These data are not included in any analysis. In the exploratory treatments, the overwhelming majority of participants were presented with highly visceral images of insects, which made it very difficult for them to persuade themselves that eating insects is tolerable.

38 The exceptions are the invitation emails in Michigan, and those for the last 31 Stanford subjects, which informed prospective participants that the experiment involves the voluntary consumption of food items, including edible insects. This



Nonetheless, 60 subjects essentially opted out of the experiment; they rejected every offer in every

multiple decision list throughout the study. In particular, each of these participants rejected each of

10 offers to eat an insect in exchange for $60. For these participants, $30 is not a high incentive. It is

thus unsurprising that the fraction of these subjects is very similar across incentive treatments, and

statistically indistinguishable. (See Appendix B.3 for details.) Since these subjects add no interesting

variation to the data, I drop them from the analysis.39 Appendix B.4 presents robustness checks to all

main results, including a double hurdle model that explicitly models the decision of the “un-buyable”

subjects and accounts for the stochasticity in that choice. An additional six subjects refused to remove

the labels with the codes from the food containers in step 5 of the study, one participant realized in

the middle of the study that she had misunderstood the study, and another participant failed to watch

the video he had selected. I also exclude these subjects from the analysis.

Thus, the study sample contains a total of 603 participants, 234 participants in the no-video

treatment (118 and 116 with $3 and $30 incentives, respectively), and 369 in the video treatment (183

and 186 with $3 and $30 incentives, respectively).40

Randomization Check. Randomization into treatments was successful. Of 24 F -tests for differ-

ences in subjects’ predetermined characteristics across treatments, only one is significant at the 5%

level, and three more are significant at the 10% level.41 This falls within the expected range. Details

are reported in Appendix B.1.

Regression specification. In all analyses I estimate linear regression specifications. Throughout,

I cluster standard errors on the subject level, and I use university and, if appropriate, species fixed

effects. Data from the multiple price lists are interval-coded and possibly censored. For ease of

presentation, I use the interval-midpoints for analysis, and set subjects’ WTA to $60 if they rejected

all offers in a multiple price list. Appendix B.4 performs robustness checks. Accounting for interval-

coding and censoring in WTA increases the estimated effect sizes.
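A minimal sketch of this coding, using the offer amounts from footnote 32. The switch-point convention (WTA bracketed between the last rejected and first accepted amount) is my assumption for illustration:

```python
# Offer amounts from footnote 32.
AMOUNTS = [0, 1, 2, 3, 4, 6, 8, 10, 12.5, 15, 17.5, 20, 22.5, 25,
           27.5, 30, 33, 36, 39, 44, 50, 60]

def wta_midpoint(accepted: list) -> float:
    """accepted[i] is True if the subject would eat the insect for AMOUNTS[i].

    Returns the interval midpoint at the switch point; rejecting every
    offer is coded as the $60 top amount, as described in the text.
    """
    for i, acc in enumerate(accepted):
        if acc:
            if i == 0:
                return 0.0  # accepts even $0: WTA coded as 0
            return (AMOUNTS[i - 1] + AMOUNTS[i]) / 2  # interval midpoint
    return 60.0  # rejected all offers: censored at the top amount

# A subject who first accepts at $15 has WTA somewhere in ($12.5, $15].
accepted = [a >= 15 for a in AMOUNTS]
print(wta_midpoint(accepted))  # 13.75
```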

4.3 Analysis

Summary Statistics. Eating insects is aversive to most participants. For each of the five species,

Column 1 of Table 2 lists the fraction of subjects who have a positive WTA (before having seen the

insects). For each item the fraction of participants who would eat it for free falls short of 5%. Column

2 lists the median WTA. It is substantial, ranging from $9 to $18.75. Column 3 lists the percentage

information had no statistically measurable effect on the fraction of participants who refused to eat an insect for any price offered, perhaps because prospective participants knew that participation in this study would be lucrative even for subjects who do not consume an insect.

39 The treatment decisions cannot be used for this purpose, as they were different for subjects in the high and low incentive treatments. Amongst participants who reject every offer in every multiple decision list, 287 of 295 treatment decisions are rejected (97%).

40 I oversampled the video condition since only that condition reveals information choice.

41 Arts and humanities majors are slightly underrepresented in the low incentive treatments.



Food Item            Fraction WTA > $0   Median WTA   Fraction WTA ≥ $60

2 house crickets     0.96                9.00         0.17
5 large mealworms    0.96                18.75        0.30
3 silkworm pupae     0.95                13.75        0.23
2 mole crickets      0.96                13.75        0.23
2 field crickets     0.95                13.75        0.22

Table 2: Summary statistics of WTA to eat each insect (before the insects were distributed). Pooled over treatment conditions. $60 is the highest price offered in the multiple price lists. Interval midpoints are used for analysis. n = 603.

of subjects who would not eat a given insect even for the highest incentive amount offered in the

multiple price lists ($60). The numbers range from 17% to 30%.42

Subjects who had agreed to eat an insect could renege on the decision selected for implementation at the cost of $20. Five participants (0.8%) chose to do so.43 These participants would have

been better off never having been offered the voluntary choice to eat an insect in exchange for money.

All of them were in the high incentive condition.44

Information acquisition amplifies the effect of incentives. Panel A of Table 3 shows subjects’

decisions on the offers they were promised at the beginning of the experiment, averaged over species.

For subjects in the no video condition participation increases from 40.56% to 66.87% as incentives

are raised from $3 to $30. Subjects in the video treatment are 10.67 percentage points more likely to

accept the $30 incentive. But they are equally likely to eat an insect for $3 as those who could not

watch a video. Hence, the effect of incentives is amplified by more than a third if subjects have access

to a video, and thus can skew their information gathering and attention allocation. This confirms

prediction 1 of the model in section 3.

The effect of access to a video is concentrated in the high incentive condition. This suggests those

subjects made use of the videos to persuade themselves that eating insects is less appalling than they

would have thought had they been given the low incentive.

High incentives lead subjects to persuade themselves. The self-persuasion hypothesis is not

the only one that can explain the data in Panel A of Table 3. An alternative explanation is that the

video option affects subjects’ WTA to eat insects independently of the incentive treatment they are in.

This hypothesis can explain the data if access to the videos decreases the WTA of subjects who are

42 Each decision made in step 4 of the experiment is also made as a part of a multiple price list in step 3. These decisions may be inconsistent. I find this is the case for 15.15% of decisions, and 35.24% of participants reveal at least one inconsistency. These inconsistencies are conceptually different across the two incentive conditions, however, and thus cannot be used as a control or selection criterion. Appendix B.2 presents details.

43 I retain these subjects in the sample. This decision is not informative about whether or not the WTA these subjects stated before the handing out of the insects revealed their genuine expectations; it only means that those expectations may have been overly optimistic. If I do drop them, my results strengthen.

44 Four of them were in the video condition, and three had opted for the pro-video.



appalled by the thought of eating insects (whose WTA is in the vicinity of $30) but not those who find

it mildly unpleasant (whose WTA is in the vicinity of $3). If so, giving subjects access to the videos

will increase participation at $30, but leave participation at $3 unchanged, even if the distribution of

WTA within the video condition is entirely unaffected by the incentive subjects have been promised.

By studying the decisions subjects made in the multiple price lists in step 3 of the experiment, I

can disentangle the two hypotheses. In that step, all subjects make the same decisions, irrespective

of treatment condition. If the effect of the video option is independent of the treatment a subject is

in, the difference in WTA between those who could watch a video and those who could not should

not depend on the incentive condition. By contrast, if high, but not low incentives cause subjects to

persuade themselves that insects are tolerable food, the difference in WTA between those who could

watch a video and those who could not should be pronounced in the high incentive condition, but

absent in the low incentive condition.

Panel B of Table 3 shows that access to the video decreases WTA in the high incentive condition

by a significant $5.85, but has no statistically significant effect in the low incentive condition. The

difference in the effect of the video option across incentive treatments is a significant $6.64. Hence,

the data are consistent with the self-persuasion hypothesis, and inconsistent with the hypothesis that

the effect of access to a video is independent of the incentive treatment.

Panel C of Table 3 shows that the treatment effects persist beyond the distribution of the insects.

While receiving the insects on average increases subjects’ WTA by $2.33 (s.e. 0.36), it does not

significantly alter any treatment effects.

It is also noteworthy that in the no-video condition, the incentive treatment leads to a substantial

anchoring effect.45 The WTA of subjects in the no video treatment is $19.08 for subjects who are

offered the low incentive, and $24.14 for those who are offered the high incentive. Below, I show that

behavior in this experiment is not consistent with the alternative hypothesis that the video option

eliminates anchoring.

How do subjects persuade themselves? Subjects who are given higher incentives are more likely

to demand information that encourages rather than discourages eating insects, as predicted by the

model in Section 3 (prediction 2). Columns 1 and 2 of Table 4 show that the fraction of subjects

choosing the con-video drops by almost half, from 19% to 11%, as incentives rise from $3 to $30.

(The majority of subjects choose to watch the pro-video, perhaps because they already know why

they may not want to eat insects.) Subjects’ selection of video clips reinforces this finding. Subjects

in the high incentive condition select significantly fewer con-clips, and significantly more pro-clips; the

number of other-clips is unaffected (Columns 3 - 5). Incentives do not affect the total number of clips

selected, as most subjects opt for the minimum number of four clips (Column 6). In this experiment,

45 Previous experiments on anchoring make it evident to subjects that the anchors they use are uninformative.



A. Percentage willing to eat insects for promised amount

n = 603                        Offered $3,   Offered $30,   Difference
                               accept $3     accept $30     $30 - $3

No Video                       40.56         66.87          26.30***
                               (3.88)        (3.48)         (5.16)
Video                          40.22         77.53          37.31***
                               (3.27)        (2.42)         (4.01)
Difference Video - No Video    -0.34         10.67**        11.01*
                               (5.13)        (4.31)         (6.54)

B. WTA before distribution of insects, in dollars

n = 603                        Low           High           Difference
                               Incentive     Incentive      High - Low

No Video                       19.08         24.14          5.05**
                               (1.71)        (1.91)         (2.54)
Video                          19.88         18.28          -1.59
                               (1.51)        (1.35)         (1.99)
Difference Video - No Video    0.79          -5.85***       -6.64**
                               (2.30)        (2.37)         (3.22)

C. WTA after distribution of insects, in dollars

n = 603                        Low           High           Difference
                               Incentive     Incentive      High - Low

No Video                       21.86         26.84          4.97*
                               (1.84)        (2.02)         (2.70)
Video                          22.58         19.66          -2.92
                               (1.58)        (1.41)         (2.09)
Difference Video - No Video    0.71          -7.18***       -7.89**
                               (2.45)        (2.50)         (3.42)

Table 3: Panel A shows the fraction of participants in percent who agree to eat the food item for the incentive amount they were promised, by treatment, and averaged over the five food items. Panel B shows mean WTA in dollars by treatment, before the insects were distributed. Panel C shows mean WTA in dollars by treatment, after the insects were distributed. Standard errors are in parentheses, clustered by subject. Coefficients are estimated using university and species fixed effects. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively. Asterisks are suppressed for levels.



incentives affect the kind of information demanded, but not the amount, possibly because of the lower

bound.

Each video contains a variety of arguments laid out over six minutes. Hence, incentives may not

only affect subjects’ explicit choice of information sources, they may also affect which parts of a given

video subjects pay attention to and believe in. The latter mechanism must be at play, since the 7

percentage point difference in choice frequencies of the pro-video is not sufficient to explain the $6.64

effect on subjects’ WTA. If the effect of the different videos is $60, the maximum that can be measured

in the multiple price lists, the 7 percentage point difference in video choice can explain only a $4.20

difference in WTA. If the effect is a more plausible $30 (the interquartile range in WTA is $30.20),

this drops to $2.10.46 The experiment in section 5 explicitly shows that incentives cause subjects to

perceive the same information differently.47
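The bound is simple arithmetic on the numbers just quoted:

```python
# Back-of-the-envelope check of the bound in the text: a 7 pp shift in which
# 6-minute video is chosen can move average WTA by at most
# (video effect) x 0.07, which falls short of the $6.64 treatment effect.
video_choice_shift = 0.07          # difference in pro-video choice frequency
for video_effect in (60, 30):      # $60 = measurement ceiling; $30 = plausible
    bound = video_choice_shift * video_effect
    print(f"video effect ${video_effect}: explains at most ${bound:.2f} of $6.64")
```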

                  (1)       (2)       (3)      (4)       (5)      (6)

                  6-min Video         # Clips
                  Pro       Con       Pro      Con       Other    Total

High Incentive    0.89      0.11      2.28     0.98      1.07     4.33
                  (0.03)    (0.03)    (0.07)   (0.07)    (0.06)   (0.08)
Low Incentive     0.81      0.19      2.10     1.23      1.06     4.39
                  (0.03)    (0.03)    (0.07)   (0.08)    (0.06)   (0.08)
Difference        0.07**    -0.07**   0.18*    -0.25**   0.01     -0.06
                  (0.04)    (0.04)    (0.09)   (0.11)    (0.08)   (0.11)

Observations      397       397       319      319       319      319
R-squared         0.01      0.01      0.01     0.02      0.04     0.03

Table 4: Information choice by incentive condition. Each subject in the video condition chose exactly one 6-minute video, and at least 4 video clips. The number of observations is smaller for the video clips as the first 78 participants at Stanford could not choose any clips.

Are subjects aware of these effects on themselves? On average, participants are not aware

of the self-persuasion effect. This is true even though their predictions of the anchoring effect are

extremely accurate. This calls into question whether the effects of incentives on subjects' behavior are

due to fully rational behavior.48

46 The mean WTA of subjects who saw different videos differs by $10.60. This depends on an endogenous choice of which video to watch.

47 As an alternative method to answer the same question, one can redo the estimation excluding those subjects who opted for the con-video. This does not alter any qualitative conclusion. These estimates, however, suffer from endogeneity bias, since subjects can choose which video to watch.

48 It does not disprove the rationality hypothesis, however. See footnote 5.



To show this, I estimate the following regression model, separately for subjects in the video condi-

tion and for subjects in the no-video condition. (Recall that subjects in the video condition predicted

the WTA of other subjects in the video condition, and similarly for subjects in the no-video condition.)

WTA_ics = β0 + β1 · 1(incentive own_i = high) + β2 · 1(incentive other_c = high) + ε_ics    (2)

Here, WTA_ics is subject i's prediction of the least amount of money for which another subject in

incentive condition c is willing to eat an insect of species s. In words, I regress subjects’ predictions

on a dummy that indicates whether the prediction concerns a previous subject facing high or low

incentives, and I let the intercept vary depending on whether the subject making the predictions was

herself offered the high or the low incentives. I estimate this model using OLS with university and

species fixed effects, and cluster standard errors by subject.
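A sketch of how equation (2) can be estimated with university and species fixed effects and subject-clustered standard errors, using statsmodels on simulated data (the data-generating values below are made up; this is not the author's code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_species = 60, 5  # illustrative sample sizes

# Each subject predicts the WTA of others in both incentive conditions,
# for each species; own incentive and university are constant per subject.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), 2 * n_species),
    "species": np.tile(np.arange(n_species), 2 * n_subjects),
    "own_high": np.repeat(rng.integers(0, 2, n_subjects), 2 * n_species),
    "other_high": np.tile(np.repeat([0, 1], n_species), n_subjects),
    "university": np.repeat(rng.integers(0, 3, n_subjects), 2 * n_species),
})
# Simulated predictions: a $5 effect of the predicted subject's incentive.
df["wta_pred"] = 20 + 5 * df["other_high"] + rng.normal(0, 5, len(df))

model = smf.ols(
    "wta_pred ~ own_high + other_high + C(university) + C(species)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["subject"]})
print(model.params["other_high"])  # estimate of beta_2, roughly 5 here
```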

Column 1 of Table 5 displays the estimates of equation (2) for subjects in the no-video treatment.

It shows that these subjects predicted that other subjects in the no-video treatment would demand

an additional $4.84 to eat an insect when offered the high rather than the low incentive. This deviates

from the measured effect of $5.08 by just $0.24, or 4.7%. Column 2 shows that subjects in the video

treatment predict that the effect of incentives on other subjects in the video treatment is $5.12, and

thus very accurately predict the anchoring effect. In reality, however, that effect is countervailed by

a sizable self-persuasion effect. These two effects sum to a negative $1.59. Hence, the predictions

of subjects in the video treatment are wildly off. On average, subjects lack awareness of the self-

persuasion effect.

Even though subjects do not predict the self-persuasion effect, their predictions are affected by it.

This is corroborating evidence for the self-persuasion effect. Apparently, subjects who have both the

incentive and the opportunity to persuade themselves do so. But because they lack awareness of this

effect, they project their own lower willingness to accept onto others. Column 2 of Table 5 shows that

amongst subjects in the video condition, those who were given the high incentive make significantly

lower predictions about the least amount for which previous participants were willing to eat an insect.

For those who could not see a video, this effect is much smaller and not statistically significant.49

These averages mask some individual heterogeneity. Subjects both predict different effects of

incentives on others, and are affected differently by incentives. This heterogeneity in both beliefs and

behavior may contribute to disagreements about policies that restrict incentives.

To study this heterogeneity, I split the sample into those who predict that incentives lower WTA

(consistent with the idea that anchoring outweighs self-persuasion), and those who predict the oppo-

site. I can do so because each subject separately predicted the WTA of previous participants in the

high and low incentive conditions. These predictions cannot be an ex-post rationalization of subjects’

own behavior, since each subject was in only one treatment. Roughly one third of subjects predict

49 The difference-in-differences is not significantly different from zero, however.



                              (1)        (2)       (3)        (4)        (5)        (6)

Sample                        All                  Predict incentives    Predict incentives
                                                   decrease WTA          increase WTA

                              No Video   Video     No Video   Video      No Video   Video

Subjects' predictions
Prediction of effect          4.84***    5.12***   -6.80***   -6.49***   10.64***   11.01***
  of incentives (β2)          (0.75)     (0.64)    (0.77)     (0.67)     (0.69)     (0.57)
Effect of predictor's         -1.06      -3.38**   -0.09      -5.50**    -1.40      -2.12
  own incentive (β1)          (1.63)     (1.35)    (3.38)     (2.59)     (1.78)     (1.52)
Constant (β0)                 20.35***   19.55***  28.37***   27.60***   16.28***   15.35***
                              (1.25)     (1.10)    (2.45)     (2.11)     (1.31)     (1.14)

Actual effect
Actual effect                 5.08**     -1.59     3.78       -12.08***  5.89*      3.18
  of incentives               (2.53)     (1.98)    (4.52)     (3.70)     (3.07)     (2.27)
Subjects' prediction          -0.25      6.71***   -10.58**   5.60       4.75       7.83***
  minus actual effect         (2.59)     (1.96)    (4.61)     (3.78)     (3.06)     (2.28)

Observations                  3,510      4,958     1,170      1,603      2,340      3,355

Number of subjects
High incentive                116        162       38         51         78         111
Low incentive                 118        159       40         57         78         102

Percentage of subjects
High incentive                100        100       32.8       27.4       67.2       72.6
Low incentive                 100        100       33.9       31.1       66.1       68.9

Table 5: The first 78 participants at Stanford did not predict others' WTA, and are therefore not included in the regressions in this table. Estimated using university and species fixed effects. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively.

that higher incentives decrease WTA, and this fraction does not substantively differ across the four

treatment cells, as reported in the two bottom rows of Table 5 (p = 0.87, F -test for joint significance).

Columns 3 and 4 of Table 5 report the effect of incentives on the WTA of those who predict that self-

persuasion overweighs, columns 5 and 6 report the effect for those who make the opposite predictions.

Amongst those subjects who predict that higher incentives decrease WTA, higher incentives in fact do so, by a highly significant $12.08, but only for the 101 of them who were in the video condition. Amongst

the remaining subjects, higher incentives increase WTA, but do so significantly (at the 10% level)

only for those who could not watch a video.50 Hence, subjects are partially aware of how incentives

50 It is not possible to compare the estimates of subjects' predictions of the effect of incentives to the actual effect on the subsamples in Columns (3) - (6) without additional statistical corrections. This is because those subjects are selected on their prediction of the effect, but not on the associated behavior. Hence, the OLS estimates of subjects' mean predictions within those subsamples are biased.



affect themselves, even though they fail to appropriately account for situational factors such as access

to information.

Self-Persuasion or “Information Weakens Anchoring”? While incentives decrease WTA in

the video condition relative to behavior in the no-video condition, the effect in the video condition

alone is not statistically different from zero. Hence, an alternative to the self-persuasion hypothesis is

that information eliminates anchoring.

This alternative falls short of explaining many aspects of the data.51 First, it cannot explain why

subjects offered the $30 incentive are more likely to accept that offer when they are in the video

condition. To explain the change in participation at $30, the $30 anchor would have to increase the

WTA of some subjects from a value below $30 to a value above $30. The anchoring hypothesis,

however, merely says that valuations will be drawn towards an anchor, not that they will overshoot.

Second, the alternative hypothesis can only explain why giving subjects the video option might

attenuate a positive effect of the incentive on WTA; it cannot explain why higher incentives would

lead to lower WTA. Therefore, it fails to explain why WTA falls as incentives increase for those who

predict that higher incentives decrease WTA (Columns 3 and 4 of Table 5).

Third, the self-persuasion hypothesis naturally explains why subjects in the video treatment make

lower predictions about the least amount for which others would eat an insect when they themselves

are given high rather than low incentives. The alternative hypothesis cannot easily account for this.

Discussion. This experiment provides evidence for the kind of concerns that proponents of limits

on incentives have raised.

It shows that incentives can affect subjects’ expectations about how unpleasant they will find an

experience that only affects themselves. Subjects given both the incentive and the opportunity to

persuade themselves that eating an insect is not that unpleasant choose to do so. Therefore, endoge-

nous information acquisition amplifies the effect of incentives, as predicted in Section 3 (prediction

1). Subjects achieve this partly by demanding more information that encourages participation, and

less information that discourages participation when incentives are high, confirming prediction 2 of

Section 3. Nonetheless, they fail to predict the effect of incentives in others, which suggests that

they are not aware of it. This calls into question whether behavior in this experiment is a result of

Bayes-rational behavior.

This experiment leaves two open questions, which I address in the next experiment. First, incen-

tives can increase ex post harm only if they increase participation by individuals who subsequently

regret this decision (an increase in false positives). If they instead cause individuals to participate who

would have ex post been pleased with participation even for a lower incentive amount, this increases

51 In light of previous literature, this is not surprising. The participants in Ariely et al. (2003) were all given a sample of the aversive stimulus (obnoxious noise) before they were subjected to the anchor and revealed their WTA to listen to more of the same noise, and hence had arguably complete information about that experience. Substantial anchoring effects were nonetheless present.



welfare (a decrease in false negatives). The insect experiment cannot distinguish between these mech-

anisms, as it does not contain a measure of subjects’ actual experience of eating the insects. The

next experiment uses the induced preference paradigm (Smith, 1976). It thus allows me to separately

measure the effect of incentives on false positives and false negatives.

Second, the experiment allows me to test explicitly whether an increase in optimism due to higher

incentives is consistent with Bayesian rationality, and thus corroborate the suggestive evidence on

deviations from Bayesian behavior obtained in the insect experiment.

5 Experiment: A Losing Gamble

In this experiment I directly address the concern by proponents of limits on incentives that incentives

affect how individuals perceive the (likelihood of) upsides and downsides of a transaction. I explicitly

show that incentives cause subjects to perceive the same information differently. I also show that

incentives make subjects more optimistic in a way that is inconsistent with Bayesian rationality.52

This is relevant because only non-Bayesian decision makers can possibly be made worse off by higher

incentives.

This experiment complements the previous one in three ways. First, I induce subjects’ objective

function. I can thus separately study the effect of incentives on false negatives and false positives.

The distinction matters since an offer to participate in exchange for money can make an individual (ex

post) worse off only if the individual participates and subsequently regrets (false positive), but not if

the individual rejects, even by mistake (false negative). Second, this experiment excludes preference-

based explanations of the effects I document. Treatments therefore affect subjects’ beliefs about

outcomes, without altering those outcomes per se. In the insects experiment, by contrast, treatments

could possibly affect subjects not only by affecting their expectations concerning the experience of

eating an insect, but also by altering that experience itself. Third, I replicate the main finding from

the insect experiment in a different environment with a different population, and thus demonstrate

its robustness.

5.1 Design

The decision environment closely follows the model in section 3. I incentivize participants to risk

losing a money amount πB > 0, or nothing (πG = 0), each with prior probability µ = 0.5. The

participant can freely decide whether or not to take this gamble in exchange for an incentive amount

52 The criterion derived in Section 3 (proposition 2) is an implication of the law of iterated expectations and thus demands two prerequisites. First, subjects cannot have systematically biased priors. Second, the criterion is applicable when the entire distribution of posterior beliefs is observed. This, in turn, requires studying a sample in which the full distribution of signal realizations is observed. (In the insect experiment, by contrast, behavior might potentially be driven by specifics of the particular videos I used.) This experiment meets these criteria as it induces priors, and draws signal realizations separately and independently over subjects and decisions.


m, with πB > m > 0. Hence, taking the lottery either leads to a net loss (if the state is bad) or to a

net gain (if the state is good). A participant who does not take the lottery neither gains nor loses.

Structure. The experiment follows a 2× 2 design. The first dimension varies the incentive amount

for participating in the gamble. The second dimension varies whether the subject knows the incentive

amount she will be offered at the point she studies information about the consequences of accepting

the gamble. Her attention allocation can respond to the incentive only if she knows the incentive.

The structure of this experiment thus resembles the previous one. I vary treatment conditions within

subjects; the experiment thus proceeds in multiple rounds, each of which follows the same steps.

Timeline. First, the participant either learns the incentive amount m he will be offered in that round

(the before condition), or that he will learn that incentive amount in step 3 (the after condition).

Second, he observes information on the consequences of participation. In principle, it perfectly

reveals whether or not taking the gamble will lead to a loss, but only at a considerable attentional

cost. Specifically, the participant is shown a picture consisting of 450 randomly ordered letters such

as in Panel A of Figure 2. If the state is good (taking the lottery does not lead to a loss), the picture

contains 50 letters G and 40 letters B (for “good” and “bad”, respectively). If the state is bad (taking

the lottery leads to a loss), these numbers are reversed. The participant can examine that picture in

any way and for as long as he likes. Implicitly, by choosing how to do this, he chooses the kind and amount

of information. For instance, a subject can count all letters to learn with certainty whether the state

is good or bad, or she can obtain a noisier signal by focusing on a part of the picture (Caplin and

Dean (2013a) and Caplin and Dean (2013b) use a similar methodology).53
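The asymmetry such a stopping rule creates can be sketched in a short simulation. This is a hypothetical illustration: it assumes each examined letter points toward the true state with probability 5/9 (i.e., 50 of the 90 informative letters match the state), and uses the asymmetric rule from footnote 53 (stop at a tally of +1 toward "good" or −5 toward "bad").

```python
# Hypothetical sketch of the asymmetric stopping rule in footnote 53:
# the subject counts letters until the running G-minus-B tally hits +1
# (decide "good", take the lottery) or -5 (decide "bad", reject).
import random

def takes_lottery(p_up, rng, stop_good=1, stop_bad=-5):
    """Simulate one reading of the picture under the stopping rule."""
    diff = 0
    while stop_bad < diff < stop_good:
        diff += 1 if rng.random() < p_up else -1
    return diff == stop_good

rng = random.Random(42)
trials = 20_000
# In the bad state a letter points toward "good" w.p. 4/9; in the good state, 5/9.
false_pos = sum(takes_lottery(4 / 9, rng) for _ in range(trials)) / trials
false_neg = sum(not takes_lottery(5 / 9, rng) for _ in range(trials)) / trials
# The nearby "good" barrier is easy to hit even in the bad state, so
# mistaken acceptances far outnumber mistaken rejections.
assert false_pos > false_neg
```

Because the "accept" barrier is much closer than the "reject" barrier, this reader accepts a losing gamble far more often than she rejects a winning one, which is exactly the skew the design lets subjects choose.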

Third, the subject learns (in the after-condition) or is reminded (in the before-condition) of the

incentive for taking the gamble, and decides whether or not to participate. This decision is made in a

new screen, and subjects cannot return to the screen with the picture. This prevents subjects in the

after-condition from skewing their attention allocation.54

Up to this point, the experiment lets me study the effect of incentives and skewed information

demand on false positive and false negative rates. The fourth step measures (a monotonic function

of) posterior beliefs about the state in the current round, and hence allows me to test for Bayesian

rationality. Specifically, the subject decides how much money to invest in a project. The agent loses

her money if the state in the current round is bad. If it is good, she wins back the money, plus a

decreasing return. Hence, the more confident she is that the state is good, the larger the investment

amount she optimally chooses. The payoffs are the sum of those from a quadratic scoring rule (e.g.

53 If, for instance, she counts letters until she either observed one more letter ‘G’ than ‘B’ or until she observed five more letters ‘B’ than ‘G’, the probability she mistakenly takes the lottery is high, and the probability she mistakenly rejects the lottery is low.

54 In principle, subjects could take screenshots. The fact that I find significant treatment effects shows that the majority of subjects did not engage in such behavior. To the extent that they did, my results underestimate the effect of endogenous attention allocation.


Selten (1998)) and a state-contingent payment that is independent of the subject’s choice.55 The

investment opportunity is presented as in Panel B of Figure 2. A subject who thinks about her degree

of confidence may conclude that her choice in step 3 was not optimal. Therefore the subject can

return to the previous step and alter her choice at any point while making her investment decision.
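Under one reading of the step-4 payoffs (the subject loses the invested amount if the state is bad and receives the listed return if it is good; the amounts and returns are those in footnote 55, the payoff form is my assumption), a risk-neutral subject's optimal investment is monotone in her posterior belief, which is what makes the invested amount usable as a belief measure. A minimal sketch:

```python
# Hypothetical payoff form: lose the invested amount x if the state is
# bad; receive the listed return r(x) if it is good. The risk-neutral
# optimum is then monotone in the posterior belief that the state is good.
amounts = [round(0.15 * k, 2) for k in range(15)]  # $0.00 ... $2.10
returns = [0.00, 1.19, 1.60, 1.87, 2.08, 2.25, 2.39, 2.50,
           2.59, 2.67, 2.74, 2.80, 2.85, 2.89, 2.92]

def optimal_investment(p_good):
    """Expected-payoff-maximizing amount given posterior belief p_good."""
    payoffs = [p_good * r - (1 - p_good) * x for x, r in zip(amounts, returns)]
    return amounts[max(range(len(payoffs)), key=payoffs.__getitem__)]

# The optimal choice weakly increases in the belief that the state is good.
choices = [optimal_investment(i / 100) for i in range(101)]
assert all(a <= b for a, b in zip(choices, choices[1:]))
```

Monotonicity follows because the returns are concave in the amount while the expected loss is linear, so the marginal gain of each $0.15 step crosses zero at a belief threshold that rises with the step.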

Payment. Participants are paid for one randomly selected decision of one randomly selected round.

Hence, they have an incentive to reveal their genuine preferences in each decision. To incentivize them

to scrutinize the pictures, the decision promised in the beginning of the round is selected with 80%

probability. With the remaining 20% probability, the investment decision is selected.

5.2 Implementation and preliminary analysis

I conducted this experiment on the Amazon Mechanical Turk online labor platform with a total of

450 participants in March and October 2015, coded in Qualtrics.56 Instructions are presented on

screen. Before participants can proceed, they have to correctly mark each of 10 statements about the

instructions as true or false.57

A participant who takes a losing bet suffers a loss of πB = $3.50. The incentive amounts m are

$0.50 and $3. Hence, the gamble is either win $3 / lose $0.50, or win $0.50 / lose $3, and is presented

in this way. Losses are deducted from a completion payment of $6, gains are added. By comparison,

laborers on Amazon Mechanical Turk typically earn an hourly wage around $5 (Horton, Rand and

Zeckhauser (2011), Mason and Suri (2012)).

Each participant completes all four treatments in individually randomized order.58 Subjects learn

at the beginning of the experiment that in some rounds they may not know the incentive amount

when examining the picture. They are also told that they will be given different lotteries in different

rounds, but no additional detail. For each participant and each round, states of the world are drawn

independently, and a picture with scrambled letters is generated individually.59

55 The possible investment amounts increase in units of $0.15 from $0 to $2.10. The vector of associated returns is $0.00, 1.19, 1.60, 1.87, 2.08, 2.25, 2.39, 2.50, 2.59, 2.67, 2.74, 2.80, 2.85, 2.89, 2.92. The fact that the optimal choice depends on risk preferences is immaterial to this analysis. Even if subjects are not risk neutral, their optimal choice is monotonic in posterior beliefs. This is sufficient to apply the criterion of section 3 to test for violations of the law of iterated expectations.

56 The results from the 155 subjects who participated in March are both qualitatively and quantitatively very similar to those reported here, and statistically significant. I obtained the additional observations as a replication check.

57 In case of a mistake, the computer only lets the subject know that at least one of the statements is marked incorrectly. Hence, it is extremely unlikely that participants complete this task by chance.

58 In addition, each subject participates in two additional rounds, in one of which the bet is win $3 / lose $3, and in one of which it is win $0.50 / lose $0.50. In both of these, the amounts are known from the start. Multiplying both the upside and the downside of the bet by the same amount leads to a more pronounced S-shape of the investment choices subjects made in step 4 of the round. This is consistent with rational inattention theory, which predicts that such a change leads to a second order stochastic dominance increase in posterior beliefs. The 155 participants who participated in March completed an additional two rounds.

59 Participants cannot use a text editor to automatically count the letters since they are presented in a picture format (HTML5 Canvas).


Figure 2: Panel (A): Presentation of state and information in the implicit information choice. Panel (B): Presentation of investment decision.

Preliminary analysis. On average, subjects take 33 minutes to complete the study. Subjects pay

attention to the pictures; they are able to discern the state with a significantly higher likelihood than

chance. Averaged over all treatments, subjects decide to bet 31.33 percent of the time if the state is

bad, and a significantly higher 62.88 percent of the time if it is good.60

18.06% of participants make use of the option to revise their decision about the bet after deciding

about the continuous investment in at least one round. They do so infrequently; in total, only 1.13%

of decisions are changed.61

5.3 Analysis

Endogenous attention allocation amplifies the effect of incentives. Panel A of Table 6 shows

subjects’ decisions of whether or not to take the losing gamble by treatment condition. Incentives

increase participation even when subjects examine the picture without knowing whether they will be

offered the large or the small incentive. In this case, the participation rate increases from 23.16%

to 67.80% as the incentive amount is raised from $0.50 to $3. Crucially, the increase is a significant

8.36 percentage points larger when subjects can skew their attention allocation in response to the

incentive offered. This confirms the prediction of the theory (prediction 1 in section 3), and replicates

the respective result of the insect experiment. Moreover, as with that experiment, this difference in

the effectiveness of incentives is almost entirely caused by behavior in the high incentive condition.

60 The time spent examining a picture is right skewed, with a mean of 46 seconds per picture, and a median of 22 seconds.

61 Data on reversals of betting decisions are only available for the 155 subjects who participated in March, due to a coding error. The stated numbers are the frequencies of revision within this subsample.


Overall, this shows that subjects who know that they will decide whether to take the gamble in

exchange for a high incentive literally perceive the same information differently than those who make

exactly the same decision but do not know that when examining the picture.

Being able to skew attention allocation mainly increases false positives. The offer to take

the gamble in exchange for payment can only make the subject (ex post) worse off if she accepts and

then loses, but not if she fails to accept even though she would have won. I therefore separately study

how the treatments influence the false positive and false negative rates.

Panel B of Table 6 shows that the ability to skew attention allocation according to the incentive increases

participation in the high incentive condition almost entirely due to an increase in false positives. If

the state is bad and skewing is not possible, 48.29% of participants take the lottery. If information

demand can be skewed, a highly significant additional 12.98 percent of participants take the same

gamble even though they will lose.

By contrast, the ability to skew the information demand has no statistically significant effect on

the rate of false negatives, as Panel C shows. When the state is good, about the same fraction of

participants recognize this, regardless of whether or not they can skew their attention allocation.

This increase in false positives is particularly large compared to subjects’ ability to discriminate

between the states. Subjects who are offered the high incentive but do not know this when examining

the picture are 40.60 percentage points (s.e. 3.91) more likely to take the bet when the state is good

than when it is bad (88.89% and 48.29%, respectively). The 12.98 percentage point increase in the false

positive rate from being able to skew the information demand is almost a third of this discrimination

ability (31.98%).

Incentives increase optimism in a non-Bayesian fashion. The ability to skew attention in

response to incentives increased subjects’ willingness to take the gamble in exchange for high incentives.

To test whether this response is consistent with Bayesian rationality, or whether it reflects a non-

Bayesian increase in optimism, I analyze subjects’ decisions of how much to invest in the decreasing

returns technology.62, 63 Figure 3 displays the cumulative distribution functions of the invested amount

by treatment condition. (These CDFs pool over the good and bad state, so that the law of iterated

expectations can be tested for.)

Even for Bayes-rational decision makers the CDFs may differ across treatment conditions, due to

differences in attention allocation. But for Bayesian decision makers, any pair of CDFs must cross

at least once. If it does not, there is a first order stochastic dominance relationship in a monotonic

62 In principle, the approach of eliciting subjects’ posteriors separately from their betting decision leaves open the possibility that some posteriors are not instrumental in the sense that they could be changed without affecting the betting decision. Such posteriors, however, are incompatible with optimization. If a subject acquires a posterior belief that could be altered without altering the betting decision with positive probability, then the subject would have been able to acquire a less informative posterior at a lower informational cost without affecting her choices.

63 Caplin and Dean (2015) provide an alternative measure of rationality that does not rely on separate elicitation of beliefs data. Appendix C presents this analysis.


Percentage of subjects willing to take the lottery

                                        Incentive   Incentive   Difference
                                        $0.50       $3          High - Low
A. Both states
Learn incentive
  after examining picture                23.16       67.80       44.64***
  (cannot affect attention)             (1.99)      (2.21)      (3.20)
  before examining picture               20.83       76.17       55.33***
  (can affect attention)                (1.92)      (2.61)      (3.05)
Difference before - after                -2.33        8.36***    10.70***
                                        (2.29)      (2.80)      (3.69)

B. Bad state only (false positives)
Learn incentive
  after examining picture                10.36       48.29       37.93***
  (cannot affect attention)             (2.05)      (3.28)      (3.90)
  before examining picture                7.14       61.27       54.13***
  (can affect attention)                (1.73)      (3.42)      (3.86)
Difference before - after                -3.21       12.98***    16.20***
                                        (2.57)      (4.33)      (5.14)

C. Good state only (correct positives)
Learn incentive
  after examining picture                35.96       88.89       52.93***
  (cannot affect attention)             (3.19)      (2.14)      (3.92)
  before examining picture               34.51       91.06       56.54***
  (can affect attention)                (3.17)      (1.82)      (3.72)
Difference before - after                -1.45        2.17        3.62
                                        (3.88)      (2.69)      (4.89)

Table 6: Panel A shows participation rates pooled over states. Exactly half the total weight is given to observations in which the state is good, and half to those in which the state is bad. Panels B and C separately show participation rates in the bad and good states, respectively. Standard errors in parentheses. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively. Asterisks are suppressed for levels.

function of posterior beliefs, which contradicts the law of iterated expectations, as shown in section

3.64
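The logic behind this criterion can be illustrated with a small Bayesian calculation: whatever information structure a subject chooses, her posterior beliefs must average back to the prior, so the belief distributions induced by two different information choices cannot be ranked by first order stochastic dominance. The signal parameters below are hypothetical, chosen only for illustration.

```python
# Sketch of the Bayesian benchmark: a subject with prior 0.5 observes n
# draws, each matching the true state with probability q. Whatever (n, q)
# she chooses, her posterior beliefs average back to the prior.
from math import comb, isclose

def posterior_distribution(n, q, prior=0.5):
    """Exact distribution of the posterior P(good) after n draws."""
    dist = []
    for k in range(n + 1):  # k draws point toward "good"
        p_k_good = comb(n, k) * q**k * (1 - q)**(n - k)
        p_k_bad = comb(n, k) * (1 - q)**k * q**(n - k)
        p_k = prior * p_k_good + (1 - prior) * p_k_bad
        dist.append((p_k, prior * p_k_good / p_k))  # (probability, posterior)
    return dist

for n in (5, 50):  # a coarse and a fine information choice
    dist = posterior_distribution(n, q=0.55)
    mean_posterior = sum(p * b for p, b in dist)
    assert isclose(mean_posterior, 0.5)  # law of iterated expectations
```

Since both belief distributions have the same mean, neither CDF can lie everywhere below the other; a f.o.s.d. ranking therefore signals a departure from Bayesian updating.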

The figure suggests that higher incentives systematically increase optimism in a way that is not

consistent with Bayes-rationality, and so does the ability to allocate attention in response to incentives

64 The criterion precludes f.o.s.d. relations in strictly monotonic functions of posterior beliefs. Subjects can only invest amounts that are a multiple of $0.15. This decision is thus merely weakly monotonic in posterior beliefs. Because the step size is small, I nonetheless argue that a f.o.s.d. relation indicates a violation of Bayesian rationality.


[Figure 3: CDFs of the invested amount ($0 to $2) for each of the four treatments: Low Incentive, known before; High Incentive, known before; Low Incentive, known after; High Incentive, known after.]

Figure 3: Cumulative distribution functions of invested amounts for each treatment. Bayesianrationality precludes first order stochastic dominance amongst any pair of these functions.

when the incentive is high. Specifically, invested amounts are lowest for either of the low incentive

conditions, highest for the high incentive condition when attention allocation can be skewed, and

intermediate when no such skewing is possible (in the f.o.s.d. order). Each of these pairwise comparisons of mean invested amounts is significant at the 5% level. The difference in differences is

significant at the 10% level.

Because a difference in means is not sufficient for a first order dominance relation, I also calculate

Davidson and Duclos (2000) statistics and use the approach of Tse and Zhang (2004) to explicitly

test for such a relation.65 I find that the distribution of invested amounts is significantly smaller in

the first order at the 5% level in either low-incentive treatment than in the high incentive treatment

in which subjects cannot skew their information demand. Additionally, within the high incentive

condition the distribution of invested amounts is significantly larger (in the first order) when subjects

can skew their attention allocation than when they cannot. The latter result suggests that the effect

of incentives on beliefs is not merely due to wishful thinking, but is related to subjects’ information

demand.66
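A stylized version of such a dominance check can be sketched in a few lines. This is a simplification for illustration only, not the Davidson and Duclos (2000) statistic used in the analysis (which also accounts for sampling variation); the invested amounts below are toy data, not experimental results.

```python
# Stylized first-order dominance check on toy data: sample `a` dominates
# `b` if its empirical CDF lies weakly below b's at every grid point and
# strictly below at some point (higher values => lower CDF).
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(v <= x for v in sample) / len(sample)

def first_order_dominates(a, b, grid):
    diffs = [ecdf(a, x) - ecdf(b, x) for x in grid]
    return all(d <= 0 for d in diffs) and any(d < 0 for d in diffs)

grid = [round(0.15 * k, 2) for k in range(15)]  # the possible amounts
high = [1.50, 1.80, 2.10, 1.95, 1.65]  # toy amounts, high incentive
low = [0.30, 0.60, 0.45, 0.90, 0.15]   # toy amounts, low incentive
assert first_order_dominates(high, low, grid)
assert not first_order_dominates(low, high, grid)
```

The actual test additionally bootstraps the distribution of the dominance statistic (footnote 65) so that a crossing cannot be hidden by sampling noise.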

Discussion. This experiment provides further directional evidence for the kind of concerns that

proponents of limits on incentives have raised.

65 I bootstrap the distribution of the test statistic using 1000 bootstrap samples for each comparison.

66 There are two caveats to these results. First, subjects made the investment choice after deciding whether or not to take the bet, and hence, their invested amounts could still be influenced by mechanisms such as ex-post rationalization. This seems unlikely to the extent that subjects had the opportunity, at very low cost, to go back and change their decision on the bet when deciding about the investment amount. Second, subjects’ decisions could be influenced by anchoring. Anchoring, however, can plausibly explain only the difference in invested amounts across the incentive treatments, not the difference across information conditions within the high incentive condition. The experiment reported in appendix D addresses these concerns, and finds qualitatively unchanged results.


In this experiment, giving subjects the opportunity to skew their attention allocation in response

to incentives substantially increases the false positive rate, and thus ex post regret, when the incentive

amount is high. This shows that incentives cause subjects to perceive the same information differently.

This is reminiscent of the concern that incentives distort subjects’ assessment of the costs and benefits

of participation.

The experiment shows that this response cannot fully be explained by Bayes-rational behavior.

Higher incentives make subjects systematically more optimistic in a way that violates the law of

iterated expectations, and so does the opportunity to skew attention allocation when incentives are

high.

These findings are broadly consistent with the model in section 3, in particular when non-Bayesian

belief updating is accounted for. They also replicate the main findings obtained in the insect experi-

ment in a different setting with a different subject population, and thus corroborate these results.

6 Policy implications

This paper has several policy implications. They are necessarily qualitative; an important task for

future research is to quantify the magnitude of the mechanism I document within the markets that specific

policies target.

Informed Consent. Comprehensive informed consent requirements are an obvious means to over-

come issues caused by endogenous information acquisition. Current informed consent policies are

unlikely to achieve such an objective, for three reasons.

First, for many transactions there is a dearth of information regarding possible consequences. For

instance, there exists no comprehensive registry of previous egg donors (Bodri, 2013). Hence, any in-

formed consent policy on egg donation is necessarily incomplete, even for issues as centrally important

as the effect of the transaction on one’s own fertility and on the likelihood of reproductive cancers.

The more prospective participants attempt to fill this void by seeking out stories and anecdotes, the

more their information gathering may be affected by the mechanisms identified in this paper.

Second, when information is available, it is often difficult to interpret. For instance, of two recent

studies on the long term effects of kidney donation, one estimates that donors are 5 percentage points

more likely to be dead 20 years after donation than comparable non-donors (Mjøen, Hallan, Hartmann,

Foss, Midtvedt, Øyen, Reisæter, Pfeffer, Jenssen and Leivestad, 2014), whereas the other estimates

they are 2 percentage points less likely to be dead after that time (Ibrahim, Foley, Tan, Rogers, Bailey,

Guo, Gross and Matas, 2009). This difference has an abundance of explanations; a decision maker

will have to weigh between them, and between the different conclusions drawn by the two studies.

Again, this opens the door for the mechanisms identified in this paper.


Third, any informed consent policy is largely limited to objective information. By contrast, the

decision maker is concerned with the implications of participation for his subjective well-being. For

instance, how much quality of life he would lose from fatigue that may arise as a side effect of

an organ donation (Tellioglu, Berber, Yatkin, Yigit, Ozgezer, Gulle, Isitmangil, Caliskan and Titiz

(2008), Beavers, Sandler, Fair, Johnson and Shrestha (2001)) will always be up to the participant to

determine. Information acquisition about subjective consequences may likewise be influenced by the

mechanisms identified here.

If an informed consent policy aims to address the concerns raised by proponents of limits on

incentives by curtailing the mechanisms identified in this paper, therefore, several steps are necessary.

First, the likelihood of relevant outcomes must be researched. Second, objective information should

be presented in an easily accessible and unequivocal manner. Third, such an informed consent policy

may place more emphasis on ensuring that participants understand the possible consequences of a

transaction on subjective well-being.

Insurance. Insuring participants against adverse outcomes is a frequently discussed policy in mar-

kets for which incentives are capped, for instance in kidney donation (Rosenberg (2015a), Open Letter

To President Obama (2014)).

This paper points to a side effect of such a policy. Insurance may increase the fraction of partic-

ipants who participate but later regret having done so. The reason is that insurance decreases the

cost of false positives. If information acquisition is costly, the rate of false positive errors may thus

increase. In any concrete policy application, this effect needs to be weighed against the increase in

welfare for those who would otherwise suffer unabated consequences.

“Forbid it entirely or do not restrict it.” Some commentators criticize limits on incentives by

the following argument. They maintain that either there is a legitimate paternalistic concern to restrict

a certain activity, or that there is not. They conclude that in the first case, the activity should be

prohibited entirely, and in the second there is no reason to restrict incentives (Lewin (2015), Emanuel

(2004)).67 This paper shows that there may be a middle ground. Since high incentives themselves can

be a reason for paternalistic concerns (for instance if they make subjects overly optimistic), merely

restricting them may be superior to both outlawing the activity, and permitting it unconditionally.

Bait-and-switch. Finally, my findings explain why marketing techniques reminiscent of bait-and-

switch appear to be effective. Prospective military recruits, for instance, are told at the beginning

of the recruitment interview that they may be eligible for a signup bonus of up to several tens of

thousands of dollars in cash and subsidies for college tuition. They then proceed through the entire

67 For instance, the president of Barnard College, Debora L. Spar, states: “Our whole system makes no sense. ... [W]e should either say, ‘Egg-selling is bad and we forbid it,’ as some countries do, or ‘Egg-selling is O.K., and the horse is out of the barn, but we’re going to regulate the market for safety’,” cited in Lewin (2015).


recruitment interview and take a battery of tests. Only just before signing the contract do recruits

learn the actual bonus they are eligible for; for most it is far lower than the maximum that got

their attention (Cave (2005), McCormick (2007)). My results suggest that this technique is effective

because the prospect of the high bonus causes candidates to favorably interpret the information given

in the recruiting interview, and thus increases their willingness to enroll.68

7 Conclusion

Around the world there are laws that limit incentives for transactions such as living organ donation,

surrogate motherhood, human egg donation, and clinical trial participation, amongst others. While

these laws do not intend to discourage these activities – altruistic participation is often applauded –

they have been held responsible for shortages of goods such as donor kidneys.

There is an ongoing public discourse on whether these laws should be changed. Proponents of these

laws maintain that they protect people from harm, since incentives might distort decision making.

Opponents have sometimes been puzzled by these arguments. Some have dismissed the proponents’

concerns, and have attributed them to an insufficient understanding of basic economics.

In this paper, I have used standard economic tools to understand the concerns voiced by proponents

of limits on incentives, and I have provided directional evidence for these mechanisms.

I have first developed a conceptual framework in which incentives affect how prospective partici-

pants optimally inform themselves about the possible consequences of a transaction, and thus affect

their expectations. It predicts that incentives change the ex post distribution of outcomes, by increas-

ing rates of (ex post) mistaken participation and decreasing rates of (ex post) mistaken abstention.

While this mechanism cannot ex ante hurt a Bayes-rational decision maker, it is relevant both because

it may affect the political feasibility of incentive programs, and because concerns about ex post out-

comes are prevalent regardless of whether the actions that led to them were undertaken voluntarily.

By contrast, overly optimistic decision makers can be made ex ante worse off by incentives.

Second, I have conducted an experiment in which subjects are promised either a high or a low

incentive for eating whole insects, and either can or cannot inform themselves about insects as food

before making a decision. This experiment shows that higher incentives can make individuals more

optimistic about how unpleasant they will find the experience of ingesting the bugs. The fact that

subjects are unable to predict this effect in others suggests that they are unaware of it, and thus calls

into question whether it is a consequence of purely rational behavior.

The third part of this paper presents a complementary experiment. It shows that incentives

cause people to perceive literally the same information differently. In that experiment, subjects who

can alter their attention allocation in response to incentives are substantially more likely to commit

68 Similarly, spam from alleged kidney buyers frequently cites compensation amounts of several hundreds of thousands of dollars. Actual prices paid to sellers are estimated to be no more than several tens of thousands of dollars (Havocscope, 2015).


false positive errors than subjects who cannot when both face a high incentive for participation.

False negative rates, however, are not significantly affected. That experiment also shows that higher

incentives make subjects systematically more optimistic in a way that is inconsistent with Bayesian

rationality.

Overall, my results show that a central concern raised by policy makers and ethicists about the

effect of incentives can be understood using standard tools of economic analysis. My laboratory

experiments provide directional empirical evidence for this effect. My work thus helps bridge a gap

between disciplines. In further research, it is important to quantify the magnitude of these effects

within the domains to which the laws apply, such as human egg donation, or living organ donation.

More broadly, my research contributes to an emerging literature that aims to understand the

motivations behind paternalistic concerns. Such an understanding is crucial for an informed policy

debate.


References

Alabama High School Athletic Association, “Handbook,” Montgomery, AL July 2015.

Alevy, Jonathan E, Craig E Landry, and John A List, “Field experiments on anchoring of

economic valuations,” Available at SSRN 1824400, 2010.

Ambuehl, Sandro and Shengwu Li, “Belief Updating and the Demand for Information,” unpub-

lished manuscript, Stanford University, 2015.

, B Douglas Bernheim, and Annamaria Lusardi, “Financial Education, Financial Compe-

tence, and Consumer Welfare,” NBER working paper 20618, 2014.

, Muriel Niederle, and Alvin E. Roth, “More Money, More Problems? Can High Pay be

Coercive and Repugnant?,” American Economic Review, Papers & Proceedings, 2015, 105 (5).

Andreoni, James, Deniz Aydin, Blake Barton, B. Douglas Bernheim, and Jeffrey

Naecker, “When Fair Isn’t Fair: Sophisticated Time Inconsistency in Social Preferences,” un-

published manuscript, Stanford University, 2015.

Ariely, Dan, George Loewenstein, and Drazen Prelec, ““Coherent Arbitrariness”: Stable

Demand Curves Without Stable Preferences,” The Quarterly Journal of Economics, 2003, 118 (1),

73–105.

Babcock, Linda and George Loewenstein, “Explaining bargaining impasse: The role of self-

serving biases,” The Journal of Economic Perspectives, 1997, pp. 109–126.

Bartling, Bjorn, Roberto A Weber, and Lan Yao, “Do Markets Erode Social Responsibility?,”

The Quarterly Journal of Economics, 2015, 130 (1), 219–266.

Basu, Kaushik, “The economics and law of sexual harassment in the workplace,” The Journal of

Economic Perspectives, 2003, 17 (3), 141–157.

, “Coercion, Contract and the Limits of the Market,” Social Choice and Welfare, 2007, 29 (4),

559–579.

Beavers, Kimberly L, Robert S Sandler, Jeffrey H Fair, Mark W Johnson, and Roshan

Shrestha, “The living donor experience: donor health assessment and outcomes after living donor

liver transplantation,” Liver transplantation, 2001, 7 (11), 943–947.

Becker, Gary S and Julio J Elias, “Introducing incentives in the market for live and cadaveric

organ donations,” The Journal of Economic Perspectives, 2007, pp. 3–24.

Beggs, Alan and Kathryn Graddy, “Anchoring effects: Evidence from art auctions,” The Amer-

ican Economic Review, 2009, pp. 1027–1039.


Benabou, Roland and Jean Tirole, “Self-confidence and personal motivation,” Quarterly Journal

of Economics, 2002, pp. 871–915.

Benoît, Jean-Pierre and Juan Dubra, “Apparent overconfidence,” Econometrica, 2011, 79 (5),

1591–1625.

Bergman, Oscar, Tore Ellingsen, Magnus Johannesson, and Cicek Svensson, “Anchoring

and cognitive ability,” Economics Letters, 2010, 107 (1), 66–68.

Bernheim, B. Douglas, “Behavioral Welfare Economics,” Journal of the European Economic As-

sociation, 2009, 7 (2-3), 267–319.

and Antonio Rangel, “Beyond Revealed Preference: Choice-Theoretic Foundations for Behav-

ioral Welfare Economics,” The Quarterly Journal of Economics, 2009, 124 (1), 51–104.

, Andrei Fradkin, and Igor Popov, “The Welfare Economics of Default Options in 401(k)

Plans,” American Economic Review, 2015, 105 (9), 2798–2837.

Blackwell, David, “Equivalent comparisons of experiments,” The Annals of Mathematical Statistics,

1953, 24 (2), 265–272.

Bodri, Daniel, “Risk and complications associated with egg donation,” in “Principles of Oocyte and

Embryo Donation,” Springer, 2013, pp. 205–219.

Bogacz, Rafal, Eric Brown, Jeff Moehlis, Philip Holmes, and Jonathan D Cohen, “The

physics of optimal decision making: a formal analysis of models of performance in two-alternative

forced-choice tasks.,” Psychological Review, 2006, 113 (4), 700.

Brenner, Lyle A, Derek J Koehler, and Amos Tversky, “On the evaluation of one-sided

evidence,” Journal of Behavioral Decision Making, 1996, 9 (1), 59–70.

Brunnermeier, Markus K. and Jonathan A. Parker, “Optimal Expectations,” American Eco-

nomic Review, 2005, 95 (4), 1092–1118.

Bundesrepublik Deutschland, “Gesetz zum Schutz von Embryonen [Embryo Protection Act], §1 Missbräuchliche Anwendung von Fortpflanzungstechniken,” 1990.

Caplin, Andrew, “Measuring and Modeling Attention,” Annual Review of Economics, forthcoming.

and Daniel Martin, “A testable theory of imperfect perception,” The Economic Journal, 2014.

and John Leahy, “Psychological expected utility theory and anticipatory feelings,” Quarterly

Journal of Economics, 2001, pp. 55–79.

and Mark Dean, “Rational inattention and state dependent stochastic choice,” unpublished

manuscript, New York University, 2013.


and , “Rational Inattention, Entropy, and Choice: The Posterior-Based Approach,” unpublished

manuscript, New York University, 2013.

and , “Revealed Preference, Rational Inattention, and Costly Information Acquisition,” Amer-

ican Economic Review, 2015, 105 (7), 2183–2203.

Cave, Damien, “Critics Say It’s Time to Overhaul The Army’s Bonus System,” New York Times,

August 15 2005.

Center for Bioethics and Culture, “Eggsploitation,” www.eggsploitation.com 2011.

Choi, Stephen J, Mitu Gulati, and Eric A Posner, “Altruism Exchanges and the Kidney

Shortage,” Law & Contemp. Probs., 2014, 77, 289.

Clark, Alan C., “The Challenge of Proselytism and the Calling to Common Witness,” Ecumenical

Review, 1996, 48 (2), 212–221.

Council of Europe, “Medically assisted procreation and the protection of the human embryo. Com-

parative study on the situation in 39 states.,” Strasbourg June 1998.

Cryder, Cynthia E., Alex John London, Kevin G. Volpp, and George Loewenstein, “In-

formative inducement: Study payment as a signal of risk,” Social Science & Medicine, 2010, 70,

455–464.

Davidson, Russell and Jean-Yves Duclos, “Statistical inference for stochastic dominance and

for the measurement of poverty and inequality,” Econometrica, 2000, pp. 1435–1464.

DiTella, Rafael, Ricardo Perez-Truglia, Andres Babino, and Mariano Sigman, “Conve-

niently Upset: Avoiding Altruism by Distorting Beliefs about Others’ Altruism,” American Eco-

nomic Review, 2015, 105 (11), 3416–3442.

Eil, David and Justin M. Rao, “The Good News-Bad News Effect: Asymmetric Processing of

Objective Information about Yourself,” American Economic Journal: Microeconomics, 2011, 3,

114–138.

El-Gamal, Mahmoud A. and David M. Grether, “Are People Bayesian? Uncovering Behavioral

Strategies,” Journal of the American Statistical Association, 1995, 90 (432).

Elias, Julio J, Nicola Lacetera, and Mario Macis, “Markets and Morals: An Experimental

Survey Study,” PloS one, 2015, 10 (6).

Elias, Julio, Nicola Lacetera, and Mario Macis, “Sacred Values? The Effect of Information on

Attitudes Toward Payments for Human Organs,” American Economic Review, Papers & Proceed-

ings, 2015, 105 (5).


Emanuel, Ezekiel J, “Ending concerns about undue inducement,” The Journal of Law, Medicine

& Ethics, 2004, 32 (1), 100–105.

, “Undue inducement: Nonsense on stilts?,” The American Journal of Bioethics, 2005, 5 (5), 9–13.

Engel, Christoph and Peter G Moffatt, “dhreg, xtdhreg, and bootdhreg: Commands to imple-

ment double-hurdle regression,” Stata Journal, 2014, 14 (4), 778–797.

Epstein, Richard, “The Economics of Organ Donations: EconTalk Transcript,” http://www.econlib.org/library/Columns/y2006/Epsteinkidneys.html, 2006.

Ethics Committee of the American Society for Reproductive Medicine, “Financial com-

pensation of oocyte donors,” Fertility and Sterility, 2007, 88 (2), 305–309.

Eyal, Nir, Julio Frenk, Michele B. Goodwin, Lori Gruen, Gary R. Hall, Douglas W.

Hanto, Frances Kissling, Ruth Macklin, Steven Pinker, Lloyd E. Ratner, Harold T.

Shapiro, Peter Singer, Andrew W. Torrance, Robert D. Truog, and Robert M. Veatch,

“An Open Letter to President Barack Obama, Secretary of Health and Human Services Sylvia

Mathews Burwell, Attorney General Eric Holder and Leaders of Congress,” 2014.

Falk, Armin and Nora Szech, “Morals and markets,” Science, 2013, 340 (6133), 707–711.

Fehr, Ernst and Antonio Rangel, “Neuroeconomic foundations of economic choice—recent ad-

vances,” The Journal of Economic Perspectives, 2011, 25 (4), 3–30.

Feltman, Rachel, “You can earn $13,000 a year selling your poop,” Washington Post, January 2015.

Friedman, Milton and Leonard J Savage, “The utility analysis of choices involving risk,” Journal

of Political Economy, 1948, pp. 279–304.

Fudenberg, Drew, David K Levine, and Zacharias Maniadis, “On the robustness of anchoring

effects in WTP and WTA experiments,” American Economic Journal: Microeconomics, 2012, 4 (2),

131–145.

Gneezy, Uri, Silvia Saccardo, Marta Serra-Garcia, and Roel van Veldhuizen, “Motivated

self-deception, identity, and unethical behavior,” unpublished manuscript, University of California,

2015.

Gottlieb, Daniel, “Imperfect memory and choice under risk,” Games and Economic Behavior, 2014,

85, 127–158.

Grant, Ruth W, Strings attached: Untangling the ethics of incentives, Princeton University Press,

2011.


Grether, David M., “Bayes Rule as a Descriptive Model: The Representativeness Heuristic,” Quar-

terly Journal of Economics, 1980, 95 (3), 537–557.

, “Testing bayes rule and the representativeness heuristic: Some experimental evidence,” Journal

of Economic Behavior and Organization, 1992, 17 (1), 31–57.

Griffin, Dale and Amos Tversky, “The Weighing of Evidence and the Determinants of Confi-

dence,” Cognitive Psychology, 1992, 24, 411–435.

Havocscope, “Organ Trafficking Prices and Kidney Transplant Sales,” http://www.havocscope.com/black-market-prices/organs-kidneys/, accessed November 8, 2015.

Held, P.J., F. McCormick, A. Ojo, and J.P. Roberts, “A Cost-Benefit Analysis of Government

Compensation of Kidney Donors,” American Journal of Transplantation, 2015, forthcoming.

Hoffman, Mitchell, “How is Information Valued? Evidence from Framed Field Experiments,”

unpublished manuscript, 2014.

Holt, Charles A. and Angela M. Smith, “An Update on Bayesian Updating,” Journal of Eco-

nomic Behavior and Organization, 2009, 69, 125–134.

and Susan K. Laury, “Risk Aversion and Incentive Effects,” American Economic Review, 2002,

92(5), 1644 – 1655.

Horton, John J., David G. Rand, and Richard J. Zeckhauser, “The online laboratory: Con-

ducting experiments in a real labor market,” Experimental Economics, 2011, 14, 399–425.

Huck, Steffen, Nora Szech, and Lukas Wenner, “More Effort With Less Pay - On Information

Avoidance, Belief Design and Performance,” unpublished manuscript, London Business School, 2015.

Ibrahim, Hassan N, Robert Foley, LiPing Tan, Tyson Rogers, Robert F Bailey, Hongfei

Guo, Cynthia R Gross, and Arthur J Matas, “Long-term consequences of kidney donation,”

New England Journal of Medicine, 2009, 360 (5), 459–469.

Indiana High School Athletic Association, “By-Laws & Articles of Incorporation,” Indianapolis,

IN 2015.

Kahneman, Daniel, Jack L Knetsch, and Richard Thaler, “Fairness as a constraint on profit

seeking: Entitlements in the market,” The American Economic Review, 1986, pp. 728–741.

Kanbur, Ravi, “On obnoxious markets,” in Stephen Cullenberg and Prasanta Pattanaik, eds., Glob-

alization, Culture and the Limits of the Market: Essays in Economics and Philosophy, Oxford

University Press, 2004.

Kentucky High School Athletic Association, “Handbook,” Lexington, KY 2014.


Khaleeli, Homa, “The hair trade’s dirty secret,” The Guardian, 28 October 2012.

Koszegi, Botond, “Ego Utility, Overconfidence, and Task Choice,” Journal of the European Eco-

nomic Association, 2006, 4 (4), 673–707.

and Matthew Rabin, “Revealed mistakes and revealed preferences,” The foundations of positive

and normative economics: a handbook, 2008, pp. 193–209.

Kunda, Ziva, “The case for motivated reasoning.,” Psychological bulletin, 1990, 108 (3), 480.

Leider, Stephen and Alvin E Roth, “Kidneys for sale: Who disapproves, and why?,” American

Journal of Transplantation, 2010, 10 (5), 1221–1227.

Lewin, Tamar, “Egg Donors Challenge Pay Rates, Saying They Shortchange Women,” New York

Times, October 2015.

Lord, Charles G, Lee Ross, and Mark R Lepper, “Biased assimilation and attitude polarization:

the effects of prior theories on subsequently considered evidence.,” Journal of personality and social

psychology, 1979, 37 (11), 2098.

Macklin, Ruth, “‘Due’ and ‘Undue’ Inducements: On Paying Money to Research Subjects,” IRB,

1981, pp. 1–6.

Malmendier, Ulrike and Klaus Schmidt, “You owe me,” unpublished manuscript, 2014.

Maniadis, Zacharias, Fabio Tufano, and John A List, “One swallow doesn’t make a summer:

New evidence on anchoring effects,” The American Economic Review, 2014, 104 (1), 277–290.

Martin, Daniel, “Strategic pricing and rational inattention to quality,” Available at SSRN 2393037,

2012.

Mason, Winter and Siddarth Suri, “Conducting behavioral research on Amazon’s Mechanical

Turk,” Behavioral Research, 2012, 44, 1–23.

Massey, Cade and George Wu, “Detecting Regime Shifts: The Causes of Under- and Overreac-

tion,” Management Science, 2005, 51 (6), 932–947.

Massey, EK, LW Kranenburg, WC Zuidema, G Hak, RAM Erdman, Medard Hilhorst,

JNM Ijzermans, JJ Busschbach, and Willem Weimar, “Encouraging psychological outcomes

after altruistic donation to a stranger,” American Journal of Transplantation, 2010, 10 (6), 1445–

1452.

Matejka, Filip and Alisdair McKay, “Rational inattention to discrete choices: A new foundation

for the multinomial logit model,” American Economic Review, 2015, 105 (1), 272–98.


McCormick, Patrick T, “Volunteers and Incentives: Buying the Bodies of the Poor,” Journal of

the Society of Christian Ethics, 2007, pp. 77–93.

McKenzie, Craig RM, “Increased sensitivity to differentially diagnostic answers using familiar

materials: Implications for confirmation bias,” Memory & Cognition, 2006, 34 (3), 577–588.

Michigan High School Athletic Association, “The History, Rationale and Application of the

Essential Regulations of High School Athletics in Michigan,” East Lansing, MI 2015.

Milgrom, Paul and Chris Shannon, “Monotone comparative statics,” Econometrica, 1994,

pp. 157–180.

Mjøen, Geir, Stein Hallan, Anders Hartmann, Aksel Foss, Karsten Midtvedt, Ole Øyen,

Anna Reisæter, Per Pfeffer, Trond Jenssen, and Torbjørn Leivestad, “Long-term risks

for kidney donors,” Kidney International, 2014, 86 (1), 162–167.

Moebius, Markus M., Muriel Niederle, Paul Niehaus, and Tanya S. Rosenblat, “Managing

Self-Confidence: Theory and Experimental Evidence,” unpublished manuscript, 2013.

National Bioethics Advisory Commission, “Ethical and policy issues in research involving human

participants,” Rockville, MD 2001.

National Collegiate Athletic Association, “Division 1 Manual,” Indianapolis, IN July 2015.

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Bethesda, MD, The Belmont report: Ethical principles and guidelines for the

protection of human subjects of research 1978.

Nickerson, Raymond S, “Confirmation bias: A ubiquitous phenomenon in many guises.,” Review

of general psychology, 1998, 2 (2), 175.

Niederle, Muriel and Alvin E Roth, “Philanthropically Funded Heroism Awards for Kidney

Donors,” Law & Contemp. Probs., 2014, 77, 131.

Nuremberg Code, in “Trials of war criminals before the Nuremberg military tribunals under control

council law,” Vol. 10, 1949, pp. 181–182.

Peterson, Cameron R. and Lee Roy Beach, “Man As An Intuitive Statistician,” Psychological

Bulletin, 1967, 68 (1), 29–46.

Rabin, Matthew, “Cognitive dissonance and social change,” Journal of Economic Behavior & Or-

ganization, 1994, 23 (2), 177–194.

and Joel L Schrag, “First impressions matter: A model of confirmatory bias,” Quarterly Journal

of Economics, 1999, pp. 37–82.


Ratcliff, Roger, “A theory of memory retrieval.,” Psychological review, 1978, 85 (2), 59.

Raven, John C, Guide to the standard progressive matrices: sets A, B, C, D and E, Pearson, 1960.

Rosenberg, Tina, “It’s Time to Compensate Kidney Donors,” The New York Times, August 7, 2015.

, “Need a Kidney? Not Iranian? You’ll Wait.,” The New York Times, July 31, 2015.

Roth, Alvin E, “Repugnance as a Constraint on Markets,” Journal of Economic Perspectives, 2007,

21 (3), 2.

Sandel, Michael J, What money can’t buy: the moral limits of markets, Macmillan, 2012.

Satel, Sally and David C Cronin, “Time to Test Incentives to Increase Organ Donation,” JAMA

internal medicine, 2015.

Satz, Debra, Why Some Things Should Not Be For Sale: The Moral Limits of Markets, Oxford

University Press, 2010.

Selten, Reinhard, “Axiomatic characterization of the quadratic scoring rule,” Experimental Eco-

nomics, 1998, 1 (1), 43–62.

Simonson, Itamar and Aimee Drolet, “Anchoring effects on consumers’ willingness-to-pay and

willingness-to-accept,” Journal of Consumer Research, 2004, 31 (3), 681–690.

Sims, Christopher A, “Implications of rational inattention,” Journal of Monetary Economics, 2003,

50 (3), 665–690.

, “Rational inattention: Beyond the linear-quadratic case,” American Economic Review, 2006,

pp. 158–163.

Slowiaczek, Louisa M, Joshua Klayman, Steven J Sherman, and Richard B Skov, “In-

formation selection and use in hypothesis testing: What is a good question, and what is a good

answer?,” Memory & Cognition, 1992, 20 (4), 392–405.

Smith, Vernon L, “Experimental economics: Induced value theory,” The American Economic Re-

view, 1976, pp. 274–279.

Sunstein, Cass R, Daniel Kahneman, David Schkade, and Ilana Ritov, “Predictably inco-

herent judgments,” Stanford Law Review, 2002, pp. 1153–1215.

Svitnev, Konstantin, “Legal regulation of assisted reproduction treatment in Russia,” Reproductive

Biomedicine Online, 2010, 20 (7), 892–894.


Tellioglu, G, I Berber, I Yatkin, B Yigit, T Ozgezer, S Gulle, G Isitmangil, M Caliskan,

and I Titiz, “Quality of life analysis of renal donors,” in “Transplantation proceedings,” Vol. 40

Elsevier 2008, pp. 50–52.

Toplak, Maggie E, Richard F West, and Keith E Stanovich, “Assessing miserly information

processing: An expansion of the Cognitive Reflection Test,” Thinking & Reasoning, 2014, 20 (2),

147–168.

Tse, Yiu Kuen and Xibin Zhang, “A Monte Carlo investigation of some tests for stochastic

dominance,” Journal of Statistical Computation and Simulation, 2004, 74 (5), 361–378.

Tversky, Amos and Daniel Kahneman, “Judgment under uncertainty: Heuristics and biases,”

Science, 1974, 185 (4157), 1124–1131.

van den Steen, Eric, “Rational overoptimism (and other biases),” American Economic Review,

2004, pp. 1141–1151.

Vatican Radio, “Pope Francis meets a group of transplant surgeons; including the mayor of Rome,” http://en.radiovaticana.va/news/2014/09/20/pope_francis_meets_a_group_of_transplant_surgeons/1106931, September 20, 2014.

Wald, Abraham, Sequential Analysis, New York: Wiley, 1947.

Woodford, Michael, “Inattentive valuation and reference-dependent choice,” Unpublished

Manuscript, Columbia University, 2012.

, “An Optimizing Neuroeconomic Model of Discrete Choice,” National Bureau of Economic Re-

search, 2014.

Yang, Ming, “Coordination with flexible information acquisition,” Journal of Economic Theory,

2014.


ONLINE APPENDIX

Sandro Ambuehl, “An Offer You Can’t Refuse? Incentives Change How We Think”

Table of Contents

A Experimental Materials

B Insect Experiment: Additional Analysis

B.1 Randomization Check

B.2 Choice Consistency

B.3 Un-buyable Subjects

B.4 Robustness Checks

C Online Experiment: Additional Analysis

D Additional Experiment: Explicit Choice of Information

D.1 Design

D.2 Analysis

E Model with Continuous State Space

F Proofs

References


A Experimental Materials

Figure 4 displays photographs of the insects used for the experiment in section 4. The following is

a transcription of the videos used in that experiment. The videos are available at https://youtu.be/HiNnbYuuRcA (“Why you may want to eat insects”) and https://youtu.be/ii4YSGOEcRY (“Why you may not want to eat insects”).

Transcription: Why You May Want to Eat Insects Five reasons you should consider eating

insects. For your own personal health, and for the overall health of the planet, and, most importantly,

for your pleasure, you should be eating more insects. This isn’t meant as a provocative, theoretical

idea. Here are five very serious reasons why you should consider increasing your insect intake.

First, insects can be yummy. You’d think that insects would have a pungent, unusual aroma. But

they are actually very tasty, and considered a delicacy in many parts of the world. Also, like tofu,

they often take on the flavor of whatever they’re cooked with. That’s why we are on the verge of a

real insectivorous moment in consumer culture. The Brooklyn startup Exo just started selling protein

bars made from ground cricket flour, and the British company Ento sells sushi-like bento boxes with

cricket-based foods. The restaurant Don Bugito in San Francisco’s Mission district offers creative

insect-based foods inspired by Mexican pre-hispanic and contemporary cuisine. “I am trying to bring

a solution into the food market which is introducing edible insects” [Monica Martinez, owner of Don

Bugito]. New cookbooks are entering the market, such as Daniella Martin’s ”Edible”, or van Huis

et al.’s ”The insect cookbook”. Don Bugito’s reviews on Yelp are glowing. Most Americans need

some courage to take a bite. But once they do, they are pleasantly surprised. Morgane M., from

Sunnyvale, CA describes her experience: ”I saw their stand at the Ferry Building farmers market

and decided to take the plunge. I tried the chili-lime crickets and they were surprisingly good! For

the curious-but-apprehensive: the chili-lime crickets taste like flavorful, super crunchy (almost flaky)

chips. That’s it. If you’ve ever had super thin tortilla chips, you’ll have an idea what to expect.”

Other people liked them even more. For example Nelson Q. from Las Vegas, NV: ”This Pre-Hispanic

Snackeria has made me a fan .... They had the most interesting menu items of the evening at Off The

Grid ... Would I try insects again??? Yessir!...ALOHA!!! ” Rodney H. from San Francisco agrees:

”It’s great! And the mealworms add kind of a nice, savory quality to it. You never would guess that

you’re eating an insect.”

Second, insects are a highly nutritious protein source. “Insects are actually the most ... one of the

most efficient proteins on the planet“ [Monica Martinez]. It turns out that pound for pound, insects

provide much higher levels of protein compared to conventional meats like beef, chicken, and fish.

While eggs consist of just 12% protein, and beef jerky clocks in at 33%, a single pound of cricket

flour has 65% protein. That’s twice as much as you get in beef jerky! Insects also have much higher

levels of nutrients like calcium, iron, and zinc. They are also good sources of vitamin B12. That’s an


[Five photographs, panels (A)-(E).]

Figure 4: Insects eaten by subjects. A. House cricket (acheta domesticus). B. Mole cricket (gryllotalpae). C. Field cricket (gryllus bimaculatus). D. Mealworm (tenebrio molitor). E. Silkworm pupa (bombyx mori).


essential vitamin that’s barely found in any plant-based foods and thus can be difficult for vegans to

come by.

Third, our objection to eating insects is arbitrary. Your first reaction to this movie was probably a

sense of dislike. But there’s nothing innate about that reaction. For one, billions of people already eat

insects in Asia, Africa, and Latin America every day. More generally, the animals considered to be fit

for consumption vary widely from culture to culture for arbitrary reasons. Most Americans consider

the idea of eating horses or dogs repugnant, even though there’s nothing substantial that differentiates

horses from cows. Meanwhile, in India, eating cows is taboo, while eating goat is common. These

random variations are the results of cultural beliefs that crystallize over generations. But luckily,

these arbitrary taboos can be defeated over time. There was a time when raw fish – served as sushi

– was seen as repugnant in mainstream US culture. Now it’s ubiquitous. Soon, insects – which are

closely related to shrimp – may be elegant hors d’oeuvres.

Fourth, insects are more sustainable than chicken, pork, or beef. “I think the biggest problem

for United States right now is we eating to much cattle, too much meat” [Monica Martinez.] Insects

are a serious solution to our increasingly pressing environmental problems. It takes 2000 gallons of

water to produce a single pound of beef, and 800 gallons for one pound of pork. How much do you

think is required for a pound of crickets? One single gallon! Producing a pound of beef also takes

thirteen times more arable land than raising a pound of crickets. It needs twelve times as much feed,

and produces 100 times as much greenhouse gases. These very handsome environmental benefits are

why the UN released a 200-page report just two years ago on how eating insects could solve the world’s hunger and

environmental problems. Needless to say, the UN strongly advocates for insects as

a food source. And it’s not just the UN. In 2011, the European Commission offered a four million

dollar prize to the group that comes up with the best idea for developing insects as a popular food.

Five, we already eat insects all the time. The majority of processed foods you buy have pieces of

insect in them. The last jar of peanut butter you bought, for instance, may have had up to 50 insect

fragments. A bar of chocolate can have about 60 fragments of various insect species. Some experts

estimate that, in total, we eat about one or two pounds of insects each year with our food. These

insects pose no health risks. The FDA does set limits, but they are simply set for aesthetic reasons:

in other words, so you don’t actually see them mixed into your food.

To summarize, these are five very compelling reasons to give it a try. Five, we already eat insects

all the time anyway. Four, insects are more sustainable and ethical than chicken, pork, or beef. Three,

our objection to eating insects is completely arbitrary. Two, insects are a highly nutritious protein

source. One. ”Most of people react really, really positive” [Monica Martinez]. Insects can be very

tasty!

Transcription: Why You May Not Want to Eat Insects Four reasons you may want to avoid

eating insects. Reason 1. Some cultures eat insects. But to those of us who are not used to it,


insects can be... well, see for yourself. [American tourist in China] “Welcome to eating crazy foods

around the world with Mike. And we’re in China. If I’ve learned one thing about China it’s they will

eat absolutely everything. So you have caterpillars and you have butterflies. The pupae is what the

caterpillar turns into before it turns into a butterfly. ... they don’t look very appealing at all. But ...

try everything once. So, up to the face. Hhh.” [Eats pupae.] “Not good. Ugh ... it ... it popped.

It popped! It’s just ... it’s just too much for me.” [Throws remaining pupae into trash bin.] [Bear

Grylls] “Whoa! Ready for this? Oh my goodness! Pfh! This one has been living in there a very, very

long time. I’m not gonna need to eat for a week after this. Pfh.” [Eats live beetle larva.] “Argh! This

actually ranks as one of the worst things I’ve ever, ever eaten!”

Reason 2. Insects have many body parts. Most of those parts we do not usually eat in other

animals. Let’s see those parts... [Biology student] “Let’s take a closer look at some of the structures

we see on this grasshopper. So the first thing I want to point out is that it has six legs. There are

two pairs. Here is the pair of hindlegs. There’s a pair in the middle here, on the middle segment of

the thorax. Ok, those are the midlegs. And then there’s another pair on the front here, those are

the forelegs or prolegs. Ok? So there’s six altogether, all insects have six legs, or three pairs of legs,

it’s characteristic of the class. Ok? So we also can see, right up here, there are a pair of wings. On

each side of the body there are two wings. The forewing, k? – as in the one in front – and this is

the hindwing down here, ok? So there are four wings on this animal. Other insects only have two,

some have none. Now we’ll move up to the head. The first thing you’ll notice is this pair of long

antennae. Ok, we’ve seen antennae in other animals. So, clearly, those are involved ... they have a

sensory function. They’re usually involved in a tactile, or a touch sensory function. Some of them

are used in chemoreception, which would be like a smell or taste. And speaking of sensory organs,

we got one more here, which we would be remiss to not mention, uhm, which is the large compound

eye here. So, I’ve made an incision on the dorsal surface of this grasshopper. Ok? And I’ve peeled

back the exoskeleton. And before I go digging too much, uhm, it’s going to be difficult to see many

structures, but on these individuals it’s very easy to see, uhm, all of these very large and pronounced

little sort of tubular looking structures. There’s one right there. Those are all eggs.”

Reason 3. When you eat an insect, you eat ALL of it. In particular, its digestive system, including

its stomach, intestine, rectum, anus, and whatever partially digested food is still in there. [Biology

student] “Now, if we move on to the digestive system... there is a mouth, of course, we talked about

that being down here, ok? The mouth opens into a small pharynx, ok? And then it basically opens up

into this large, dark, thin-walled sack right here, ok? This is the crop. Ok, so this is basically a food

storage pouch right in here. So ... getting to the stomach, that’s what we find next, this thin-walled,

sort of darker colored sack right here, which I’ve just broken a little bit, that, uhm, is the stomach,

all in here, ok? Below the stomach we find this slightly darker and a bit more muscular tube right

here. That is the intestine. And the intestine opens into a short rectum and an anus.”


Reason 4. Edible insects are perfectly safe to eat. Nonetheless, we tend to associate insects with

death and disease. Even if we know that eating some insects is harmless, this association is difficult

to overcome. [Nature film maker] “Just a few days ago, one of those gaur was killed by a tiger in the

night. This carcass is now probably about five days old, and, as you can see, absolutely writhing with

maggots of many different species.”

B Insect Experiment: Additional Analysis

B.1 Randomization Check

The four treatments are balanced across demographic characteristics. Table B.7 displays summary

statistics of these variables by treatment. For each variable, the table reports the p-value of an F-test

for differences in the mean value of the variable across treatments. Of 24 tests conducted, one is

significant at the 5% level, and an additional three are significant at the 10% level. This is within the

expected range.
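As a rough sanity check on the “expected range” claim, the chance of seeing this many spurious rejections among 24 balance tests can be computed directly. This is a sketch only; independence across tests is an approximation, since the demographic variables are correlated.

```python
from math import comb

def binom_tail(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_tests = 24  # balance tests reported in Table B.7

# Under successful randomization, each test rejects at the 5% level
# with probability 0.05 (and at the 10% level with probability 0.10).
expected_rejections = n_tests * 0.05                  # 1.2 expected at 5%
p_one_or_more_5pct = binom_tail(n_tests, 0.05, 1)     # ~0.71
p_four_or_more_10pct = binom_tail(n_tests, 0.10, 4)   # ~0.21
```

So one rejection at the 5% level and four at the 10% level is unremarkable under the null of successful randomization.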

B.2 Choice Consistency

A participant reveals inconsistent choice behavior if she rejects a transaction at price p in the MPL in

step 3 of the experiment, but accepts it in the treatment decision in step 4, or vice versa.1 Table B.8

details the fraction of each of these types of inconsistencies by treatment. It shows that subjects in

the low incentive treatments tend to state WTA that are too high relative to their behavior in their

$3-treatment decision. No such directional bias is evident for subjects in the high incentive condition.

Importantly, this does not point to a difference across treatments, as the treatment decisions are

different.
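Concretely, the consistency criterion compares the switch point in the price list with the single treatment decision. A minimal sketch (the variable names are hypothetical; the actual data layout differs):

```python
def is_inconsistent(mpl_wta: float, treatment_price: float,
                    accepted_treatment: bool) -> bool:
    """Flag a subject whose price-list choices (step 3) contradict her
    treatment decision (step 4).

    mpl_wta: lowest price at which the subject accepts in the MPL.
    treatment_price: price offered in the treatment decision ($3 or $30).
    accepted_treatment: whether she accepted at that price.
    """
    if accepted_treatment:
        # Accepted a price below the WTA she stated in the MPL.
        return treatment_price < mpl_wta
    # Rejected a price at or above her stated WTA.
    return treatment_price >= mpl_wta
```

The directional bias described above corresponds to the first branch: a stated WTA above $3 combined with acceptance of the $3 offer.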

The fraction of inconsistent decisions is somewhat higher than is usually found in the literature on

decision making under explicit risk, in which inconsistencies are identified by means of multiple

switching in a price list (e.g., Holt and Laury (2002)). A variety of factors can account for this divergence.

First, the decisions that reveal inconsistencies in the current experiment are temporally separated,

whereas they are typically presented at once in the risky decision making literature. Second, in the

risky decision making literature, the likelihood that one of two decisions that reveal an inconsistency

will be selected for implementation is typically equal, whereas they are highly asymmetric in the

present study.

1 The choices subjects made in step 6 of the experiment cannot reveal any inconsistencies, as they are made with different information about the transaction than the treatment decisions in step 4.


Treatment condition:                                Incentive / Video
Variable                                          $30/Yes   $3/Yes   $30/No    $3/No   p-value
----------------------------------------------------------------------------------------------
Male                                                0.52     0.52     0.56     0.55      0.99
Age                                                21.41    22.01    21.38    21.31      0.37
Ethnicity
  African-American                                  0.05     0.06     0.06     0.07      0.73
  Caucasian                                         0.57     0.52     0.59     0.56      0.32
  East Asian                                        0.19     0.26     0.20     0.23      0.31
  Hispanic                                          0.07     0.08     0.05     0.04      0.99
  Indian                                            0.03     0.04     0.04     0.07      0.57
  Other                                             0.08     0.05     0.07     0.04      0.67
Monthly spending in USD                           249.50   300.64   286.45   289.63      0.39
Year of study (a)                                   3.51     3.60     3.60     3.47      0.35
Graduate student                                    0.13     0.15     0.12     0.05      0.07
Field of study
  Arts and humanities                               0.16     0.09     0.14     0.11      0.04
  Business or economics                             0.27     0.35     0.34     0.43      0.12
  Engineering                                       0.20     0.16     0.11     0.12      0.52
  Science                                           0.20     0.23     0.27     0.23      0.43
  Social science (excluding business/economics)     0.17     0.17     0.14     0.11      0.58
Political orientation (b)                           0.50     0.32     0.25     0.08      0.08
Raven's score (c)                                  14.85    14.75    14.66    14.70      1.00
CRT score (d)                                       3.78     3.80     3.55     3.24      0.11
Experience with insects as food
  Has intentionally eaten insects before            0.19     0.22     0.19     0.19      0.74
  Grown up in culture that practices entomophagy    1.30     1.28     1.26     1.31      0.95
  Grown up eating mostly western foods              0.81     0.73     0.82     0.78      0.09
  Had a pet that fed on store-bought insects        0.26     0.26     0.23     0.27      0.70
  Knew that this study concerns insect eating       2.63     2.46     2.53     2.49      0.32

Table B.7: Summary statistics and randomization check. The last column displays the p-value of the test of joint significance of a regression of the indicated variable on treatment dummies.

(a) Year of study includes only undergraduate students. (b) Political orientation is measured on a scale of -2 (conservative) to 2 (liberal). (c) Raven's score is measured on a scale of 0 to 24. (d) CRT score is measured on a scale of 0 to 6. Questions from the extended version are included (Toplak et al., 2014).

                                                          Video    No Video
Low Incentives
  WTA > $3 in MPL, accept $3 in treatment decision       15.31%     16.44%
  WTA < $3 in MPL, reject $3 in treatment decision        1.33%      3.85%
  Total                                                  16.63%     20.30%
High Incentives
  WTA > $30 in MPL, accept $30 in treatment decision      4.88%      8.18%
  WTA < $30 in MPL, reject $30 in treatment decision      5.77%      6.36%
  Total                                                  10.65%     14.55%

Table B.8: Inconsistent choices across the multiple price lists before the distribution of the insects, and the treatment decisions.


B.3 Un-buyable Subjects

Here I provide further data on the fractions and behavior of participants who refuse all ten offers to

eat an insect in exchange for $60 (the highest amount offered in the study), as well as on

those who always agree to eat any insect for free. Participants who reject every offer in every price

list also reject most offers in the treatment decisions in step 4 of the experiment.2 Amongst these

participants, 291 of 300 treatment decisions are rejected (97%), and 58 of 60 of them (96.7%) reject

all offers in the treatment decisions. All participants who accept every offer in every MPL also accept

every offer in every treatment decision.

Table B.9 lists the frequencies of each type by treatment condition. For un-buyable subjects, the

null hypothesis of joint insignificance of the treatment dummies cannot be rejected (p = 0.6), and any pairwise

comparison of treatments is insignificant with a p-value of at least 0.25. For participants who accept

every offer, the null hypothesis of joint insignificance cannot be rejected either (p = 0.33), and any pairwise

comparison of treatments is insignificant with a p-value of at least 0.37.

                       Video    No Video
Always reject
  Low Incentives       8.28%     12.59%
  High Incentives      9.25%     12.12%
Always accept
  Low Incentives       3.82%      1.48%
  High Incentives      2.47%      2.27%

Table B.9: Fractions of participants who reject (accept) every offer in every MPL, by treatment condition.

B.4 Robustness Checks

The analysis in section 4 excludes un-buyable subjects. Table B.10 replicates Panel A of Table 3

including those subjects. The effect of information acquisition on participation rates within the high

incentive treatment remains significant, and nearly unchanged in magnitude. The estimate of the

difference in differences loses statistical significance (p = 0.12), but remains similar in magnitude.

For ease of presentation, the analysis in section 4 estimates linear regression models of WTA

excluding un-buyable subjects. This abstracts from two considerations. First, even amongst the

included subjects, a sizable fraction of decisions are censored above (the WTA of the subject for a

particular insect species exceeds $60). Second, random noise may cause a subject to appear un-buyable

in the data, even though she actually is not. This stochastic selection potentially affects the variance

of the parameter estimates.

2 Behavior in the treatment decisions in step 4 of the experiment cannot be used for classification, as this would introduce differential selection across treatments.


I address both of these issues simultaneously by estimating a panel double hurdle model with

correlated errors (Engel and Moffatt, 2014).3 Briefly, in that model, a participant is one of two types.

An un-buyable type never participates; all his observations are censored for every insect. As in a Probit

model, the type is determined by whether a latent continuous variable exceeds a threshold (this is

the first hurdle). The other type potentially participates, but may also have censored observations,

possibly all of them (this is the second hurdle). These decisions are estimated as in a Tobit model. The

estimation procedure jointly estimates the probability of the types, and the effect of the treatments

on WTA of the subjects who are not un-buyable. It allows for correlation between the classification

error and the effect size errors, and thus differs from a Tobit estimation of the treatment effects on

subjects who are not un-buyable.
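As a rough illustration of the data-generating process this model assumes, one can simulate the two hurdles directly. All parameter values below are purely illustrative, not the estimates reported in Table B.11:

```python
import numpy as np

# Minimal simulation of the DGP a panel double hurdle model with correlated
# errors assumes. Parameter values are illustrative only.
rng = np.random.default_rng(0)
n = 5000
rho = 0.3  # correlation between the two error terms
u, e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n).T

# First hurdle (probit-style): a subject is un-buyable if a latent index is negative.
unbuyable = (1.3 + u) < 0

# Second hurdle (tobit-style): latent WTA, observed only up to the $60 cap.
wta_latent = 25.0 + 20.0 * e
wta_observed = np.where(unbuyable | (wta_latent > 60.0), 60.0, wta_latent)

print("share un-buyable:", unbuyable.mean())
print("share of buyable subjects censored at $60:",
      (wta_latent[~unbuyable] > 60.0).mean())
print("mean observed WTA:", round(wta_observed.mean(), 2))
```

Because the same shock u enters both hurdles (through rho), estimating the WTA equation on the "buyable" subsample alone would be subject to selection bias, which is why the joint estimation matters.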

Percentage of subjects willing to eat insects for the promised amount

n = 663                          Offered $3,    Offered $30,    Difference
                                 accept $3      accept $30      $30 - $3

No Video                           36.36          59.85          23.48***
                                   (3.59)         (3.59)         (5.03)
Video                              37.67          71.30          33.63***
                                   (3.16)         (2.68)         (4.07)
Difference Video - No Video        -1.31          11.45**        10.14
                                   (4.83)         (4.56)         (6.46)

Table B.10: This table replicates Table 3 including the un-buyable subjects.

Table B.11 displays the estimated parameters. In Column 1 the selection equation includes only a

constant. In Column 2, selection may depend on the video condition. This accounts for the fact that

there is a slightly smaller fraction of un-buyable subjects in the video condition than in the no-video

condition. Estimates are very similar across these two specifications. The treatment effects exceed

those obtained from the OLS estimation on the sample of “buyable” subjects, presumably because

censoring is properly accounted for.

To account for the interval-coded nature of the WTA data, Column 3 presents the estimates of an

interval regression, excluding un-buyable subjects. Again, the estimates are highly similar to those of

the panel double hurdle models, but slightly attenuated.

Finally, Column (4) presents the OLS estimates on the subsample of “buyable” subjects, controlling

for demographic characteristics, Raven’s and CRT scores, and a vector of variables relating to previous

experience with insects. Again, the estimates of the treatment effects are highly similar. Many control

variables do not significantly affect WTA, with some notable exceptions. Males’ WTA is about $5

less than females’, a $1 increase in monthly spending leads to about a $0.01 increase in WTA, and

3I estimate the model using the Stata statistical package with the command xtdhreg by the same authors.


having intentionally eaten an insect before, or having experience with a pet that feeds on store-bought

insects decreases WTA by $2 to $7. Finally, subjects with a higher IQ score are more willing to eat

insects. Each additional correct answer to a Raven’s matrix decreases WTA by $0.30 to $0.40. All of

these effects are significant at the 5%-threshold. Perhaps surprisingly, ethnicity does not significantly

affect WTA to eat insects.

                                          (1)        (2)        (3)          (4)
Dependent variable: WTA before distribution

Difference-in-differences               -7.41**    -7.41**    -7.41*       -6.29**
                                        (3.00)     (2.99)     (3.93)       (2.99)
Effect of access to video
  In $3 treatment                       -0.12      -0.18       0.74         0.40
                                        (2.10)     (2.11)     (2.81)       (2.14)
  In $30 treatment                      -7.53***   -7.59***   -6.67**      -5.88***
                                        (2.15)     (2.15)     (2.89)       (2.24)
Effect of incentives
  In no video treatment                  5.41**     5.39**     5.57*        5.15*
                                        (2.25)     (2.45)     (3.08)       (2.37)
  In video treatment                    -2.00      -2.01      -1.83        -1.13
                                        (1.98)     (1.98)     (2.44)       (1.83)
Correlation in error terms               0.33***    0.32**     -            -
                                        (0.13)     (0.13)

Model                                   Panel      Panel      Interval     OLS
                                        Hurdle     Hurdle     Regression
Explanatory variables in first hurdle              Video
Controls                                                                   Yes
Sample                                  All        All        excluding    excluding
                                                              un-buyable   un-buyable
Subjects                                663        663        603          603
Observations                            3218       3218       2918         2918

Table B.11: All regressions include species and university fixed effects, and cluster standard errors by subject. Controls are gender, age, monthly spending, CRT score, Raven's score, dummies for race and college major, and self-reports whether the subject has intentionally eaten insects before, grown up eating mostly western foods, grown up in a culture that practices entomophagy, or ever had a pet that fed on store-bought insects.

C Online Experiment: Additional Analysis

Applying the rationality criterion by Caplin and Dean (2015). For completeness, I here

apply the rationalizability criterion by Caplin and Dean (2015). I use the decision whether or not

to risk the loss of money in exchange for the promised amount alone. (If the continuous investment

decision is also considered, we already know that behavior deviates from Bayesian rationality, and is


thus not rationalizable.) The experiment is not explicitly geared towards applying Caplin and Dean’s

criterion, which lacks discriminatory power in this setting. In fact, there are almost as many free

parameters as data points.4 It is thus not surprising that there exists a concave utility function and

a cost of information function that can rationalize the data.

I choose the following notation: u(3) = 3λ1, u(0.5) = 0.5λ2, u(0) = 0, u(−0.5) = −0.5λ3,

u(−3) = −3λ4. I impose the restriction that utility is concave, and hence λ1 ≤ λ2 ≤ λ3 ≤ λ4.

Without loss of generality, I set λ2 = 1.

I find that for λ2 = λ3 = λ4 = 1 and λ1 ∈ [0.54, 0.72], the behavior of subjects in the online

experiment satisfies the rationality conditions of Caplin and Dean (2015). I derive this as follows.

Translated into the notation of the current paper, Caplin and Dean’s NIAS condition is

p_G ≥ max{α p_B, α p_B + β}

where α = [0 − u(πB + m)] / [u(πG + m) − 0] and β = [u(πG + m) + u(πB + m) − 0] / [u(πG + m) − 0].

Inserting parameters, I find that for the low incentives case, we have α = 3λ4/0.5 = 6λ4, and β =

1 − 6λ4. Because λ4 ≥ 1, β < 0, so NIAS reduces to p^l_G ≥ 6λ4 p^l_B. Inserting the respective values from Table

6 then yields 0.35 ≥ 6λ4 · 0.07. Because λ4 ≥ 1, this condition is violated for the point estimates. If

λ4 = 1, however, there are values within the 95% confidence intervals of the point estimates for which

the condition is satisfied. By concavity of the utility function we then also have λ3 = 1. I proceed on

this assumption.

For the high incentive case, we thus obtain α = 1/(6λ1), and β = 1 − 1/(6λ1). For the case that β > 0,

NIAS thus is p^h_G ≥ α p^h_B + β. Inserting the measured probabilities, this is equivalent to λ1 ≤ 0.72.
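These two bounds can be checked numerically. This sketch assumes p^h_B = 0.61, an inference from the 1 − p_B value of 0.39 that appears in the NIAC calculation below rather than a number stated directly in the text:

```python
# Numeric check of the two NIAS bounds, using the probabilities from the text.
# p_hB = 0.61 is an inference from the 1 - p_B value of 0.39 used in the NIAC
# step; it is an assumption of this sketch.
p_lG, p_lB = 0.35, 0.07
p_hG, p_hB = 0.91, 0.61

# Low incentives, lambda_4 = 1: NIAS requires p_lG >= 6 * p_lB.
print("low-incentive NIAS at point estimates:", p_lG >= 6 * p_lB)  # False: violated

# High incentives: p_hG >= p_hB/(6*lambda_1) + 1 - 1/(6*lambda_1) rearranges
# (since 1 - p_hG > 0) to lambda_1 <= (1 - p_hB) / (6 * (1 - p_hG)).
print(f"lambda_1 upper bound: {(1 - p_hB) / (6 * (1 - p_hG)):.2f}")  # ≈ 0.72
```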

The NIAC condition is

∆pG · ∆[u(πG + m) − 0] + ∆(1 − pB) · ∆[0 − u(πB + m)] ≥ 0

where ∆(x) denotes the difference in some variable x across the high and low incentive conditions.

Inserting values, we get (0.91 − 0.35)(3λ1 − 0.5) + (0.39 − 0.93)(−0.5 + 3) ≥ 0, or, equivalently, λ1 ≥ 0.54.

How do incentives affect ex post regret amongst those who take the gamble? The analysis

in the main text has shown that endogenous attention allocation significantly increases the false

positive rate when incentives are high, but leaves the false negative rate approximately unchanged.

Here I show that these effects are sufficiently pronounced that a stronger result holds: Endogenous

attention allocation substantially increases the fraction of subjects who ex post regret amongst those

who take the gamble when incentives are high (but the p-value of the estimate is 0.13).

4 The four data points are the conditional participation probabilities in the case of high and low incentives. The three choice variables parametrize the utility function. Additionally, the criterion tests for the existence of a cost-of-information function that can rationalize the data. Implicitly, therefore, that function is an additional free parameter.


Table C.12 shows decisions that are ex post regretted, as a fraction of all decisions to accept,

separately for each treatment. Even in the after condition, the incidence of ex post regret is a

significant 12.84 percentage points higher with higher incentives. An explanation is that the possible

net loss subjects can incur is smaller when incentives are higher, so that with heterogenous risk

preferences, a larger fraction of participants are willing to accept, even holding constant posterior

beliefs.

Importantly, the effect is substantially stronger in the before condition, in which endogenous attention

allocation is possible. The effect on ex post regret almost doubles, to 23.08 percentage points.

Because these statistics include only a subset of the observations, however, this sizable increase is not

statistically significant (p = 0.13).

Percentage of subjects who ex post regret amongst those who take the bet

n = 853                          Accept for    Accept for    Difference
                                 $0.50         $3            High - Low

Learn incentive
  After examining picture          22.36         35.20         12.84***
  (cannot skew search)             (4.10)        (2.71)        (4.64)
  Before examining picture         17.15         40.22         23.08***
  (can skew search)                (3.91)        (2.69)        (4.47)

Difference before - after          -5.22          5.02        -10.23
                                   (5.73)        (3.69)        (6.79)

Table C.12: Percentage of subjects who ex post regret amongst those who take the bet.

Do incentives make subjects ex ante worse off? In principle, incentives can make non-Bayesian

decision makers ex ante worse off. In this experiment, this is not the case, possibly because the increase

in incentives is nearly twice the expected loss from taking the lottery. The expected payoff is $1.21***

(s.e. 0.08) when the incentive is high, regardless of whether or not the incentive is known when the

subject examines the picture. It is -$0.02 (s.e. 0.03) when the incentive is low and this is known, and

-$0.06* (s.e. 0.03) when the incentive is low but this is unknown.5

How far can subjects deviate from Bayes-rational behavior before incentives make them

ex ante worse off? We can estimate the extent of optimism required for incentives to make subjects

ex ante worse off. I find that with the decision making strategies subjects have deployed in this

5 Additionally, even though the ability to skew information search in response to incentives significantly increases the false positive rate without significantly affecting the false negative rate, giving people this opportunity does not decrease their expected payoff. (It increases insignificantly, by $0.18 (s.e. 0.18).) This is because false negatives are six times as costly as false positives in this experiment.


experiment, deviations from Bayesian rationality must be rather substantial before incentives make

subjects ex ante worse off.

Specifically, I make counterfactual assumptions about the parameters of the decision problems. I

calculate the expected payoff subjects would have obtained under these parameters, holding constant

their actual behavior, which reflects the belief that the probability of the good state is 0.5 and the

loss from taking the gamble in the bad state is $3.50.

First, I assume that the probability of the good state is µ < 0.5, so that subjects’ behavior is based

on overly optimistic prior beliefs. For the condition in which subjects know the incentive amount when

scrutinizing the picture, the increase in incentives makes subjects worse off if the probability of the

good state µ is between 0 and 2.86%.

Second, I assume that the amount of money πB a subject can lose in the bad state is larger than

$3.50, so that subjects’ behavior is based on an underestimation of the downside. In this case, an

increase in incentives of $3 makes subjects worse off if the actual loss from taking the gamble in the

bad state is at least $4.56 higher than the $3.50 loss subjects (correctly) believed would occur.

Hence, with the decision making strategies subjects have deployed in this experiment, the $3

increase in incentives used in this experiment would not make them ex ante worse off even if they

deviated from Bayesian rationality quite substantially.

D Additional Experiment: Explicit Choice of Information

After having completed the experiment described in section 4, the same subjects completed the ex-

periment described here.6 It essentially parallels the online experiment in section 5, but differs in that

subjects are given an explicit choice between two different information structures, and attentional

mechanisms are minimized.

This experiment complements the experiments in sections 4 and 5 of the main text in two ways.

First, it shows that when facing an explicit choice between information structures, subjects’ choices

align with the theoretical predictions even in an abstract setting in which choosing information struc-

tures optimally is far from trivial. Second, it addresses two potential shortcomings of the online

experiment. On the one hand, the effects of incentives on the continuous investment choice in the

online experiment are possibly exacerbated by anchoring. On the other hand, subjects made this

choice after having decided whether or not to risk the loss of money in exchange for the promised

amount. Even though subjects could change this decision at any point while they were deciding

about their continuous investment choice, they are possibly influenced by ex-post rationalization. In

the experiment reported here, anchoring countervails rather than exacerbates the effects of incentives

on optimism. Moreover, subjects make a choice that reveals their posterior beliefs before they decide

whether or not to risk the loss of money.

6This experiment was not administered to the first 78 Stanford students.


D.1 Design

The design largely parallels that of the online experiment in section 5. I incentivize subjects to risk

losing $10, or nothing, each with prior probability µ = 0.5. They can freely decide whether or not to

take this gamble in exchange for an incentive amount m.

The experiment follows a 2 × 2 within-subjects design. The first dimension varies the incentive

amount for participating in the gamble, which is either high (m ≥ 7) or low (m ≤ 3). There are six

different incentive amounts (m ∈ {1, 2, 3, 7, 8, 9}) to prevent subjects from repeatedly having to make

literally the same decision.7 The second dimension varies whether the subject knows the incentive

amount she will be offered at the point she studies information about the consequences of accepting

the gamble (the before condition), or whether she lacks that knowledge (the after condition).

The experiment proceeds in multiple rounds, each of which follows the same four steps. First, the

participant either learns the incentive amount m he will be offered in that round, or that he will learn

that incentive amount in step 3.

Second, the subject chooses to observe one of two information structures. Both information struc-

tures produce signals “G” and “B”. The left-skewed information structure is given by Il = (0.9, 0.5).

It produces signal “G” with probability 0.9 if the state is good, and with probability 0.5 if the state is

bad. The right-skewed information structure is given by Ir = (0.5, 0.1). It produces signal “G” with

probability 0.5 if the state is good, and with probability 0.1 if the state is bad. This decision concerns

the kind rather than the amount of information, in the sense that both information structures have

the same 0.7 unconditional probability of producing a signal that matches the true state. The subject

first selects the information structure. He then learns (in the after-condition) or is reminded (in the

before-condition) of the incentive for taking the gamble, and observes a signal realization.

Third, the subject reveals his confidence about whether or not the lottery will lead to a loss.

Specifically, he decides, on each line of a multiple decision list, whether or not to risk losing $10 in

case the state is bad, in exchange for incentives that vary between $0 and $10 (inclusive) in steps

of $0.5. To lighten the participants’ cognitive load, the incentives and lottery are not presented as

separate entities. Instead, on each line corresponding to incentive m′, the subject decides whether or

not to take a lottery that pays m′ in the good state and (m′ − 10) in the bad state.

Fourth, the subject decides whether or not to participate in the gamble for the incentive offered.

Framing. The experiment is presented in the context of a fisherman fishing from one of two ponds,

called the red fish pond and the striped fish pond (see figure 5).

The fisherman randomly decides which pond to fish from. This corresponds to the state of the

world, and determines whether or not a subject who takes the gamble will lose. Taking the lottery

does not lead to a loss if the fisherman is fishing from the red fish pond, but leads to a loss if he is

7 Hence, subjects face one of the following pairs of state-contingent payoffs: (m, m − 10) ∈ {(9,−1), (8,−2), (7,−3), (3,−7), (2,−8), (1,−9)}.


Figure 5: Presentation of information in the explicit information choice experiment. The state of the world corresponds to the pond the fisherman is fishing from. The fisherman catches one fish from that pond. Each fish has two properties, a color and a pattern, corresponding to two different information structures. Before deciding whether or not to risk losing money, the subject decides whether to ask about the color of the fish the fisherman has caught, or whether to ask about the pattern. The subject can only ask about one, and does not learn about the other.

fishing from the striped fish pond. (Whether the red fish pond or the striped fish pond corresponds

to the good state is determined randomly in each round; for ease of exposition, I describe the former

case.)

The subject chooses an information structure and observes a signal realization in the following

way. The fisherman randomly catches a fish from the pond he is fishing from in the current round.

The properties of that fish are the information the subject learns about the state of the world. The

subject chooses an information structure by deciding which properties of that fish to learn about.

Specifically, the subject decides between information structures as follows. Each fish has both a color

(red or not) and a pattern (striped or not). The color, but not the pattern is a distinctive feature of

fish in the red fish pond. 90% of fish in that pond are red, but half of them are striped. By contrast,

the pattern, but not the color is a distinctive feature of fish in the striped fish pond. 90% of fish in

that pond are striped, but only half of them are red. The subject can ask one of two questions about

the fish the fisherman has caught. She can either ask “Is the fish red?” or “Is the fish striped?”, but

not both.

Payment and implementation. One round is randomly selected for payment. Within that round,

one decision is randomly selected for implementation. With 80% chance, this is the decision promised

at the beginning of the round. With the remaining chance, it is a decision made in the multiple price

list. Gains are added to the payment from the insect experiment, and losses are deducted. This is known

to subjects before they start the insect experiment.

Instructions are presented on screen just before subjects begin with this experiment. To continue,

they have to correctly mark 16 statements about the experiment as true or false. In case of a mistake,


the computer does not provide feedback about which statement was marked incorrectly, making it

extremely unlikely that a subject would pass this test by chance.

Each subject completes 14 rounds in individually randomized order. Of these, 6 are in the before-

condition, 4 are in the after condition, and in 4 of them, subjects cannot obtain any information about

the state. The latter four decisions are not analyzed here.8

Hypotheses. The strategies that maximize the expected payoff for each incentive amount are pre-

sented in table D.13. On average, a risk-neutral subject should choose the left-skewed (right skewed)

information structure more often than chance when the incentive is high (low).

Specifically, the table shows that for the incentive $9 (leading to state-dependent payoffs (9,−1))

a risk neutral subject should always take the bet, regardless of the signal, and for the incentive $1

(leading to state-dependent payoffs (1,−9)), a risk neutral subject should never bet. For payoffs (8,

-2) and (7, -3), a risk neutral subject should chose the left-skewed information structure Il (which has

a higher ex ante likelihood of yielding signal “G”); for payoffs (3, -7) and (2, -8), the subject should

choose the right-skewed information structure Ir (which has a higher ex ante likelihood of yielding

signal “B”). In each of the latter four cases, the subject should bet if a good signal is observed, and

reject the bet otherwise.
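The expected payoffs behind these prescriptions follow from direct calculation. This short sketch, using the design parameters above, reproduces the strategy payoffs in Table D.13 and the equal matching probability of the two information structures noted earlier:

```python
# Risk-neutral expected payoffs for the candidate strategies in Table D.13,
# with prior mu = 0.5 and structures given as (P("G"|good), P("G"|bad)).
mu = 0.5
I_l, I_r = (0.9, 0.5), (0.5, 0.1)

def bet_after_G(m, qG, qB):
    # Take the bet only when the signal is "G".
    return mu * qG * m + (1 - mu) * qB * (m - 10)

for m in (9, 8, 7, 3, 2, 1):
    print(f"({m},{m - 10}): always={mu * m + (1 - mu) * (m - 10):.1f}, "
          f"I_l={bet_after_G(m, *I_l):.1f}, I_r={bet_after_G(m, *I_r):.1f}")

# Both structures produce a signal matching the state with probability 0.7:
for qG, qB in (I_l, I_r):
    print("P(signal matches state):", mu * qG + (1 - mu) * (1 - qB))
```

For payoffs (8, −2), for instance, betting only after a good signal from I_l yields 0.45·8 + 0.25·(−2) = 3.1, exceeding both the always-bet payoff of 3 and the I_r payoff of 1.9, matching the table.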

                 E(payoff) from strategy                    (1)          (2)          (3)        (4)
                                Bet only after good                                   Bet taken
            Never    Always     signal from structure       Choice       Realized
(m, m-10)   bet      bet        (0.9, 0.5)    (0.5, 0.1)    freq. of Il  mean payoff  s = 1      s = 0

(9,-1)      0        4          3.8           2.2           70.71%***    3.47**       91.85%     77.11%
                                                            (2.09)       (0.21)       (1.72)     (2.59)
(8,-2)      0        3          3.1           1.9           68.16%***    2.29***      86.25%     61.13%
                                                            (2.18)       (0.19)       (2.16)     (3.03)
(7,-3)      0        2          2.4           1.6           67.25%***    1.29***      74.39%     49.45%
                                                            (2.17)       (0.18)       (2.89)     (3.19)
(3,-7)      0        -2         -0.4          0.4           46.71%       -0.11***     17.60%     5.95%
                                                            (2.36)       (0.08)       (2.46)     (1.49)
(2,-8)      0        -3         -1.1          0.1           41.32%***    -0.24***     14.55%     5.73%
                                                            (2.15)       (0.08)       (2.28)     (1.55)
(1,-9)      0        -4         -1.8          -0.2          42.04%***    -0.23***     5.70%      3.61%
                                                            (2.15)       (0.06)       (1.43)     (1.23)

Table D.13: Expected payoffs for each strategy. Payoffs corresponding to the optimal strategy for a risk neutral subject are printed in bold font. Significance levels on the choice frequency concern the null hypothesis that the choice frequency is 50%. Significance levels for realized payoff concern the null hypothesis that the realized payoff is equal to the maximal possible expected payoff.

8 In these decisions, the least amount for which subjects are willing to participate in the gamble is significantly lower when incentives are high. This is suggestive of wishful thinking.


D.2 Analysis

Choices of information structures. As predicted, subjects are significantly more likely than

chance to choose the left skewed information structure Il = (0.9, 0.5) when the incentive amount

is high. They are also significantly more likely than chance to choose the right skewed information

structure Ir = (0.5, 0.1) when the incentive amount is low. Hence, subjects’ choice of information

significantly deviates from the random choice benchmark of 0.5 in the direction predicted by theory,

as Column 1 of Table D.13 reveals. Given that it is far from trivial to intuitively determine the payoff

maximizing information structure, this is a remarkable finding.

Choices of bets. Table D.14 reports subjects' choices of whether or not to participate in the gamble in

exchange for the incentive offered, by treatment. Panel A pools across states, Panels B and C display

choices separately for each state.

When pooled across states (Panel A), the treatment effects directionally replicate the findings from

the insect and online experiments. They are, however, much smaller, and insignificant. In contrast to

the online experiment, being able to skew information demand in response to the incentive amount

decreases the false negative rate and leaves the false positive rate unchanged (Panels B and C).

Given that subjects in this experiment have only two information structures to choose from, this

is not surprising. For many subjects neither of the two information structures may be such that

particular realizations make them change their mind about betting. By contrast, in the online and

insect experiments, subjects have access to a rich set of information structures, and thus are much

better able to select information structures that make them change their mind about betting. Another

explanation is that the attentional channels, and the mechanisms relating to the interpretation of information,

through which psychology can affect beliefs, are dampened in this design.

Beliefs. This experiment provides further evidence that higher incentives make subjects systematically

more optimistic in a way that is inconsistent with Bayesian rationality. The least amount for

which a subject is willing to risk losing $10 is a monotonically decreasing function of her posterior

beliefs. Hence, I can apply the criterion developed in section 3 to test for violations of the law of

iterated expectations induced by the treatment conditions.
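For intuition, under risk neutrality this monotone relationship has a simple closed form. This is an illustration of why the criterion applies, not the paper's estimation procedure:

```python
# Under risk neutrality, a subject with posterior belief p on the good state is
# indifferent at the m solving p*m + (1 - p)*(m - 10) = 0, i.e. m* = 10*(1 - p).
# Illustrative closed form only; the paper's test does not assume risk neutrality.
def least_acceptable_amount(p):
    return 10.0 * (1.0 - p)

for p in (0.3, 0.5, 0.7, 0.9):
    print(f"posterior {p}: least acceptable amount ${least_acceptable_amount(p):.2f}")
```

A more optimistic posterior thus maps into a strictly lower demanded amount, so shifts in the distribution of demanded amounts reveal shifts in the distribution of posteriors.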

Higher incentives make subjects more optimistic. Subjects in the high incentive treatment who

know the incentive amount when deciding about the information structure demand, on average,

$0.22*** (s.e. 0.06, clustered by subject) less for risking to lose $10 than those in the low incentive

treatment. For subjects who do not know the incentive amount when choosing the information

structure, this amount drops by about half, to $0.13* (s.e. 0.08).9

I formally test for a first order stochastic dominance relationship using the statistic by Davidson

and Duclos (2000) as in section 5. For subjects who know the incentive amount before they decide

9The difference in these effect sizes is not statistically significant.


Percentage of subjects willing to take the lottery

                                 Accept for      Accept for      Difference
                                 $1, $2, or $3   $7, $8, or $9   High - Low

A. Both states

Learn incentive
  After selecting question          8.27           71.99           63.72***
  (cannot skew search)             (0.92)          (1.47)          (1.72)
  Before selecting question         8.74           73.16           64.42***
  (can skew search)                (0.79)          (1.22)          (1.40)

Difference before - after           0.47            1.17            0.70
                                   (1.07)          (1.61)          (1.95)

B. Good state only (correct positives)

Learn incentive
  After selecting question         12.45           79.62           67.17***
  (cannot skew search)             (1.46)          (1.86)          (2.31)
  Before selecting question        12.55           84.25           71.70***
  (can skew search)                (1.46)          (1.46)          (1.85)

Difference before - after           0.10            4.63**          4.53*
                                   (1.82)          (2.08)          (2.74)

C. Bad state only (false positives)

Learn incentive
  After selecting question          4.35           63.06           58.71***
  (cannot skew search)             (0.94)          (2.10)          (2.29)
  Before selecting question         5.07           62.10           57.03***
  (can skew search)                (0.82)          (1.81)          (1.93)

Difference before - after           0.73           -0.96           -1.68
                                   (1.16)          (2.57)          (2.80)

Table D.14: Fraction of decisions in which participants chose to take the bet. Panel A shows participation rates pooled over states (n = 5,310). Panels B and C show participation rates in the good and bad states, respectively (n = 3,716 and n = 3,728). Standard errors are in parentheses, clustered by subject. Coefficients are estimated using university and species fixed effects. Estimates in Panel A weight observations according to the prior probabilities of the states. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively. Asterisks are suppressed for levels.

which question to ask, an increase in the incentive amount leads to a first-order stochastic dominance

decrease in the least amount for which they are willing to risk losing $10. The change is significant

at the p = 0.05 level. By contrast, if subjects learn the incentive amount only after deciding which

question to ask, no statistically significant first-order dominance relation can be detected (p > 0.2).



E Model with Continuous State Space

In this section, I show that the two-state assumption made in section 3 is inessential to the qualitative

predictions of the rational model. I extend the model to a continuous state space. In particular, this

allows a decision maker to learn not only about the likelihood that the consequence of a transaction

will be good or bad, but also about how good or bad it may be. I still find that higher incentives

increase the demand for information about the upside of the transaction, and decrease the demand

for information about the downside. I also find that if the costs of information are proportional to

Shannon mutual information, then an increase in incentives increases the probability an agent ex post

regrets participating conditional on having participated. In addition, I show that posterior-separability

of the cost of information function (in the sense of Caplin and Dean (2013b)) is sufficient for higher

incentives to increase the false positive rate.

Setup The setup differs from that in section 3 only to the extent that the state space Ω is continuous,

Ω ⊆ R. Specifically, an agent whose preferences are quasilinear in money can decide whether or not to

participate in a transaction. If he abstains, he receives utility 0. If he accepts, he receives a monetary

payment m ≥ 0, and stochastic, non-monetary utility u(ω) with u : Ω → R increasing and u(0) = 0.

The agent is imperfectly informed about ω and thus about his utility from accepting the transaction.

His prior distribution of ω is given by a probability density function µ(ω). Before deciding whether

or not to accept the transaction, the agent can obtain information about ω. As in section 3, I directly

model the agent as choosing state-dependent participation probabilities p_ω = P(accept|ω). The cost of a vector of state-dependent acceptance probabilities (p_ω)_ω is given by c((p_ω)_ω) ∈ R.

Analysis The agent's utility from state-dependent acceptance probabilities (p_ω)_ω is

    U = E[(u(ω) + m) p_ω] − c((p_ω)_ω)    (3)

Here, the expectation is taken with respect to the agent’s prior beliefs. To illustrate, note that with

incentives m, a perfectly informed agent would accept if u(ω) + m ≥ 0 and reject otherwise.

I consider an increase in the incentive for participation from m to m′ > m. Such a change leads to

the substitution and stakes effects outlined in section 3. On the one hand, higher incentives change

the stakes of the decision, and thus lead the agent to acquire a different amount of information. If it

causes the agent to purchase more information, he will increase pω for those ω in which accepting is

optimal, and decrease pω for the ω for which rejection is optimal. On the other hand, higher incentives

make false positives cheaper, and they make false negatives more expensive. Hence, the agent will

purchase a different kind of information; pω will increase for all ω, including those for which rejection

is optimal. Which of these effects dominates depends on the cost of information function. Figure 6

illustrates.
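To see how these forces net out under a concrete cost function, the following Python sketch solves the agent's problem with Shannon mutual information costs on a discretized state space, using the logistic characterization from equations (8) and (9) in appendix F. The state grid, the uniform prior, λ = 1, and the two incentive levels are hypothetical values chosen purely for illustration, with u(ω) = ω as in Figure 6.

```python
import math

def acceptance_schedule(states, prior, m, lam):
    """Solve for the unconditional acceptance probability p by bisection on
    the fixed point sum_w mu_w * p_w(p) = p, then return (p, [p_w])."""
    def p_w(w, p):
        # Logistic form of the optimal state-contingent acceptance probability
        # under Shannon mutual information costs; payoff is u(w) + m = w + m.
        return 1.0 / (1.0 + (1.0 / p - 1.0) * math.exp(-(w + m) / lam))
    def excess(p):
        return sum(mu * p_w(w, p) for w, mu in zip(states, prior)) - p
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    p = (lo + hi) / 2.0
    return p, [p_w(w, p) for w in states]

states = [-2.0, -1.0, 0.0, 1.0, 2.0]    # hypothetical states, u(w) = w
prior = [0.2] * 5                        # uniform prior
p_lo, sched_lo = acceptance_schedule(states, prior, m=0.0, lam=1.0)
p_hi, sched_hi = acceptance_schedule(states, prior, m=0.5, lam=1.0)

# Under this posterior-separable cost the substitution effect dominates:
# raising m increases P(accept|w) in every state, bad states included.
assert all(hi > lo for hi, lo in zip(sched_hi, sched_lo))
```

With these illustrative parameters, p equals 0.5 at m = 0 (by symmetry of the prior) and rises to about 0.78 at m = 0.5, while the acceptance probability rises in the bad states ω = −2, −1 as well, which is the false-positive increase that Proposition 4(i) establishes in general.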



Figure 6: Effects of a change in incentives on the agent's information with u(ω) = ω. The figure plots, for each state of the world ω, the probability that the agent accepts the offer given his optimal information demand. With incentives m = 0, optimal information demand could, for instance, lead to a schedule of acceptance probabilities P(accept|ω) depicted by the bold line. An increase in incentives to m′ > m increases P(accept|ω) for all ω > −m if the substitution and stakes effects have the same direction. Depending on the cost of information function, the stakes effect or the substitution effect may dominate for ω < −m′. These cases are illustrated by the dashed and dot-dashed schedules, respectively. In the case of a posterior-separable cost of information function, the substitution effect dominates.

I now show that posterior separability (Caplin and Dean (2013b)) of the cost of information

function is a sufficient condition for the substitution effect to dominate (higher incentives increase false positives). To define this property, I use the following notation. I write p = E(p_ω) = ∫ p_ω μ_ω dω for the unconditional probability that the agent participates if his state-contingent participation probabilities are given by (p_ω)_ω. Moreover, I write γ_ωG = p_ω μ_ω / p for the density at ω of the posterior belief distribution in case the agent participates, and γ_ωB = (1 − p_ω) μ_ω / (1 − p) for the respective density in case the agent abstains. The cost function c is posterior separable if there exists a strictly convex function f : [0, 1] → R such that c can be written in the following form:

    c((p_ω)_ω) = −E[f(μ_ω)] + p E[f(γ_ωG)] + (1 − p) E[f(γ_ωB)]

If f is the negative of the binary entropy function, f(x) = x log(x) + (1 − x) log(1 − x), then c is the Shannon mutual information cost function.
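As a concrete check of this definition, the sketch below verifies numerically, in a two-state specialization, that the posterior-separable cost with f equal to the negative binary entropy coincides with the mutual information between the state and the participation decision. The prior μ and the acceptance probabilities p_G, p_B are arbitrary illustrative values.

```python
import math

def H(x):
    """Binary entropy in nats; H(0) = H(1) = 0."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log(x) - (1 - x) * math.log(1 - x)

def posterior_separable_cost(mu, pG, pB):
    """c = -f(mu) + p f(gamma_G) + (1-p) f(gamma_B) with f = -H."""
    p = mu * pG + (1 - mu) * pB        # unconditional P(accept)
    gA = mu * pG / p                   # P(G | accept)
    gR = mu * (1 - pG) / (1 - p)       # P(G | reject)
    return H(mu) - p * H(gA) - (1 - p) * H(gR)

def mutual_information(mu, pG, pB):
    """I(state; action) computed directly from the joint distribution."""
    p = mu * pG + (1 - mu) * pB
    joint = {('G', 'a'): mu * pG, ('G', 'r'): mu * (1 - pG),
             ('B', 'a'): (1 - mu) * pB, ('B', 'r'): (1 - mu) * (1 - pB)}
    marg_s = {'G': mu, 'B': 1 - mu}
    marg_a = {'a': p, 'r': 1 - p}
    return sum(v * math.log(v / (marg_s[s] * marg_a[a]))
               for (s, a), v in joint.items() if v > 0)

# The two expressions agree up to floating-point error.
assert abs(posterior_separable_cost(0.6, 0.9, 0.2)
           - mutual_information(0.6, 0.9, 0.2)) < 1e-9
```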

Part (i) of the following proposition formally shows that an increase in incentives increases the false

positive rate if the cost function is posterior separable. Specifically, for all ω < −m′, the agent will regret ex post if he participates when the state is ω. Because under posterior separability all p_ω increase, this means that, in particular, the false positive probability increases. Note that posterior separability



is also a sufficient condition for the required sign of the cross-derivative in the special version of this

model outlined in section 3.

Part (ii) of the proposition helps to better understand the implications of Shannon mutual infor-

mation costs, by applying a result from Matejka and McKay (2015). It shows that with Shannon costs,

the shape of the function ω ↦ p_ω depends on prior beliefs only through the unconditional participation probability p. Hence, it is independent of the shape of the prior belief distribution. Consequently,

Shannon mutual information costs limit the effectiveness of certain information provision policies.

Finally, part (iii) of the proposition shows that with Shannon mutual information costs, higher

incentives increase the probability that the agent ex post regrets participation conditional on having

participated.

Proposition 4.

(i) If c is posterior separable, and if m′ > m, then for all ω ∈ Ω, p_ω(m′) ≥ p_ω(m).

(ii) If c is proportional to Shannon mutual information with factor of proportionality λ, then p_ω is strictly increasing in ω, and for any m, p_{ω+m} is a function only of p and λ.

(iii) If c is proportional to Shannon mutual information, and P(u(ω) ∈ [−m′, −m]) is sufficiently small, then an increase in incentives from m to m′ > m increases the probability that the agent regrets participating, conditional on having participated.

F Proofs

Proof of proposition 1 Part (i) is proved in the main text. To show part (ii), I use theorem 1 and lemma 2 of Matejka and McKay (2015) to explicitly characterize the state-contingent acceptance probabilities p_ω for ω ∈ {G, B} and the unconditional acceptance probability p = μ p_G + (1 − μ) p_B in case of Shannon mutual information costs. They are given by the following equations.

    ∀ω ∈ {G, B}:  p_ω = [1 + (1/p − 1) exp(−(π_ω + m)/λ)]^{−1}    (4)

    0 = μ / [p + (exp((π_G + m)/λ) − 1)^{−1}] + (1 − μ) / [p + (exp((π_B + m)/λ) − 1)^{−1}]    (5)

The probability of regret conditional on having participated, P(s = B | accept), is given by

    p_B (1 − μ) / p = (1 − μ) / [p + (1 − p) exp(−(π_B + m)/λ)]    (6)

where the equality follows directly from (4).

I show that d[p_B(1 − μ)/p] = ∂/∂p [p_B(1 − μ)/p] dp + ∂/∂m [p_B(1 − μ)/p] dm > 0. From equation (5) it follows that dp/dm > 0, so that it suffices to show that each of the partial derivatives in the previous expression



is positive. That the second is positive is obvious from equation (6). Regarding the first, note that

since πB +m < 0, the exponent in the denominator of the right hand side of equation (6) is positive.

Hence the denominator is a convex combination of 1 and some number larger than 1, with weight p

on the former. Hence, the denominator is decreasing in p, and hence the expression is increasing in

p. Fixing p, it is increasing in m. Moreover, from (5) it is apparent that p is increasing in m. Hence,

the probability of regret given in (6) is increasing in m.
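This monotonicity can also be checked numerically. The sketch below solves the fixed-point condition (5) for p by bisection and evaluates the regret probability (6) at two incentive levels. The payoffs π_G, π_B, the prior μ, and λ are hypothetical values chosen so that π_B + m < 0 holds at both incentive levels.

```python
import math

def solve_p(pi_G, pi_B, mu, m, lam):
    """Bisection for p in the fixed-point condition (5), written in the
    equivalent form mu*p_G + (1-mu)*p_B - p = 0 with p_w from (4)."""
    def p_w(pi, p):
        return 1.0 / (1.0 + (1.0 / p - 1.0) * math.exp(-(pi + m) / lam))
    def excess(p):
        return mu * p_w(pi_G, p) + (1 - mu) * p_w(pi_B, p) - p
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(200):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

def regret_prob(pi_G, pi_B, mu, m, lam):
    """P(s = B | accept) = (1-mu) * p_B / p, as in equation (6)."""
    p = solve_p(pi_G, pi_B, mu, m, lam)
    p_B = 1.0 / (1.0 + (1.0 / p - 1.0) * math.exp(-(pi_B + m) / lam))
    return (1 - mu) * p_B / p

# Regret conditional on participation rises with the incentive m.
r_low = regret_prob(pi_G=1.0, pi_B=-1.0, mu=0.5, m=0.0, lam=1.0)
r_high = regret_prob(pi_G=1.0, pi_B=-1.0, mu=0.5, m=0.3, lam=1.0)
assert r_high > r_low
```

With these illustrative numbers the symmetric case m = 0 yields p = 1/2 and a regret probability of 1/(1 + e) ≈ 0.27, which rises to roughly 0.42 at m = 0.3.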

Proof of proposition 3 Fix q > 1/2 and δ with −(1 − q) ≤ δ ≤ 1 − q. Then (q + δ, 1 − q + δ) is a valid information structure, and the claim to be shown concerns the comparative statics with respect to δ. Write

    γ_G(δ) = μ(q + δ) / [μ(q + δ) + (1 − μ)(1 − q + δ)]  and  γ_B(δ) = μ(1 − q + δ) / [μ(1 − q + δ) + (1 − μ)(q + δ)]

for the Bayesian posteriors upon receiving the "G" and "B" signal, respectively. Write

    γ̂_G(δ) = μ(q + βδ) / [μ(q + βδ) + (1 − μ)(1 − q + βδ)]  and  γ̂_B(δ) = μ(1 − q + βδ) / [μ(1 − q + βδ) + (1 − μ)(q + βδ)]

for the respective posteriors of a non-Bayesian agent. The first claim to be shown is that

    ∂/∂δ [γ̂_G(δ) − γ_G(δ)] = βμ(p − q)/(p + βδ)² − μ(p − q)/(p + δ)² ≥ 0

where p = μq + (1 − μ)(1 − q), so that the denominators above simplify to γ_G(δ) = μ(q + δ)/(p + δ) and γ̂_G(δ) = μ(q + βδ)/(p + βδ). Note that due to q > 1/2, we have q > p. Consequently, the above can be rearranged to β ≤ ((p + βδ)/(p + δ))², which is equivalent to β(p + δ)² ≤ (p + βδ)², and further to βδ² ≤ p². Because |δ| ≤ 1 − q and p ≥ 1 − q, the previous expression is true for all β ≤ 1. The proof of the second claim follows from symmetry.
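The comparative static can be verified numerically. The sketch below evaluates the Bayesian and distorted posteriors over the admissible range of δ for a dampening parameter β < 1; the values of μ, q, and β are hypothetical illustration choices.

```python
mu, q, beta = 0.4, 0.7, 0.5   # hypothetical prior, signal precision, dampening
p = mu * q + (1 - mu) * (1 - q)

def gamma_G(d, b=1.0):
    """Posterior on the good state after a 'G' signal from the information
    structure (q + d, 1 - q + d); b < 1 dampens the reaction to the shift d."""
    return mu * (q + b * d) / (p + b * d)

# Sweep delta over its admissible range [-(1-q), 1-q].
deltas = [i / 100 * (1 - q) for i in range(-100, 101)]
diffs = [gamma_G(d, beta) - gamma_G(d) for d in deltas]

# The gap between the distorted and the Bayesian posterior is increasing in
# delta: negative for delta < 0, zero at delta = 0, positive for delta > 0.
assert all(b > a for a, b in zip(diffs, diffs[1:]))
```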

Proof of proposition 4

(i) The agent maximizes (3) by choosing (pω)ω. I use Topkis’ theorem to prove the claim. We must

show that the objective function has increasing differences both in (pω, pω′) and in (pω,m) for

all ω, ω′ ∈ Ω. As m enters the utility function additively, the latter is trivially true.

To show the former, write p := E(p_ω). By posterior separability of the cost function, taking derivatives of the objective function with respect to p_ω yields

    ∂U/∂p_ω = (ω + m)μ_ω − μ_ω [f(γ_ωG) − f(γ_ωB)] − p f′(γ_ωG) (μ_ω p − p_ω μ_ω²)/p² − (1 − p) f′(γ_ωB) (−μ_ω/(1 − p) + (1 − p_ω) μ_ω²/(1 − p)²)

            = (ω + m)μ_ω − μ_ω [f(γ_ωG) − f(γ_ωB)] − f′(γ_ωG) μ_ω (1 − γ_ωG) + f′(γ_ωB) μ_ω (1 − γ_ωB)    (7)



For ω′ ≠ ω, p_ω′ enters the above expression only through p. Therefore, we have ∂²U/∂p_ω∂p_ω′ = (∂²U/∂p_ω∂p)(∂p/∂p_ω′) = (∂²U/∂p_ω∂p) μ_ω′, so that it suffices to show that ∂²U/∂p_ω∂p ≥ 0. Indeed,

    ∂²U/∂p_ω∂p = −μ_ω f″(γ_ωG)(1 − γ_ωG) ∂γ_ωG/∂p + μ_ω f″(γ_ωB)(1 − γ_ωB) ∂γ_ωB/∂p

Since ∂γ_ωG/∂p = −γ_ωG/p and ∂γ_ωB/∂p = γ_ωB/(1 − p), by the definition of γ_ωG and γ_ωB we obtain

    ∂²U/∂p_ω∂p = f″(γ_ωG) γ_ωG (1 − γ_ωG) μ_ω/p + f″(γ_ωB) γ_ωB (1 − γ_ωB) μ_ω/(1 − p)

which is nonnegative due to f″ > 0 and γ_ωG, γ_ωB ∈ [0, 1].

(ii) Part (ii) of the above proposition derives from Matejka and McKay (2015), whose theorem 1 and lemma 2 imply that the state-contingent acceptance probabilities p_ω and the unconditional acceptance probability p satisfy the following equations.

    ∀ω ∈ Ω:  p_ω = [1 + (1/p − 1) exp(−ω/λ)]^{−1}    (8)

    0 = ∫ [p + (exp(ω/λ) − 1)^{−1}]^{−1} dμ(ω)    (9)

By recognizing that a change in incentives can simply be modeled as a shift in the prior μ, the claims in part (ii) of the above proposition become obvious. Most notably, the shape of the function p_ω is independent of the shape of the prior μ.10

(iii) I assume P(u(ω) ∈ [−m′, −m]) = 0. The extension to the case in which P(u(ω) ∈ [−m′, −m]) is sufficiently small follows by continuity. Moreover, without loss of generality, I set m = 0. We have P(regret | participate) = P(u(ω) + m ≤ 0 | participate) = ∫_{−∞}^{0} (p_ω/p) dμ(ω). By equation (8), this is equivalent to

    P(regret | participate) = ∫_{−∞}^{−m′} [p + (1 − p) exp(−(u(ω) + m)/λ)]^{−1} dμ(ω)

The denominator is decreasing in m′ as well as decreasing in p (the latter because the integral is taken only over ω for which u(ω) < −m′). Because, by part (i) of this proposition, an increase in m′ increases p_ω for all ω, it increases p. Consequently, an increase in m′ increases P(regret | participate), as was to be shown.

10 This, in turn, is an implication of the fact that with Shannon mutual information costs, the marginal costs of additional precision in a state ω are proportional to the likelihood of that state.
