Neuroscience and Biobehavioral Reviews 36 (2012) 1249–1264


Review

Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review

J.F. Christensen∗, A. Gomila
Human Evolution and Cognition, Associated Unit to the IFISC (CSIC-UIB), Department of Psychology, University of the Balearic Islands, University Campus, Building: Guillem Cifre de Colonya, 07122 Palma, Spain

Article info

Article history: Received 31 August 2011; received in revised form 12 January 2012; accepted 6 February 2012

Keywords: Moral dilemmas; Moral decision-making; Moral judgment; Moral psychology; Neuroethics

Abstract

Moral dilemma tasks have been a much appreciated experimental paradigm in empirical studies on moral cognition for decades and have, more recently, also become a preferred paradigm in the field of cognitive neuroscience of moral decision-making. Yet studies using moral dilemmas suffer from two main shortcomings: first, they lack methodological homogeneity, which impedes reliable comparisons of results across studies, thus making a meta-analysis manifestly impossible; and second, they overlook control of relevant design parameters. In this paper, we review from a principled standpoint the studies that use moral dilemmas to approach the psychology of moral judgment and its neural underpinnings. We present a systematic review of 19 experimental design parameters that can be identified in moral dilemmas. Accordingly, our analysis establishes a methodological basis for the required homogeneity between studies and suggests the consideration of experimental aspects that have not yet received much attention despite their relevance.

© 2012 Elsevier Ltd. All rights reserved.

Contents

1. Introduction ......... 1250
   1.1. The context ......... 1250
   1.2. Moral dilemma research to date ......... 1250
2. Rationale behind a moral dilemma ......... 1251
3. Dilemma formulation ......... 1252
   3.1. Presentation format ......... 1252
   3.2. Expression style ......... 1253
   3.3. Word framing effects ......... 1253
   3.4. Word number count ......... 1254
   3.5. Participant perspective ......... 1254
   3.6. Situational antecedent ......... 1255
   3.7. Order of presentation ......... 1255
   3.8. Type of question ......... 1255
   3.9. Justifications ......... 1256
4. The experimental participant and her relatedness to the story characters ......... 1256
   4.1. Demographic variables of the participant ......... 1256
   4.2. In/outgroup ......... 1256
   4.3. Kinship/friendship ......... 1259
   4.4. Speciesism ......... 1259
5. Dilemma conceptualization ......... 1259
   5.1. Intentionality ......... 1259
   5.2. Kind of transgression ......... 1260
   5.3. Directness of harm ......... 1260
   5.4. The trade-off ......... 1260
   5.5. Normality of harm ......... 1260
   5.6. Certainty of events ......... 1261
6. Schematic overview of 25 studies in relation to the 19 design parameters ......... 1261
7. Conclusion and discussion ......... 1262
   7.1. Conclusion ......... 1262
   7.2. Future directions: a theoretical framework for working hypotheses ......... 1262
Acknowledgements ......... 1262
References ......... 1262

∗ Corresponding author at: University of the Balearic Islands, University Campus, Department of Psychology, Building: Guillem Cifre de Colonya, 07122 Palma, Spain. Tel.: +34 971 25 9777; fax: +34 971 173190.

0149-7634/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.neubiorev.2012.02.008



“. . .the use of artificial moral dilemmas to explore our moral psychology is like the use of theoretical or statistical models with different parameters; parameters can be added or subtracted in order to determine which parameters contribute most significantly to the output.” (Hauser et al., 2007)

1. Introduction

1.1. The context

A morning press release in a daily newspaper informs the well-protected public of a terrible incident in a foreign war-swept country. Apparently rebel troops assaulted a little village in search of enemy soldiers and recklessly killed innocent civilians as well. Frightened, the village inhabitants had hidden together in small groups – so not all had been found and killed. When the enemy had finally left, however, it turned out that one of the women had smothered her baby. She had been trying to keep it quiet so its cries would not give away her group's hiding place to the soldiers, who would have killed them all on the spot.

Such horrible stories about real-life scenarios in which people are forced to make moral decisions about life and death flicker over our TV screens every day. Or they are brought to us by the radio, via the internet when we check our email, or by friends who tell us what they have "heard". Confronted with this kind of information, we all immediately have feelings about it, and we make judgments of approval and reproach. Some of us may even ask ourselves: what would I have done in her place? Of course, we then happily recognize how lucky we are not to have to make such moral decisions involving life and death.

But what if one is taken to a cognitive neuroscience laboratory and asked the same question? Would you . . .? In the quest for the foundational principles of human moral cognition, cognitive scientists have done exactly this: asked experimental participants to judge such morally dilemmatic situations.

It is clear that this experimental set-up does not allow one to study real life-or-death decisions, but this is not the intention here (an analysis of real-life decisions made in dilemmatic situations throughout the turmoils of the 20th century would serve that goal). Rather, moral judgments of hypothetical real-life moral dilemmas provide the cognitive scientist with valuable insight into the foundational psychological processes that underlie human moral cognition. Thus, considering the example above, human moral judgment generally deems it wrong to kill a baby, but experiments have shown that there are many variables that influence how an individual eventually judges a moral transgression such as this one (smothering the baby). What if the person to be sacrificed to save the others was not a baby, but a fellow adult? A foreigner? What if the protagonist (here the mother) would not be killed if the soldiers found them, only the men in the group? Would you smother your baby if . . .? And so forth.

When compared to other experimental approaches used in empirical studies of moral cognition – such as paradigms involving semantic judgments of sentences with moral content (Heekeren et al., 2003), judgments of disgust and indignation in response to sentences with moral-emotional connotations (Moll et al., 2005), or moral judgments after participation in game tasks such as the Dictator or Ultimatum games (Hofmann and Baumert, 2010; Takezawa et al., 2006) – moral dilemmas present a series of advantages. First, they permit the inclusion of many more variables in the formulation than single sentences do, making a more holistic approach possible. Second, they allow the inclusion of all these variables under a higher level of experimental control as compared to other approaches: the dilemmas are exactly the same for each individual participant and are not subject to the variability that may occur when different individuals – and even actors – intervene in the experiment. Third, a skeptic may remark that the extreme nature of some moral dilemmas simply grabs the reader's attention so brusquely that subtle variations in the dilemma formulation suffice to trigger distinct moral judgments. However, reality tells a different story. As the press release example above shows, even extreme moral conflicts may be part of all individuals' everyday life. Moral dilemmas allow one to elicit these moral conflicts and to investigate thoroughly which parameters our basic moral intuitions respond to – and all this under a high level of experimental control.

Consequently, a growing number of authors argue that moral dilemmas – such as the famous Crying Baby dilemma above (Greene et al., 2001) – offer a valuable tool for studying closely which factors trigger the underlying psychological processes that constitute the foundations of human moral cognition (Greene, 2008; Haidt and Graham, 2007; Hauser et al., 2007; Nichols and Knobe, 2007). This approach will ultimately allow us to draw conclusions about real-life moral decision-making, which draws on these foundational psychological processes.

1.2. Moral dilemma research to date

The past decade has witnessed a blossoming of studies in Moral Psychology and Neuroethics. Following the lead of Damasio and his colleagues (Damasio, 1995) in providing neuroimaging evidence that emotional processing is involved in decision-making, many studies that have focused on moral judgment and ethical decision-making have given rise to models that also link moral judgment to emotion (Greene, 2008; Greene et al., 2002, 2004, 2001; Haidt, 2001; Moll et al., 2002, 2005). Among the many experimental paradigms used in this field of research, moral dilemmas are very popular. The reason, as the initial quote points out, is clear: by means of an adequate moral dilemma design, this methodology allows one to explore systematically how distinct parameters modulate our moral judgment (Hauser et al., 2007). However, there has been a lack of principled analysis of the relevant parameters involved in moral dilemma formulation, and research appears to have proceeded in a rather piecemeal fashion. The methodological heterogeneity in the field has now reached such a level that the obtained evidence is neither necessarily comparable nor replicable across studies.

In this paper, we review the variations in experimental paradigms using moral dilemmas, and the criticisms they have attracted. Our overall conclusion is that, in spite of all criticism, the use of moral dilemmas can be a promising research strategy when their foundational parameters are accounted for in their full complexity.

2. Rationale behind a moral dilemma

A moral dilemma is a short story about a situation involving a moral conflict. A moral conflict is a situation in which the subject is pulled in contrary directions by rival moral reasons. It entails the awareness of the incompatibility of two courses of action and their subsequent outcomes. For some ethicists, sui generis moral conflicts are theoretically impossible, as they are incompatible with an inclusive moral theory – such conflicts would amount to an unacceptable blindspot (Sorensen, 1988). However, from the point of view of Moral Psychology it is obvious that we can – and often do – experience moral conflicts. Moral conflicts can be of many different types, such as (i) conflicts between personal interests and accepted moral values, (ii) conflicts between different duties, (iii) conflicts between sets of apparently incommensurable values, and even (iv) conflicts stemming from one unique moral principle, as in Sophie's Choice (Styron, 1979). Typically, the scenario that we face in a moral conflict – and hence also in a moral dilemma – is that both options have important moral reasons to support them.

Kohlberg (1964) was the first to use moral dilemmas in Moral Psychology. He was interested in the development of moral reasoning, that is, how the reasons given to prefer one choice over another in a conflict situation change with age and degree of moral development. He was sensitive to Piaget's position that children come to distinguish two distinct kinds of moral reasons: "one that judges actions according to their material consequences and one that only takes intentions into account. These two attitudes may co-exist at the same age and even in the same child [. . .]; we have therefore two processes partially overlapping, but of which the second gradually succeeds in dominating the first" (Piaget, 1932/1965). Kohlberg contended that moral development culminates when a person holds strong principles, in a Kantian way (Kohlberg, 1964). This approach has been criticized for several reasons, such as (i) for over-intellectualizing moral judgment, (ii) for its explicit plea for Kantian ethics over Utilitarianism, and (iii) for overlooking the bias produced by post hoc rationalizations about a choice, as when the reasons given for the choice do not necessarily reflect the actual cause of that choice but are the fruit of mere post hoc rationalization and the phenomenon of moral "dumbfoundedness" (Haidt, 2003). Thus, of course, a Kantian agent may also be challenged by a moral dilemma (e.g. in a conflict of duties).

In Neuroethics, moral dilemmas were first introduced by Greene et al. (2001), as a way to develop a paradigm for experimentally induced "cognitive conflict" in this area (Gomila, 2007), and many studies have followed suit. They found inspiration in Ethics, where dilemmas are sometimes used as "thought experiments" – intuition pumps that can reveal conceptual inconsistencies in our moral intuitions. As a matter of fact, the inspirational dilemmas, the Trolley and Bystander dilemmas (Foot, 1967; Thomson, 1976), were originally instrumental in arguing for the inconsistency of Utilitarianism (or Consequentialism in general) as an ethical theory – one which advocates choosing the option that produces the highest welfare for the largest number of individuals involved. In the original Trolley dilemma, a runaway trolley is heading for five railway workers. The driver of the trolley has the option to divert it to another track where only one worker will be killed. Given the situation, the option of killing one instead of five seems justified to most people. According to Foot (1967), though, if consequences were all there was to the morally best option, as Utilitarianism contends, it would also be morally right for a surgeon to kill one patient to save five by transplanting the former's organs to her other patients – an option that everybody rejects. Utilitarianism, in conclusion, misses a morally relevant difference between the two situations, one which has nothing to do with "the best overall outcome". Our moral psychology responds to finer differentiations than the mere motivation to obtain the "higher pay-off" or the "lesser evil".

In Thomson's version of the trolley dilemma, two scenarios are contrasted again. In the first, the protagonist can change the course of the trolley by pulling a switch which will redirect the trolley onto another track, where it will kill "only" one railway worker instead of five; or, if she considers this to be morally wrong, she can omit to carry out any action. In the footbridge version of the dilemma, the proposed action to save the five workers is different, whilst the plot and the options are the same. This time, the participant has the choice of pushing, with his own hands, a large person onto the tracks in order to stop the trolley from killing the five. While most people intuitively accept that the switch should be pulled in the first scenario, most people consider that they would not push the large man, even if the consequences are the same: one person is killed to save five. Given that the balance of gains and losses in all these situations is the same (one dead to save five), Utilitarianism again cannot account for this fully as an ethical theory: it fails to capture a morally relevant aspect of cognition that distinguishes between the first and the second pairs of scenarios.

The shortfall of Utilitarianism has also been discussed on the grounds that it requires giving the same moral weight to the interests of unknown people and future generations as that given to family and friends. From this it follows that, apparently, Utilitarianism is an impersonal theory of moral right, while our moral psychology is a radically personal one (Williams, 1973). Echoing this criticism of Utilitarianism, Greene et al. have argued that what differentiates the first kind of situation from the second is that the first is an impersonal dilemma, while the second is a personal one. Ironically, however, at the same time they took for granted that most people are utilitarian in their moral judgment (Greene et al., 2004, 2001). We will come back to the Personal–Impersonal distinction in Subsection 5.3, "directness of harm".

For experimental purposes, moral dilemmas are usually variations of the basic trolley/bystander/footbridge scenarios. Each moral dilemma presents a short story about a situation involving risk of harm to one or more individuals. In the last part of a moral dilemma, an alternative course of action is proposed (involving action vs. action omission by the participant) which would spare some of these individuals, but not all. This alternative course typically results in less harm in terms of overall numerical outcome, or in other benefits as compared to the first envisaged outcome. However, it involves committing a moral transgression towards a third party who would otherwise not be harmed (see the trade-off in Subsection 5.4). Such harm can take several forms, such as physical harm, killing, or social harm – for instance, lying, stealing, unfair behavior, or lack of respect (see Subsection 5.2 for an analysis of the different kinds of transgression). For this alternative course to take place, the protagonist of the story has to leap in, carrying out the proposed moral transgression (variations are found here in the directness of harm variable, see Subsection 5.3), which may lead to intended harm (i.e. the use of harm as a means) or harm as a side effect (i.e. harm as a non-intended side effect, see Subsection 5.1, "intentionality"). After reading the dilemma, the experimental participant is asked to judge whether or not the protagonist should carry out the moral transgression. This choice is commonly referred to as the moral judgment.

Given this general structure of the dilemmas, for the purpose of our analysis we divide the relevant experimental design parameters into three categories: (i) dilemma formulation refers to how the story is presented, what kind of participant response is required, and how answer effects can be triggered by manipulating variables such as the expression style or the wording of the dilemmas (Section 3); (ii) participant characteristics describes the experimental participant and her relatedness to the characters in the story (Section 4); and (iii) dilemma conceptualization describes the morally relevant elements that characterize the situation as understood by the participant (Section 5). (See Fig. 1 for a schematic overview of the dilemma structure.)

Fig. 1. In this figure we summarize the variables discussed throughout this paper as being important in moral dilemma design. The figure shows the time course of a single experimental trial presenting a moral dilemma to an experimental participant, and the different variables that can be manipulated by the experimenter. The moral judgment of the participant will vary as a function of the weighting of the different variables. The large grey arrow shows the time course of the dilemma as presented to the participant. As she begins to read the dilemma, she obtains information about the preliminary situation of the dilemmatic encounter and is asked to take the perspective of the protagonist of the story. Then the dilemmatic situation is described. Subsequently, the dilemma formulation leads the participant to understand the two possible outcomes of the dilemma and proposes an action of moral transgression to change the first described course of events. With this, the participant sees herself in the situation of having to choose between two sets of victims: the first set would be harmed if the participant chooses not to intervene in the situation, and the second set would only be harmed if the participant chooses to carry out the proposed moral transgression. How the dilemma is formulated is very important; we discuss this in Section 3 of this paper. Furthermore, in Section 4 we show how the experimental participant will be influenced by a wealth of variables concerning her relatedness to the other characters in the dilemma. Finally, in Section 5 of this paper we explore the dilemma conceptualization, which has to be explicit in the dilemma formulation from the outset. The variables described in all three sections interact in the process that will eventually lead the experimental participant to carry out one moral judgment or another.

3. Dilemma formulation

3.1. Presentation format

In behavioral studies, the experimental task can either be computer-based, or participants can be asked to complete a simple pen-and-paper questionnaire. A drawback of the pen-and-paper method is that reaction time (RT) assessment is not possible. Besides, an experimental comparison between the pen-and-paper method and computerized presentation could be of interest, to find out whether it makes a difference to have a questionnaire in front of you, where you see your answers and can go back to read them, or to answer by means of a computer keyboard, without this possibility. In neuroimaging studies the presentation format is usually computerized. Participants read the dilemmas and make their judgments inside the scanner (as, for instance, in Greene et al., 2004 and Greene et al., 2001), or merely read the dilemmas inside the scanner and make their judgment after the neuroimaging session (Berthoz et al., 2002).

Recently, web-based computerized presentation has become a popular option due to the advantage of assessing a broader participant pool which is not limited to the most commonly used participant population: undergraduates (Kraut et al., 2004; Nosek et al., 2002). Nevertheless, the obvious disadvantage of web-based administration is that no rigorous control can be made of participants' responses to questions such as age, gender, religious affiliation, nationality, or even of operational competence in the language in which the dilemmas are written (Hauser et al., 2007). See also Greenwald et al. (2003, p. 199) for a discussion of limitations of web site data in social cognition tasks.

Another aspect of the presentation format is how the task is actually presented on a computer screen. Commonly, dilemmas are displayed on the screen as blocks of text (Greene et al., 2001), though another option was explored by Greene et al. (2008), who presented the dilemmas as horizontally streaming text (left to right) in a 36 pt. Courier font, at approximately 16 characters per second (Greene et al., 2008).

As a general recommendation for future neuroimaging studies using moral dilemmas, we suggest using the procedure proposed by Greene et al. (2001), which has also been followed by numerous others (Borg et al., 2006; Moore et al., 2011a,b). It consists of a computer-based set-up using three text screens for the presentation of each dilemma. The first two bits of text are presented on the same screen: the first explains the general situation and the second, which is added to the previous screen by button press, proposes the alternative course of action including the moral transgression. The third screen clears the previous two and consists of the question eliciting the moral judgment of the participant. This clear order allows one to control explicitly for the moment at which the participant is exposed to each piece of information in the dilemma (see also Subsections 3.6 and 3.7 for discussions of the importance of the order of presentation and of the impact of a situational antecedent on the process of judgment formation).
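To make this three-screen procedure concrete, here is a minimal sketch of a single trial in Python. The helper show_text (a display routine that draws a screen and waits for a key press or a deadline), the dictionary keys of the dilemma record, and the 30 s screen limit are illustrative assumptions of ours, not details specified by Greene et al. (2001).

```python
import time

def run_dilemma_trial(dilemma, show_text, max_screen_s=30.0):
    """Present one moral dilemma using a three-screen procedure.

    Assumptions (for illustration only): `dilemma` is a dict with the keys
    'situation', 'proposed_action' and 'question'; `show_text(text, timeout_s)`
    displays the text and returns the key pressed, or None on timeout.
    """
    # Screen 1: the general situation; the participant presses a key to continue.
    show_text(dilemma["situation"], timeout_s=max_screen_s)

    # Screen 2: the same situation text plus the proposed moral transgression,
    # added to the screen after the button press.
    show_text(dilemma["situation"] + "\n\n" + dilemma["proposed_action"],
              timeout_s=max_screen_s)

    # Screen 3: the previous text is cleared; only the judgment question remains.
    # The timeout keeps the response phase from becoming fully self-paced.
    t0 = time.monotonic()
    key = show_text(dilemma["question"], timeout_s=max_screen_s)
    rt = time.monotonic() - t0
    return {"judgment_key": key, "rt_s": rt}
```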

The last aspect of the presentation format regards the presentation time of each screen, i.e. whether to leave the task self-paced or to introduce a time limit. We suggest that each screen should have a time limit. This springs from two considerations. Firstly, the dilemma task is rather long and runs the risk of participant fatigue. Secondly, if the aim is to trigger basic moral intuitions, response time should not be free, as this could result in deliberate moral reasoning. Deliberate, explicit reasoning may mask implicit moral intuitions, as shown by another popular methodology used in social cognition experiments: the Implicit Association Test¹ (IAT) (Greenwald et al., 1998). The authors of the task contend that the IAT effect (prolonged RT towards certain attributes) reveals implicit attitudes that people have which they might not be aware of, or might choose not to reveal if asked explicitly. Thus, this task is an alternative to explicit questionnaire methods, because the responses obtained by the latter method may be biased by self-presentation strategies (fakability) or by introspective limits (unawareness) (Schnabel et al., 2008a,b).

The possibility of unawareness of one's motives and implicit attitudes has also been assessed in moral dilemma research (see Subsection 3.9 for a discussion of the use of justifications in moral dilemma tasks), and the danger of fakability in moral dilemma tasks remains an unsolved issue. Social desirability is a well-documented phenomenon (Crowne and Marlowe, 1960; Edwards and Horst, 1953; Richman et al., 1999) and is likely to occur also in moral dilemma tasks. Consequently, strategies to prompt fast and more intuitive (or implicit) moral judgments constitute an important challenge for future studies.

In addition, we recommend a closer examination of a proposal by Greenwald and colleagues to check participant data individually for validity before analyzing the whole data set, and to eliminate inconsistent or suspicious data points. Their suggestion stems from an effort to develop a better scoring key for the IAT and involves taking each participant's response latencies (and their SD) into account to determine whether a trial should be included or not. The algorithm involves, for instance, systematically eliminating trials with latencies over a certain time point, and excluding a participant's complete data set if 10% of trials are under a minimum response latency (Greenwald et al., 2003); see also Cvencek et al. (2009) for an example of how fakability can be detected with statistical methods.

¹ Brief description of the IAT: the IAT aims to assess implicit attitudes to a variety of social phenomena by measuring their implicit underlying automatic evaluation. It does this by measuring automatic associations between a bipolar target (for instance, "me" vs. "others") and a bipolar attribute (for instance, "shy" vs. "sociable"). Participants are instructed to carry out a series of word sorting tasks "as fast as possible". The rationale of the task is that faster responses will occur when two highly associated words (for instance, "white + pleasant") require the same response key instead of different ones. Conversely, for less implicitly associated words or concepts (such as, perhaps for some people, "black + pleasant") a longer response time is expected when these two words require a single response key.
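Returning to the latency-based screening just described: as a rough illustration, here is a minimal sketch in Python. The cut-off values (an upper limit of 10,000 ms, a lower limit of 300 ms, a 10% threshold) follow the spirit of the scoring recommendations in Greenwald et al. (2003), but the exact numbers and the data layout are assumptions chosen for illustration, not a reimplementation of their algorithm.

```python
def screen_latencies(trials, upper_ms=10_000, lower_ms=300, max_fast_prop=0.10):
    """Latency-based screening of one participant's trials.

    `trials` is assumed to be a list of dicts with an 'rt_ms' key.
    Returns (kept_trials, participant_excluded). Cut-offs are illustrative.
    """
    if not trials:
        return [], True

    # Drop individual trials with implausibly long latencies.
    kept = [t for t in trials if t["rt_ms"] <= upper_ms]

    # Exclude the whole participant if too large a proportion of responses
    # was too fast to reflect genuine reading and judgment.
    n_fast = sum(1 for t in trials if t["rt_ms"] < lower_ms)
    participant_excluded = (n_fast / len(trials)) > max_fast_prop

    return kept, participant_excluded
```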

3.2. Expression style

Borg et al. (2006) argued that the lack of control of expression styles in dilemma formulation could bias the subsequent moral judgments. From this it follows that if the dilemmas of one dilemma type are formulated in a very colorful style, using flowery-emotional adjectives, whereas the dilemmas of the other type are crammed with technical vocabulary and abstract reasoning concepts, then there is a reasonable danger that differences in moral judgment, emotional arousal, and underlying neural activity may be due to the differences in expression style – and not to the intended conceptual differentiations between dilemmas. In fact, in their 2006 fMRI study, these authors found a significant effect of the language used in the dilemma formulation on neural activity. To test their assumption they had created a dramatic (colorful) and a non-dramatic (non-colorful) version of each dilemma. The results showed that while the behavioral effects of this manipulation were modest, there was a significant interaction of Language × Morality in brain structures related to emotion processing (anterior cingulate cortex, the posterior orbitofrontal gyrus and the lateral temporal pole). Therefore, the authors conclude that future studies should make an effort to standardize the amount of descriptive and dramatic language in different dilemma categories (Borg et al., 2006).

Another relevant piece of evidence in this regard comes from a slightly different methodological set-up. Nichols and Knobe (2007) approached the question not by studying moral judgments of dilemmas, but by going a step further along the chain of events, asking their participants to judge the moral responsibility of agents after the latter had committed a moral transgression. They specifically investigated whether and how people's attribution of more or less moral responsibility to agents for committed wrongdoing could be manipulated. They found that people's attributions of moral responsibility varied depending on how the question was formulated. Thus, following abstract theoretical formulations, people did not consider the agent responsible for her deeds, especially when they thought the agent had no alternative option, or if the moral transgression itself was viewed as predetermined by previous actions or by natural laws. Conversely, emotionally salient (dramatic) formulations triggered explicit attribution of moral responsibility in exactly the same, predetermined situations. Therefore, these authors also conclude that the moral intuitions of a person are very likely to be influenced by the affective wording with which a piece of information is presented in a moral dilemma (Nichols and Knobe, 2007).

These results suggest that when dilemmas are translated into other languages, it is advisable to control for or to standardize the expressive style and vocabulary in those languages. Story writing is a complicated matter and most people are not necessarily trained in literary translation, so some care is needed here.

3.3. Word framing effects

Word framing effects have been shown to affect decision-making (Tversky and Kahneman, 1981). Classical work demonstrates that under certain circumstances experimental participants violate a very basic principle of rationality: the principle of invariance. This principle asserts that one's choices ought to depend on the situation itself, not on the way it is described. However, research has shown that people have different preferences over exactly equivalent situations because of the way they are described. For instance, people prefer a situation in which half a population is saved to one in which half the population dies in an epidemic; people also prefer a "cash discount" over a "credit card surcharge" (Thaler, 2008). See more on this issue in Mikhail (2007), in his section on structural descriptions (Mikhail, 2007, p. 145).

In research using moral dilemmas, Petrinovich et al. (1993) and Petrinovich and O'Neill (1996) stressed the danger of framing effects due to the use of different vocabulary in the dilemmas. For instance, different moral judgments were elicited depending on whether the outcome of the action was expressed using the word kill as compared to the word save (Petrinovich and O'Neill, 1996; Petrinovich et al., 1993). This is an aspect not yet acknowledged in Neuroethics. However, it is an important issue, also conceptually related to the discussion of the action vs. omission bias (see Subsection 5.1 on intentionality), which designates the different moral importance people attribute to ways of describing the agent's implication in the course of action in the scenario, even if the general outcome is the same.

Finally, the "framing effects" also raise the general question of moral dilemmas being textual stimuli which inevitably trigger processing in brain regions involved in language processing – a fact that could possibly lead to confounds regarding the origin and cause of neural activation. In fact, a skeptical argument against neuroimaging studies of moral judgment has been leveled on the grounds of a meta-analysis by Ferstl and colleagues, who pointed out that the neural underpinnings of moral judgment are suspiciously similar to those involved in different levels of language and text comprehension.² This challenges the assumption of specificity of these regions for moral judgment formation and for other processes associated with social cognition, such as Theory of Mind (ToM) (Ferstl et al., 2008). Similarly, skepticism has been voiced specifically regarding the frontal regions of activation found in studies on moral judgment with text stimuli. In this case, the alternative explanation offered proposes that these activations could be due to different levels of task demands associated with the process of text comprehension: as task demands go up due to the need for higher levels of text comprehension, neural activity is also enhanced (Hashimoto and Sakai, 2002; Love et al., 2006; Peelle et al., 2004).

However, both these criticisms overlook two central points. First, they ignore the fact that studies of moral judgment using moral dilemmas primarily aim to contrast judgments to two or more types of dilemmas. Hence, if the formulations of the different dilemma types are held constant, any confounding activity will be cancelled out, because any "problematic" component is shared across conditions. Second, studies using pictures instead of text stimuli in ToM and moral judgment tasks report similar patterns of neural activation as studies that use text (Cikara et al., 2010; Ciaramidaro et al., 2007).

² Studies on moral judgment with moral dilemmas have repeatedly reported activations in the cingulate cortex, the dorsolateral and medial prefrontal cortex (mPFC and DLPFC) and the temporo-parietal junction (TPJ) (Greene and Haidt, 2002; Greene et al., 2004; Moll and Schulkin, 2009; Young and Saxe, 2009a). However, a different set of studies has shown that these areas are all also specifically implied in various levels of text comprehension. For instance, the TPJ is found active in the process of comprehension of ambiguous text, while comprehension of coherent text activates PCC/precuneus, DLPFC and mPFC. Metaphor comprehension is associated with activation in the anterior temporal regions, lateral PFC and TPJ. Furthermore, there is a proposal suggesting that the temporal poles are not only implied in memory and episodic memory processes, as classically assumed, but also specifically in integrating language into a coherent representation, by associating syntactic, semantic and episodic sources of information. In particular, it appears that the posterior portion of the temporal gyrus is implied in a coherence analysis together with the TPJ, while the mid portion is activated for basic language processing, and the anterior and posterior regions ultimately integrate the multimodal information into a coherent representation.

Of course, it remains an endeavor for future testing to fine-tune our knowledge of the extent to which some regions of neural activation are proper to language processing or shared with social or moral cognition. However, the involvement of a brain area in multiple functions is not an exception but the rule (Gomila and Calvo, 2010), and it appears evident that some processes of language comprehension are shared with other processes of moral dilemma comprehension. Nonetheless, we agree that it is crucial to make sure that linguistic components (syntactic complexity, semantic ambiguity, text coherence, expression style, etc.) are carefully controlled for.

3.4. Word number count

In 2001, Greene and colleagues elaborated a dilemma set which has been repeatedly used in studies on moral judgment. However, this dilemma set included dilemmas that were very different in terms of word number. Especially unfortunate was that their two dilemma types had rather divergent mean word counts (Borg et al., 2006; Moore et al., 2008).

This is a particularly relevant issue for fMRI studies, where the blood oxygen level dependent (BOLD) response has a particular timing: it is about 4–6 s long before it returns to baseline, where it should be at the start of the subsequent trial. For this reason, ideally, trial length (here, word number) should be kept constant in terms of reading time between experimental conditions, in order to have approximately the same BOLD response timing in each trial. Furthermore, as studies using moral dilemmas involve a lot of reading, keeping the dilemmas as short as possible prevents participant fatigue.
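A simple way to enforce this during stimulus construction is to compare word counts across dilemma categories before any data are collected. Below is a minimal sketch, assuming the dilemmas are available as plain strings grouped by condition; the 10% imbalance criterion is an arbitrary illustrative threshold, not a published recommendation.

```python
from statistics import mean, stdev

def word_count_summary(dilemmas_by_condition, max_rel_diff=0.10):
    """Summarize dilemma length per condition and flag imbalance.

    `dilemmas_by_condition` is assumed to map a condition label
    (e.g. 'personal', 'impersonal') to a list of dilemma texts.
    """
    summary = {}
    for condition, texts in dilemmas_by_condition.items():
        counts = [len(text.split()) for text in texts]
        summary[condition] = {
            "mean_words": mean(counts),
            "sd_words": stdev(counts) if len(counts) > 1 else 0.0,
        }

    means = [s["mean_words"] for s in summary.values()]
    # Flag the set as balanced if the largest between-condition difference in
    # mean word count stays within max_rel_diff of the longest condition.
    balanced = bool(means) and (max(means) - min(means)) <= max_rel_diff * max(means)
    return summary, balanced
```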

3.5. Participant perspective

The formulation of the moral dilemma can implicitly or explicitly impose a perspective on the experimental participant that is likely to affect her judgment. In some studies, the experimental participant is asked to take the perspective of a protagonist: "you are standing on a footbridge over train tracks as . . . X . . . happens", whereas in others the dilemma is described in the third person: "David is standing on a footbridge over train tracks as . . . X . . . happens". Participants' answers were shown to differ depending on the participant perspective in the dilemma (Royzman and Baron, 2002).

At the neural level, the activation that correlates with the experience of agency is different from the activation observed when merely observing the same action committed by another person (Farrer and Frith, 2002). If we take moral emotions into account (Tangney et al., 2007), in the case of judging one's own morally right or wrong actions, emotions like guilt or shame or even fear of negative social evaluation will guide our moral judgment (Berthoz et al., 2006; Finger et al., 2006; Takahashi et al., 2004). Conversely, if we are judging another person's action, emotions such as indignation and anger will prevail in moral judgment (Moll et al., 2008a,b; Shaver, 1985). Consequently, neural activity in emotion processing regions in particular is likely to differ between the two perspectives. A recent study provides support for this view, showing a differential neural signature when participants judged their own and others' moral and immoral actions (Zahn et al., 2009). Accordingly, the authors argue that this difference is mainly related to the differential moral emotions that are elicited by one's own actions (shame, guilt, pride; enhanced neural activity: PFC and anterior temporal lobe activations) and those of others (indignation, anger, praise; enhanced neural activity: lateral orbitofrontal cortex, insula and DLPFC activations).

Given that the underlying neural, cognitive and emotional mechanisms are not the same, we conclude that the two participant perspective formats cannot be considered equivalent for moral dilemma formulation. Hence, a challenge to dilemma formulation is to take into consideration that human moral psychology responds differently to the observation and the commission of moral harm, and to the individual's perspective on it.

3.6. Situational antecedent

The antecedent or "initial circumstances" variable refers to any description of actions or situations that have led to the dilemma situation. It might be a reference to the previous physical or interpersonal situation, or a concrete allusion to previous actions by the protagonist or the future victim (Mikhail, 2007). For instance, a situational antecedent could reveal that the proposed moral transgression would, as a matter of fact, be an act of self-defense (Nichols and Mallon, 2006). Such a situation can make a participant more inclined to consent to harm than if such a previous encounter between agent and victim had not been described. In real-life scenarios, a situational antecedent will almost always have preceded a problematic situation, and this will influence our moral judgment. This fact makes this variable an interesting means of triggering divergent moral judgments for the same action, depending on the kind of antecedent (Bjorklund, 2003; Cushman, 2008; Royzman and Baron, 2002; Young and Saxe, 2009b).

3.7. Order of presentation

Early research in Moral Psychology revealed that the presentation order of intentions and consequences in a situational antecedent influences moral judgment (Karniol, 1978; Keasey, 1978). In general, this variable, like the former two (word framing effects and situational antecedent descriptions), concerns the pragmatic understanding of narratives and the well-known phenomenon of implicature (Grice, 1975): people understand more than is said, even if care is taken to make the story fully explicit. Thus, how the story is deployed affects how it is understood and judged. To deal with this aspect of narrative understanding, we suggest keeping the order of presentation of all relevant information constant throughout all the dilemmas of a set. However, an interesting experimental manipulation is also to counterbalance this presentation order to explore the impact of such a manipulation on moral judgment.

This procedure has been largely overlooked in Neuroethics – to our knowledge – with only one very remarkable exception. Young and Saxe (2008) showed how the order of presentation of the elements of a dilemma altered the underlying neural activity. Specifically, the precuneus and the temporo-parietal junction showed an enhanced pattern of activation when the agent's belief about the moral transgression was presented at the beginning of the dilemma, as compared to when it was presented later in the narrative. The authors suggest that this piece of evidence underlines the relevance of order of presentation in triggering differential moral intuitions. Consequently, it suggests a wealth of possibilities for future studies to test specifically how different kinds of foreshadowing and background knowledge, on the one hand (Subsection 3.6), and their order of presentation to the experimental participant, on the other (this subsection), may trigger distinct moral intuitions that lead to differential moral judgments.
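If an experimenter chooses to counterbalance rather than fix the presentation order, the assignment of participants to orders can be generated in advance. Below is a minimal sketch; the three element labels are illustrative placeholders of ours, not a scheme taken from the studies discussed here.

```python
from itertools import permutations

def assign_presentation_orders(n_participants,
                               elements=("antecedent", "belief", "outcome")):
    """Assign each participant one fixed presentation order, cycling through
    all permutations so that every order occurs (roughly) equally often."""
    orders = list(permutations(elements))
    return {p: orders[p % len(orders)] for p in range(n_participants)}

# Example: orders for 12 participants (each of the 6 permutations used twice).
# assignment = assign_presentation_orders(12)
```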

3.8. Type of question

In some studies the participant is asked to rate the appropriateness of the action, mostly in an appropriate–inappropriate dichotomy (Franklin et al., 2009; Greene et al., 2004, 2001; Moll et al., 2008a,b; Valdesolo and DeSteno, 2006; Waldmann and Dieterich, 2007). In others, the participant states whether she would choose to carry out the depicted action in a yes–no dichotomy (would you . . .?) (Koenigs et al., 2007), which may be preceded by another dichotomous question asking whether the action is right or wrong (is it right to . . .?) (Bjorklund, 2003; Borg et al., 2006; Fumagalli et al., 2009; Nichols and Mallon, 2006). Another alternative is to ask the participants to indicate their judgment on a permissible–forbidden scale (Cushman, 2008; Cushman et al., 2006).

However, beyond the framing effects lurking here (see Subsection 3.3), it is not the same to ask whether an action is permissible as to ask whether it is appropriate. The first term relates to normative thinking about the legal permissibility of the action (especially as its counterpart is forbidden), while appropriate asks whether the participant finds the action obligatory in the situation (where obligation implies permission, but not the other way around). Similarly, the right–wrong dichotomy hints very much towards the legal permissibility of the action. On the other hand, the would you . . .? question does not allow a distinction between the course of action and its normative status (one could decide to do what she takes to be wrong).

Considering this panorama, a promising avenue of research is to collect systematic evidence about moral judgments and their neural underpinnings from one single dilemma set with different question formulations. This strategy has been followed by at least three studies so far, yet with slightly diverging conclusions. Cushman (2008) showed that when asked about the wrongness or permissibility of a moral transgression, participants appeared to rely on mental state information about the agent only, as in the classical example "was it wrong of Peter to drive although he was drunk?" Conversely, for the assignation of punishment and blame, both the mental state and the causal link between the agent and the harmful consequences appear important, as in "should Peter be punished for driving although he was drunk and ran over a girl on his way home?" The authors argue that their findings show that two distinct psychological processes underlie these two types of judgment: the first begins with the harmful action and analyzes the mental states behind that action, then judges whether what the agent did was right or wrong (driving although he knew that he was drunk). In the absence of harmful consequences the process stops here and no punishment or major blame is assigned. However, if consequences do occur, the causal chain that led to the action – including the mental state – becomes more relevant, and blame and/or punishment are assigned severely (in line with the discussion of moral responsibility by Nichols and Knobe (2007) outlined in Subsection 3.2; see also the analysis of intentionality in Subsection 5.1). The fact that Peter ran over a girl is blameworthy enough, yet the fact that he knew he was drunk when he drove off makes the assignation of blame and punishment even more severe.

O'Hara et al. (2010) also argue that different questions may entail differential moral judgments, which would ultimately interfere with the objective of developing a unified theory of moral judgment (O'Hara et al., 2010). They address this concern empirically by using four different question labels – wrong, inappropriate, forbidden and blameworthy – and found that people judged moral transgressions more severely when the words "wrong" or "inappropriate" were part of the formulation than when the words "forbidden" or "blameworthy" were used. The authors do not make any specific suggestions about which factors in the formulation may be triggering these differences. Furthermore, and surprisingly, in contrast to Cushman (2008), they conclude that the relatively small effect sizes in their study demonstrate that the question formats are equivalent and that, consequently, results across studies with different question formats can legitimately be compared.

This conclusion is unjustified for several reasons: (i) the findings by Cushman (2008) outlined above; (ii) the fact that the small effect sizes of their study might be due to a rather heterogeneous sample (web-based assessment); and (iii) the following study, which suggests otherwise. Borg et al. (2006) found different behavioral effects following the questions Is it wrong to . . .? and Would you . . .? (Borg et al., 2006). As compared to judgments of nonmoral scenarios, the question Would you . . .? resulted in faster RT, while the question Is it wrong to . . .? did not show any differences in RT as compared to the nonmoral condition. Again, no specific analysis is provided as to which conceptual or moral intuition distinctions may underlie these differences. The authors merely suggest that, in view of this finding, it is likely that deciding what to do is processed differently than deciding whether an action is wrong or not.

For the scope of the present review it will suffice to conclude that these three studies show that there is potential for promising future testing to determine more reliably which moral intuitions – psychological processes – are specifically triggered by which kind of linguistic label in the question formulation.

As to the question format – dichotomous or Likert scales – there appears to be a tendency to use dichotomous answer formats across studies. See Table 1 for the different questions and response formats used throughout the studies.

3.9. Justifications

Despite criticisms of Kohlberg's method of asking for reasons for the judgment after it was made, evidence suggests that in some circumstances this procedure may be an interesting way to study whether the participants are aware of the variables that make them judge a dilemmatic scenario in one way or another (Cushman et al., 2006). It is the contrast between apparently similar pairs of situations which elicits divergent moral judgments that makes such a justification task suitable for some research hypotheses. Moral "dumbfoundedness" is not the default situation but rather an exception, given that reflexive moral reasoning can influence moral judgment. Dissociations between reflexive reasons and intuitive ones, when they occur, are then also a relevant piece of evidence that any model has to account for.

In consequence, we find this procedure a very elegant way tossess the participants’ awareness of their motivations for makingne or another judgment. Yet, we suggest, firstly, to make partic-pants elaborate these justifications only at the end of the wholexperiment (and not after each dilemma) and, secondly, to waitntil the end of the experiment before telling them that they willave to give such justifications for their judgments, in order to avoid

possible conflict between the judgment the participant wisheso make and the anticipation of a failure to provide a satisfactoryustification for it.

4. The experimental participant and her relatedness to the story characters

4.1. Demographic variables of the participant

Evidence from studies on the influence of participant characteristics on moral judgment is still controversial. However, there are data to support the idea that they can – and actually do – influence moral judgment to some extent. The crucial demographic variables of the experimental participant are, primarily, ethnic and cultural background, socio-economical status, educational background (Hauser et al., 2007), age (Wang, 1996), gender (Fumagalli et al., 2009, 2010), political orientation (Graham et al., 2009; Haidt and Graham, 2007), level of religiosity (Hauser et al., 2007), thinking style (Lombrozo, 2009), need for cognition (Bartels, 2008; Cacioppo et al., 1984) and sensitivity to reward and punishment (Moore et al., 2011a,b).


Although these authors do not always report differences in moral judgment as a function of these variables, they often suggest that failing to find such differences could be due to the homogeneity of their experimental groups. O'Neill and Petrinovich (1998) propose that the effects of ethnic background or nationality on moral judgment should be studied in the respective countries of the nationalities to be compared, and not just by selecting two nationalities that happen to live in the same country. In their 1998 study on cross-cultural differences in moral judgment they elegantly confirmed such an effect of ethnicity and nationality by comparing a U.S. sample with a Taiwanese sample in their respective countries (O'Neill and Petrinovich, 1998).

4.2. In/outgroup

The in/out group variable (Cikara et al., 2010; O'Neill and Petrinovich, 1998; Petrinovich et al., 1993) indicates whether the possible victims of the moral transgression belong to the participant's social group. Does he or she have the same ethnic background, nationality, gender, age or socio-economical status?

In spite of the importance of this variable, only few studies on moral judgment successfully make the explicit effort to rule out possible answer effects specifically due to in/out group differences (Borg et al., 2006). Others merely suggest controlling for this variable by using anonymous agents (Hauser et al., 2007). However, one might object that in real-life scenarios agents are never anonymous, and that it is very important to account for social biases in an inclusive theory of moral judgment: social categorization always occurs (Hewstone et al., 2002; Nelson, 2006; Rabbie, 1981).

In the neuroimaging literature, Cikara et al. (2010) showed how the acceptability of causing harm to one to save many can be altered by manipulating whether the individual to be sacrificed belongs to the protagonist's social group or not. Participants were more willing to sacrifice individuals of extreme outgroups (such as the homeless) to save ingroup members ("good Americans") than fellow "ingroup targets". This study shows that the in/out group variable is crucial also for moral dilemma design due to the psychological phenomenon of social categorization (Cikara et al., 2010). In fact, Cikara et al. based their study specifically on the assumption that social categorization is due to stereotype content information in the stimuli. Given this hypothesis, they aimed to empirically test the Stereotype Content Model (Fiske et al., 2007, 2002). This model contends that individuals classify other individuals into four basic categories in a two-dimensional space, where the two dimensions are warmth and competence (high vs. low warmth, and high vs. low competence individuals). According to this model, which emotions the observer will have when something good or bad happens to the individual in the story – and which actions she will be willing to engage in for that individual – will depend on the grouping into one of the resulting four categories. For instance, if it is a high-warmth and low-competence individual (elderly, children), harm will result in feelings of pity in the observer, which will subsequently lead to prosocial helping behaviors. Conversely, if it is a low-warmth and low-competence individual (homeless), harm towards this individual is more likely to provoke feelings of disgust which will make the observer withdraw, and so forth. Consequently, as this initial evidence with dilemmatic picture stimuli by Cikara et al. (2010) demonstrates, as with moral dilemma tasks with text stimuli, the in-/outgroup variable shows promise for experimental manipulation because it reflects a general bias of the human mind.
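Purely by way of illustration – and not as a procedure taken from Cikara et al. (2010) or any other study reviewed here – the following Python sketch shows how hypothetical warmth and competence ratings of story characters could be mapped onto the four quadrants of the Stereotype Content Model when preparing in-/outgroup dilemma variants; the character labels, rating values and scale midpoint are invented placeholders.

    # Illustrative sketch: classifying dilemma characters into Stereotype Content
    # Model quadrants from hypothetical warmth/competence pilot ratings (1-7 scales assumed).

    def scm_quadrant(warmth: float, competence: float, midpoint: float = 4.0) -> str:
        """Return the SCM quadrant and the emotion broadly associated with it."""
        if warmth >= midpoint and competence >= midpoint:
            return "high warmth / high competence (admiration)"
        if warmth >= midpoint and competence < midpoint:
            return "high warmth / low competence (pity)"
        if warmth < midpoint and competence >= midpoint:
            return "low warmth / high competence (envy)"
        return "low warmth / low competence (contempt/disgust)"

    # Hypothetical pilot ratings for characters used in dilemma variants.
    characters = {
        "elderly neighbour": (6.1, 2.8),
        "homeless stranger": (3.2, 2.5),
        "business rival": (2.9, 5.8),
        "fellow volunteer": (5.9, 5.4),
    }

    for name, (w, c) in characters.items():
        print(f"{name}: {scm_quadrant(w, c)}")

Such a coding step would make it possible to balance, rather than confound, stereotype content across dilemma versions.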

Apart from controlling explicitly for this variable in order to target distinct moral judgments as a function of these variables, we propose, furthermore, to include a measure of prejudice level in the experimental procedure as a means of experimental control, either in the format of a questionnaire (Phillips and Ziller, 1997) or by using the IAT to probe for implicit attitudes of prejudice (Greenwald et al., 1998; Sabin et al., 2009).

Table 1
Summary of selected studies that use moral dilemmas for moral judgment tasks. Sorted by the variables the authors have controlled for.

Columns (Methodology, then Variables): Study; Type of question* (Subsection 3.8); Presentation format** (Subsection 3.1); Type of study: behavioral/neuroscientific/lesion***; Participant perspective**** (Subsection 3.5); Intentionality (Subsection 5.1); Situational antecedent (Subsection 3.6); Variables of the victim (Subsections 4.2 and 4.3); Variables of the participant (Subsection 4.1); Kin-/friendship variable (Subsection 4.3); Type of harm/transgression (Subsection 5.2); Numerical outcome/trade-off (Subsection 5.4); Directness of harm (Subsection 5.3); Certainty of events (Subsection 5.6); Normality of dilemma (Subsection 5.5); In-/outgroup (Subsection 4.2); Self sacrifice controlled?; Word number count (Subsection 3.4); Expression style (Subsection 3.2).

Dilemma pool by Greene et al.
Greene et al. (2001): 1 C N 1st v v v v × × v v v v v v
Greene et al. (2004): 1 C N 1st v v v v × × v v v v v v
Valdesolo and DeSteno (2006): 1 C B 1st v × × v v
Killgore et al. (2007): 1 C B 1st v v v v × × v v v v v v
Koenigs et al. (2007): 6 C L 1st v v v × × v v v v v v
Greene et al. (2008): 1 C B 1st v v v × × v v v v v v
Fumagalli et al. (2009): 2 + 5 C B 1st v v × v v × × v v v v v v
Fumagalli et al. (2010): 2 + 3 C N 1st v v × v v × × v v v v v v

Justification part of study
Royzman and Baron (2002): 7 C B 1 + 3 × × × × × × × × × v v v
Bjorklund (2003): 2 + 5 PP B 1st v × v × v × v v v × v
Cushman et al. (2006): 3 W B 3rd × × × × v v v
Hauser et al. (2007): 2 + 5 W B 3rd × v × × × × × v v × v
Greene et al. (2009): 1 C B 3rd × v × × × × × × v v v
Lanteri et al. (2008): 7 PP B 3rd × × × × ×
Lombrozo (2009): 3 PP B 3rd × × ×

Studies with lesions or pathologies
Koenigs et al. (2007): 6 C L 1st v v v × × v v v v v
Ciaramidaro et al. (2007): 1 C L 1st v × v v v × v v v
Franklin et al. (2009): 1 C L 1st v × v v v × v v v v v

Studies with additional experimental manipulation
Bjorklund (2003): 2 + 5 PP B 1st v × v × v × v v × v
Valdesolo and DeSteno (2006): 1 C B 1st v v × × × v


Killgore et al. (2007): 1 C B 1st v v v v v × v v v v
Young and Koenigs (2007): 3 C N 3rd × × × × × ×
Greene et al. (2008): 1 C B 1st v v v × × v v v v
Young and Saxe (2008): 6 C N 3rd × × × × ×
Cikara et al. (2010): 8 C N 3rd × × × × × × × ×

Studies taking individual differences into account
Bartels (2008): 6 PP B 1st v v × v × × × v v v v v
Berthoz et al. (2006): 9 C N 1 + 3 × v v × × v ×
Borg et al. (2006): 6 C N 1st × × v × × × × × × ×
Killgore et al. (2007): 1 C B 1st v v v v v × v v v v v
Waldmann and Dieterich (2007): 10 PP B 3rd × v × × × v v v
Hauser et al. (2007): 2 + 5 W B 3rd × v × × × × × v v × ×
Moore et al. (2008): 1 C B 1st v v v v v × × v × × ×
Fumagalli et al. (2009): 2 + 5 C B 1st v v × v v v × v v v v v
Fumagalli et al. (2010): 2 + 3 C N 1st v v × v v v × v v v v v

% explicitly controlled (×): – – – – 40 28 8 32 16 40 84 92 8 4 12 8 20 12
% controlled (not marked): 52 44 44 52 24 0 8 8 52 52 28 40 24 28
% unsystematic (v): 8 28 48 16 60 60 8 0 40 44 60 52 56 60

v = this variable varies unsystematically in the study. × = explicitly controlled for. A blank box means that the variable was not explicitly controlled for but that it did not vary unsystematically, or that it simply was not relevant for the objectives of the study. The number captions after the variable names refer to the respective sections in the paper.

* 1 = appropriateness of the action; 2 = is it right to . . .?; 3 = scale forbidden–permissible; 4 = open response format; 5 = agreement/disagreement; 6 = would you . . .?; 7 = which option is more wrong, morally?; 8 = to what extent is the action morally acceptable?; 9 = post experiment; 10 = should the protagonist of the story take action or not? (six-point Likert scale: definitely not, definitely yes).

** C = computerized task; PP = pen and paper task; W = web-based task.
*** B = behavioral study; N = neuroimaging study; L = lesion study.

**** 1 = 1st person perspective; 3 = 3rd person perspective.


4.3. Kinship/friendship

The kinship–friendship variable is related to the former and is grounded in our evolutionary history. Whereas the in/out group variable relates to social categories, the present variable states that harm directed towards family members, friends, or oneself, has a stronger impact than harm directed towards strangers (Petrinovich et al., 1993). A similar effect occurs if harm is directed towards someone we generally like, or towards someone we dislike (Miller and Bersoff, 1998). Consequently, many studies only include harm directed towards strangers in their dilemma formulation (Hauser et al., 2007; Pizarro et al., 2003; Royzman and Baron, 2002; Valdesolo and DeSteno, 2006). However, including friends or family members as part of the dilemma situation – and putting the protagonist's own life at stake – represents another experimental manipulation to account for in the analysis. The distinct moral judgments triggered in this way are explained by the implausible impersonality of Utilitarianism. This variable can be accounted for as the self–other beneficial distinction in moral dilemma classification.

4.4. Speciesism

The speciesism variable refers to the variation of the species towards which harm is directed. Petrinovich et al. (1993) discuss that drowning a dog to save five human individuals might seem less morally questionable to an experimental participant than drowning an innocent fellow human for the same objective. In some studies on moral judgment with moral dilemmas, the species variable varies unsystematically (Ciaramidaro et al., 2007; Koenigs et al., 2007; Royzman and Baron, 2002), leading to uncontrolled effects in the analysis. In most studies, though, this variable is not controlled for, but at least the species do not vary either within or between the dilemmas (Killgore et al., 2007; Nichols and Mallon, 2006; Valdesolo and DeSteno, 2006). The latter is probably a recommendable strategy if one does not want to study specifically how speciesism influences moral judgment.

5. Dilemma conceptualization

Once the more methodological parameters listed above are held constant, conceptual manipulations lead to the design of distinct moral dilemma types that test specific theoretical assumptions about human moral cognition. This section covers these key determinants of moral judgment discussed in Ethics, Neuroethics and Moral Philosophy. So far, empirical evidence is available for (i) the intentionality of the action, (ii) the kind of transgression depicted in the dilemma that leads to the harm of the third party, (iii) the directness with which it is inflicted, (iv) the trade-off of goods and wrongs for each alternative, (v) the degree of normality of the inflicted harm, and (vi) the certainty of events.

5.1. Intentionality

Following the work of Piaget (1932/1965), it is well known that our moral psychology is sensitive not only to the consequences of an action, but also to the intentions behind it (Piaget, 1932/1965). Accidental (or unintended) harm is not considered morally wrong, while intentional harm is; but also inevitable harm – intentional, but incurred as a side effect – is at times considered acceptable by experimental participants, as is intended harm if it is to produce a further benefit to the recipient. In general, harm that is the outcome of an intention to do good may be judged acceptable (Turiel


et al., 1987; Weiner and Peter, 1973; Zelazo et al., 1996). Thus, the pain a doctor may cause to a patient in her effort to cure her (as in chemotherapy, for instance) is deemed morally acceptable, even if the effort is not successful, while the same harm may be judged a moral transgression if it originates in the mind of a psychopath. What's more, intention is all that matters in cases of failed attempts of harm (Young and Saxe, 2009b).

Moral and legal theory have tried to work out concepts and distinctions to grasp these qualifications and intuitions. Some of the best-known doctrines were coined by Thomas Aquinas, such as the Doctrine of Doing and Allowing (DDA) (Quinn, 1989). It states that it takes more to justify harm derived from action than from inaction, as in the case of active vs. passive euthanasia, because such a difference makes a moral difference. People tend to consent to passive euthanasia (switching off the vital equipment) but not to the active version (for instance, giving the patient a lethal injection). This distinction was also acknowledged in the study by Waldmann and Dieterich (2007), which showed that participants are more sensitive to the consequences of action than to the consequences of inaction – an effect also called omission bias (Ritov, 1990). Waldmann and Dieterich (2007) proposed an agent–patient dichotomy, according to which action omission would cause an agent to die, whereas action commission would result in an innocent person's death (patient), who otherwise would not have died (Waldmann and Dieterich, 2007). The participants in this study were sensitive to this distinction, responding according to the principles of the DDA. The authors called this phenomenon intervention myopia. Similarly, Moore et al. (2008) alluded to the same principles when they introduced the avoidable–inevitable variable in their dilemmas.

On the other hand, the Doctrine of the Double Effect (DDE) (Foot, 1967) is the next landmark to consider. In this rationale, harm is justified (or forgiven to a certain extent) if it is a side effect that results as an inevitable consequence of an intentional action carried out for a better overall outcome, even if that side effect was foreseen and discounted. No wonder there is a lack of consensus in this regard; many people reject such doctrines (Foot's use of the trolley dilemma also targeted the DDE). However, empirical evidence suggests that they actually may be capturing intuitions of our moral psychology (Turiel et al., 1987; Weiner and Peter, 1973; Zelazo et al., 1996).

Thus, Hauser et al. (2007) contended that the DDE is part of our moral competence, pushing intentionality center-stage in the study of moral judgment. In this approach, the differences in moral judgment in the bystander vs. footbridge versions, beyond directness of harm, are determined by whether the harm is produced intentionally or as a non-desired side effect. Participants choose action omission when the moral transgression implies harm as a means, but choose action when the harm derived from the moral transgression is described as a non-intended side effect (Cushman et al., 2006; Hauser et al., 2007).

Finally, the importance of intentionality in moral dilemma formulation is also highlighted by the fact that the presence of such formulations in the text has been found to enhance neural activity in a moral judgment and Theory of Mind (ToM) related brain region, the right temporo-parietal junction (RTPJ) (Saxe and Powell, 2006). Thus, this brain region specifically responds to the inclusion of intention information in the formulation, as compared to when only other social information is included.

In summary, research with moral dilemmas bridges two interesting research questions related to the behavioral and neural effects triggered by different dimensions of intentionality: (i) whether intention backs the production of harm (intentional/instrumental vs. accidental/incidental), and (ii) whether harm occurs by doing or by allowing (action omission vs. action commission). Furthermore, here the dimensions of the dilemma outcome

and the relatedness of the story characters also become relevant. These dimensions are avoidable harm or inevitable harm, on the one hand, and the dimensions of the dilemma outcome, self-beneficial or other-beneficial, on the other.

5.2. Kind of transgression

The type of transgression described in a dilemma is a relevant variable that has been found to account for systematic variations in subsequent moral judgments (Burnstein et al., 1994). In research with moral dilemmas, the harm depicted in the dilemma is usually intended to be specifically moral, while conventional harm is better avoided. This grasps the rationale behind the use of moral dilemmas given that, without moral harm, there is no moral dilemma.

However, principled accounts of the kinds of moral transgressions – beyond the basic distinction between moral and conventional harm (Turiel, 1983) – remain scarce. The Domain Theory of Moral Development (Turiel, 1983) contends that the human mind responds to two main types of harm: morally questionable actions that imply physically harmful consequences for a victim, and violations of social conventions that do not have observable consequences other than the transgression of socially or culturally adopted norms or rules. Research has mainly focused on harming others as the core example of a moral transgression, but while it is surely not the only one, it is not easy to specify how many kinds can be differentiated.

Regarding different types of specifically moral harm, Shweder et al. (1997), for instance, proposed three basic moral domains (autonomy, community and divinity) and argued that harm towards these entities triggers differentiable moral judgments. Haidt and Graham (2007) fine-tune this distinction further in their proposal of the Five Foundations of Morality (Haidt and Graham, 2007). The five domains – or moral intuitions (Haidt, 2001) – are harm/care, fairness/reciprocity, ingroup loyalty, authority/respect and purity/sanctity (self dignity). According to this view, throughout evolutionary history, human moral cognition has become hard-wired to respond to transgressions of any of these five dimensions. Of course, individuals may vary as to how these intuitions have behavioral effects, depending on enculturation and social education. Nonetheless, Haidt and Graham suggest that these five domains constitute universals of the human moral mind and that harm directed towards any of them will elicit moral indignation and condemnation. To test this specifically in studies using moral dilemmas, we suggest approaching the Five Foundations Theory as a dimensional space and varying the "ingredients" of each possible harm domain in a controlled manner. See the conclusion of the present paper for details of this proposal.
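A dimensional design of the kind suggested here could, for instance, be laid out by fully crossing a harm-domain factor with other controlled parameters. The Python sketch below is only illustrative; the factor names and levels are placeholders rather than a validated stimulus set.

    # Minimal sketch of a factorial dilemma design: fully crossing a hypothetical
    # harm-domain factor (Five Foundations) with two further design parameters.
    from itertools import product

    foundations = ["harm/care", "fairness/reciprocity", "ingroup loyalty",
                   "authority/respect", "purity/sanctity"]
    intentionality = ["instrumental harm", "accidental/side-effect harm"]
    directness = ["personal force", "impersonal"]

    design = [
        {"foundation": f, "intentionality": i, "directness": d}
        for f, i, d in product(foundations, intentionality, directness)
    ]

    print(f"{len(design)} cells to be filled with matched dilemma texts")
    for cell in design[:3]:   # preview of the first few cells
        print(cell)

Each cell of such a grid would then receive dilemma texts matched on the remaining methodological parameters discussed in Section 3.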

However, whichever theory about the kinds of human moral intuitions turns out to be correct, for moral dilemma design it is simply important to keep in mind that the type of transgression matters. Thus, considering the theoretical panorama of possible harmful actions that can be formulated in moral dilemmas, it is a challenge for future testing to ascertain exactly to what extent our moral judgment is mediated by variations in these different harm domains.

5.3. Directness of harm

The first conceptual distinction made regarding the directness of harm variable was the personal–impersonal distinction by Greene et al. (2001, 2004). This consideration regarding the physical proximity between the agent and the produced harm was based on the Bystander/Footbridge versions of the trolley dilemma (Thomson, 1976). Personal moral dilemmas were defined as those that involve bodily harm, deflect an existing threat onto a different party and befall a particular person or member of a particular


group of people (Greene et al., 2001). However, this distinction was criticized for not being clear enough to thoroughly define a distinct feature of human moral cognition (Mikhail, 2007), and Greene himself later stressed the fact that this distinction was only a preliminary proposal of classification (Greene, 2009). It is noteworthy, however, that despite criticisms, there are studies that have elegantly shown how participants are sensitive to manipulations of this variable (Cushman et al., 2006; Pizarro et al., 2003).

However, more recently, Moore et al. (2008) reformulated the personal–impersonal distinction, based on a preliminary definition by Royzman and Baron (2002), as a function of the personal distance with which the harm is carried out, and found participants sensitive to this differentiation (Moore et al., 2008). As a consequence, Greene et al. (2009) introduced a re-shaping of their dilemmas in this respect and labeled the resultant variable personal force as a specific experimental factor. This variable now additionally reflects to what extent the agent's muscles and whole body are implied in the execution of the proposed harm (Moore et al., 2011a,b).

5.4. The trade-off

The trade-off variable in the dilemma formulation refers to the balance of gains and losses that each option produces (Cushman et al., 2006; Nichols and Mallon, 2006; Waldmann and Dieterich, 2007). It concerns how beneficial the alternative outcome has to be in order to justify committing (or passively consenting to the execution of) a moral transgression. In other words, how bad does the situation have to be for an individual to agree to the alternative harm situation taking place? In the standard dilemmas, the trade-off is one-to-five deaths, but it could be interesting to investigate whether there is a turning point (two to one?) where the judgments change. In the same vein, this variable tests how participants respond to the principles of the philosophical Doctrine of the Double Effect (DDE) or to the omission bias, reflected in the Doctrine of Doing and Allowing (DDA). It may become more complex when the set of gains/losses involved is not easily measurable or comparable (in the case of incommensurable goods).
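As an illustration of how such a turning point might be estimated – assuming endorsement rates have been collected at several trade-off levels – a logistic curve can be fitted to locate the point of 50% endorsement. The data points below are invented placeholders, and the fitting routine is simply scipy's generic least-squares curve fit, not a procedure reported in the reviewed studies.

    # Illustrative sketch only: estimating the trade-off "turning point" at which
    # endorsement of the harmful action reaches 50%, from made-up pilot proportions.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        """Probability of endorsing the transgression as a function of lives saved."""
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    lives_saved = np.array([1, 2, 3, 5, 10, 20])                   # trade-off levels (hypothetical)
    endorsement = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.90])   # invented proportions

    params, _ = curve_fit(logistic, lives_saved, endorsement, p0=[5.0, 0.5])
    x0, k = params
    print(f"Estimated turning point: endorsement crosses 50% at ~{x0:.1f} lives saved")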

5.5. Normality of harm

Depicted dilemma situations may vary in their degree of normality to a particular subject population. A combat situation might be familiar to a soldier, but not to an undergraduate psychology student. Similarly, an abortion is likely to be more a part of a young Western girl's world than the suffocation of a newborn baby to stop it crying in a situation of war where enemy soldiers are approaching your house (Bjorklund, 2003; Borg et al., 2006; Greene et al., 2009; O'Neill and Petrinovich, 1998).

One solution to this bias is to ask the participants to rate how "normal" the depicted dilemma situation appears to them (after judging it) and to use this information as a covariate in the data analysis, as did Greene et al. (2009). Another possible solution is to include an allusion to this problem in the instructions to the participants. O'Neill and Petrinovich (1998) and Greene et al. (2009) included a short clause explaining that some of the dilemmas might sound unreal to the participants but that, in spite of this, there were serious philosophical reasons to include them. Participants were asked to ignore this unrealistic nature of some of the dilemmas and to concentrate only on the judgment.
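A minimal sketch of the covariate strategy, assuming trial-level data with a post-judgment normality rating per dilemma; the file name, column names and the use of statsmodels' mixed-effects formula interface are our own illustrative assumptions, not the analysis pipeline of Greene et al. (2009):

    # Sketch: entering post-hoc "normality" ratings as a covariate in the analysis
    # of moral judgments, with participants as a random factor. The CSV file and
    # column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    trials = pd.read_csv("dilemma_trials.csv")   # one row per participant x dilemma

    # judgment: e.g. rated acceptability; condition: e.g. personal vs. impersonal;
    # normality: the participant's post-judgment rating of how familiar the scenario felt.
    model = smf.mixedlm("judgment ~ condition + normality",
                        data=trials,
                        groups=trials["participant"])
    result = model.fit()
    print(result.summary())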

A third possibility is to control for the normality of the dilemma situation from the outset, either keeping normality constant across the stimuli or varying it systematically. The latter solution is obviously not applicable in case the goal is to carry out a cross-cultural study, where "normality of harm" might be a rather relative variable. While cultural differences should be taken into account for all

the variables mentioned in this review, in this particular case it is especially pressing.

A fourth, maybe more holistic, way to address this challenge is to start out by focusing on transgressions that happen to be universally considered as such across different cultures, such as life-and-death transgressions (Hollos et al., 1986; Song et al., 1987). Obviously, the question about the "normality of harm" relates to a more general discussion about the ecological validity and justification of moral dilemmas: while some authors merely suggest that an effort should generally be made to enhance the ecological validity of stimuli in studies on moral cognition, without giving any specific suggestions about how this could be achieved (Casebeer, 2003), others express serious doubts about any use of moral dilemma sets in studies on moral judgment, specifically due to their heterogeneity in so many design variables (McGuire et al., 2009). We agree with the latter in that the design variables of moral dilemmas should be carefully controlled, and we believe this can be achieved; in fact, that is precisely the thrust of the present paper. The first remark about ecological validity, however, deserves a second look.

Ecological validity, yes, but of what kind? As pointed out, moral judgments of hypothetical moral dilemmas allow us to obtain valuable information about underlying psychological processes (Moore et al., 2011a,b), and we believe that, specifically, "far-away" situations involving serious moral transgressions are particularly valuable in serving that aim. They allow us to control for non-desired previous differential exposure effects – or to specifically control the level of previous exposure to extreme situations, and to investigate what effect this has on moral judgment. By choosing more extreme situations, more reliable conclusions can be drawn about the underlying moral intuitions that trigger the distinct moral judgments. We believe this is a valuable strategy for avoiding confounds, a position also acknowledged by other authors (Borg et al., 2006). Thus, dilemmas foster, rather than diminish, ecological validity.

To conclude this section, we articulate a methodological proposal which might help to round off this eternal discussion about the legitimacy of moral dilemmas as experimental stimuli. In line with the brief discussion on the use of moral dilemmas in the introductory section of the present paper, we propose to include some indications about the kind of stimuli in the instructions to the participants, to make them engage in the task more easily; for instance, by informing them that the dilemmas they are about to read are similar to those that are likely to appear as press releases on TV, in the newspaper, and on the radio, or which might be a plot in a novel or a movie: in the following you will read a series of short stories about difficult interpersonal situations, similar to those that we all see in the news every day or may read about in a novel . . ..

5.6. Certainty of events

In real-life scenarios, full certainty about what will happen, and whether or not a moral transgression would actually change the course of events, is the exception rather than the rule. Dilemma stories, however, are written in an explicit style that tries to block out any alternative course of action that could occur through unforeseen events and which would spare the participant from making the fateful choice. However, it is possible that the experimental participant still feels that the course of events as depicted is unlikely, in spite of such explicit inevitability formulations.

A possible strategy to cope with such uncertainty is to take it into account as a covariate in the data analysis, as the probability with which the participant believes that the moral transgression would prevent the first depicted harm from happening. Can the body of a person stop a trolley? Why not jump oneself, then? (As discussed in Subsection 4.3 regarding the kinship variable.) Why not try to


alert the potential victims to leave the track? Some studies have pointed out the relevance of this variable by asking their participants to rate this probability and incorporating it in their data analysis (Royzman and Baron, 2002), or again by merely including an allusion to this problem in the instructions, simply asking the participant to ignore it (Greene et al., 2009). We believe, however, that the certainty of events is a crucial variable to account for. It is very relevant for making a moral judgment and for the attribution of moral responsibility to know whether there are alternatives to the proposed course of action leading to the moral transgression. The moment there is uncertainty about the course of events, the objective of eliciting moral judgments based on the inevitability of a situation is corrupted. If an experimental participant sees a situation as inevitably leading to a moral transgression she might be more inclined to consent to it – a position acknowledged also in many legal systems: if committed harm was inevitable, the legal judgment of the action becomes less severe. However, if the participants are confused by the formulation of the dilemma because it leaves room for speculation about possible alternatives, they will be judging a different scenario, no matter how explicitly they are asked to ignore the alternatives. Hence, ideally, either such uncertainty in the formulation should be kept to a minimum, or it should be manipulated as another independent variable.
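One simple way of incorporating such believability ratings – sketched here with hypothetical column names and a crude split rather than any procedure reported by Royzman and Baron (2002) – is to compare judgments between trials the participant considered plausible and trials she did not:

    # Sketch: a sensitivity check using per-trial believability ratings (0-100)
    # collected after each dilemma. Data frame and column names are hypothetical.
    import pandas as pd

    trials = pd.read_csv("dilemma_trials.csv")   # one row per participant x dilemma

    plausible = trials[trials["believability"] >= 50]
    implausible = trials[trials["believability"] < 50]

    print("Mean endorsement, plausible trials:  ", plausible["endorsed"].mean())
    print("Mean endorsement, implausible trials:", implausible["endorsed"].mean())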

6. Schematic overview of 25 studies in relation to the 19 design parameters

This review has provided a discussion of 19 experimental design parameters that are relevant in moral dilemma tasks, both for behavioral and for neuroscientific approaches to moral psychology. Unfortunately, research to date has not been successful in manipulating them individually while controlling for the rest. Table 1 graphically summarizes which variables have received attention in a selection of 25 different studies that appeared in peer-reviewed journals, mainly in the past 10 years. In these 25 studies reported in the table (nine of which are reported twice because they belong to two distinct categories on the vertical axis) we observe the following distribution.
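For readers who wish to reproduce this kind of tally on their own corpus of studies, the sketch below counts, for each design parameter, how many studies explicitly control it (×), leave it unmarked, or let it vary unsystematically (v). The three coded rows are invented stand-ins, not a transcription of Table 1.

    # Sketch: tallying how often each design parameter is explicitly controlled (x),
    # left unmarked (blank), or varies unsystematically (v) across a coded study table.
    # The example rows are placeholders, not the actual Table 1 codes.
    import pandas as pd

    coding = pd.DataFrame(
        {
            "intentionality": ["x", "v", ""],
            "trade_off": ["x", "x", "v"],
            "in_outgroup": ["v", "", "v"],
        },
        index=["Study A", "Study B", "Study C"],
    )

    for parameter in coding.columns:
        counts = coding[parameter].value_counts()
        n = len(coding)
        print(parameter,
              f"explicitly controlled: {counts.get('x', 0) / n:.0%},",
              f"unmarked: {counts.get('', 0) / n:.0%},",
              f"varies: {counts.get('v', 0) / n:.0%}")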

In the methodology section, there is a striking heterogeneity in terms of the question type used across the studies. Among the 25 studies, there are at least 10 different question types. The presentation format (see Subsection 3.1) is mainly computerized (72% of the studies), followed by the pen-and-paper versions (20%) and the web-based modality (8%). Fifty-six percent of the studies are behavioral, 32% are neuroimaging, and 12% are lesion studies. As to the participant perspective, 56% of the studies used the first person perspective, 36% the third person perspective, and 8% combined the two (see Subsection 3.5 for a discussion of the different perspectives).

In relation to the remaining variables, we may summarize some of the most important issues as follows.

Seven of the 25 studies included justifications as a part of their experimental paradigm (Subsection 3.9). Less than half of the studies controlled explicitly for the variable of intentionality (40%), in spite of the importance of this variable in moral judgment formation (see Subsection 5.1). However, most of the other studies have simply avoided any reference to intentions in their dilemma formulation. Conversely, the directness of harm variable was the one most systematically controlled for (Subsection 5.3): ninety-two percent of the studies take this variable into account. The trade-off has also received much attention: sixty-four percent of the studies control explicitly for this variable, but 28% of the studies fail to control for it, possibly biasing their results (see Subsection 5.4). The type of transgression varies unsystematically in 40% of the studies, whereas 60% control for it. Forty percent is a lot, considering that this means that the authors have been comparing judgments of social norm

transgressions such as lying, stealing, etc. with judgments of serious bodily harm such as rape, injury or killing (see Subsection 5.2 for a discussion of the difference between the types of harm). The kin-/friendship variable varies unsystematically in 56% of the studies, which is worrying considering that the 16% of studies that systematically

control for this variable consistently report how important it is for moral decision-making, as individuals appear to favor rescuing themselves, their families and friends if they have the choice between them and strangers (see Subsection 4.3). A similarly preoccupying pattern emerges for the related variable of in-/outgroup: in 60% of the studies it varies unsystematically and only 16% control for it (Subsection 4.2). Furthermore, in spite of their relevance (see Subsections 5.5 and 5.6), the variables of certainty of events and normality of situations are hardly controlled for. It is also striking how few studies have controlled for the word number count (60% did not) or the expression style (60% did not) (see Subsections 3.4 and 3.2, respectively).

7. Conclusion and discussion

7.1. Conclusion

Our main conclusion derived from this review is that using moral dilemmas in Neuroethics has much to contribute to our understanding of human moral psychology. We advocate that a moral dilemma should be understood as an experimental stimulus with a clear structure composed of a series of design parameters – independent variables – that have to be identified and then carefully controlled for in all empirical studies of moral judgment using this methodology.

We have specifically not aimed to review the neuroscientific findings of studies on moral cognition. For this, we refer the interested reader to a series of very thoroughly crafted reviews already available in the literature (Casebeer, 2003; Dean, 2009; Greene and Haidt, 2002; Hauser and Young, 2008; Moll et al., 2003; Moll and Schulkin, 2009; Woodward and Allman, 2007; Young and Koenigs, 2007). Rather, we have offered a principled account of the issues involved in using this methodology, identifying a number of questions which have barely received experimental attention, as well as others for which somewhat firmer empirical grounds are available. For progress at the theoretical level, the currently most articulated proposals in this area – the Dual Process Hypothesis of Moral Judgment (DPHMJ) (Greene et al., 2001), the Motivational Approach (Moll et al., 2008a,b), and the Five Foundations Account (Haidt and Graham, 2007) – can be developed in this regard, to see how they fare when this framework of parameters is taken into account, and whether they make different predictions on particular issues that can then be put to the test.

7.2. Future directions: a theoretical framework for working hypotheses

The Dual Process Hypothesis of Moral Judgment (DPHMJ) is an example of a model of human morality that attempts to provide an account of what it is that makes us diverge in our moral judgments (Greene, 2008; Greene et al., 2001). It posits that both top-down reasoning processes of a more cognitive nature and bottom-up emotionally triggered processes interact in moral judgment formation. However, this appears as a rather too dichotomic view of the processes involved, in view of the evidence reported throughout this review and the wealth of contextual and conceptual variables whose importance in moral judgment formation is undeniable. The DPHMJ model does not explicitly consider these variables. It thus ignores important psychological and social processes. Nevertheless, the DPHMJ has provided a valuable


starting point for building an integrative model of human moral judgment.

In 2008, Moll and colleagues proposed a Motivational Approach to how moral intuitions, moral emotions and values might be interacting to motivate the unique phenomenon of morality in human cognition. One of their main claims is that our social behavior (or morality) is determined by biological predispositions (intuitions) that trigger moral emotions – which are of key importance for modulating subsequent behaviors. These predispositions evolved during human evolutionary history as motivational forces to foster prosocial behavior, paralleling the demands of a life in increasingly large social groups (Moll et al., 2008a,b). While these authors do not specify in detail what these biological predispositions might be, considering the work by Haidt and colleagues may complement the picture. The Foundations approach to human morality (Haidt and Graham, 2007) suggests five such basic predispositions: harm/care, fairness/justice, ingroup loyalty, authority/respect, purity/sanctity. According to this approach, through a complex interplay of social, cognitive and emotional processes, each of these "foundations" conditions human moral behavior, and a judgment that an individual makes – in psychological terms – is also a type of behavior. Accordingly, these three models have the potential to serve as a theoretical framework for future studies on moral judgment with moral dilemmas. In the foundations view, transgressions towards any of the five intuitions trigger moral emotions in the individual, and, according to the motivational view, these will motivate one type of behavior (here: judgment) or another. The judgment will depend on a variety of parameters of the situation – as discussed in this review – which again are thoroughly covered by the foundations view. In consequence, we believe that the Five Foundations Model of morality and the Motivational Approach, interwoven with available evidence on emotions and moral emotions (for instance, provided by the DPHMJ), and on how they interact to condition behavior, constitute a valuable theoretical framework for future studies on moral judgment.

Based on the evidence derived from studies with moral dilemmas that vary in the methodological and conceptual design parameters discussed in this review, we suggest that moral dilemmas, specifically, are a highly valuable tool for assessing human moral cognition. It is the possibility of varying so systematically the variables implied in a situation that makes moral dilemmas such a profitable experimental paradigm in empirical and neuroscientific studies of moral psychology. We hope that this review and the suggestions based on it will make it possible to fully exploit that value in future research.

Acknowledgements

The study was funded by the research project FFI2010-20759 (Spanish Ministry of Science and Innovation), and by the Chair of the Three Religions (Government of the Balearic Islands) of the University of the Balearic Islands, Spain. Julia Frimodt Christensen was supported by an FPU PhD scholarship from the Spanish Ministry of Education (AP2009-2889). A special thank you goes to Dr Marcus Pearce and Dr Marcos Nadal for very helpful comments on previous drafts of the manuscript. We would also like to thank the referees for their very valuable comments.

References

Bartels, D.M., 2008. Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition 108 (2), 381–417, doi:10.1016/j.cognition.2008.03.001.
Berthoz, S., Armony, J.L., Blair, R.J.R., Dolan, R.J., 2002. An fMRI study of intentional and unintentional (embarrassing) violations of social norms. Brain 125, 1696–1708.


Berthoz, S., Grezes, J., Armony, J.L., Passingham, R.E., Dolan, R.J., 2006. Affective response to one's own moral violations. NeuroImage 31 (2), 945–950, doi:10.1016/j.neuroimage.2005.12.039.
Bjorklund, F., 2003. Differences in the justification of choices in moral dilemmas: effects of gender, time pressure and dilemma seriousness. Scandinavian Journal of Psychology 44 (5), 459–466.
Borg, J.S., Hynes, C., Van Horn, J., Grafton, S., Sinnott-Armstrong, W., 2006. Consequences, action, and intention as factors in moral judgments: an fMRI investigation. Journal of Cognitive Neuroscience 18 (5), 803–817.
Burnstein, E., Crandall, C., Kitayama, S., 1994. Some neo-darwinian decision rules for altruism – weighing cues for inclusive fitness as a function of the biological importance of the decision. Journal of Personality and Social Psychology 67 (5), 773–789.
Cacioppo, J.T., Petty, R.E., Kao, C.F., 1984. The efficient assessment of need for cognition. Journal of Personality Assessment 48 (3), 306–307, doi:10.1207/s15327752jpa4803_13.
Casebeer, W.D., 2003. Moral cognition and its neural constituents. Nature Reviews Neuroscience 4 (10), 840–846, doi:10.1038/nrn1223.
Ciaramidaro, A., Adenzato, M., Enrici, I., Erk, S., Pia, L., Bara, B.G., et al., 2007. The intentional network: how the brain reads varieties of intentions. Neuropsychologia 45 (13), 3105–3113, doi:10.1016/j.neuropsychologia.2007.05.011.
Cikara, M., Farnsworth, R.M., Harris, L.T., Fiske, S.T., 2010. On the wrong side of the trolley track: neural correlates of relative social valuation. Social Cognitive and Affective Neuroscience 5 (4), 404–413, doi:10.1093/scan/nsq011.
Crowne, D.P., Marlowe, D., 1960. A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology 24 (4), 349–354, doi:10.1037/h0047358.
Cushman, F., 2008. Crime and punishment: distinguishing the roles of causal and intentional analyses in moral judgment. Cognition 108 (2), 353–380, doi:10.1016/j.cognition.2008.03.006.
Cushman, F., Young, L., Hauser, M., 2006. The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychological Science 17 (12), 1082–1089.
Cvencek, D., Greenwald, A.G., Brown, A.S., Gray, N.S., Snowden, R.J., 2009. Faking of the implicit association test is statistically detectable and partly correctable. Basic and Applied Social Psychology 32 (4), 302–314, doi:10.1080/01973533.2010.519236.
Damasio, A.R., 1995. Consciousness – knowing how, knowing where. Nature 375 (6527), 106–107.
Dean, R., 2009. Does neuroscience undermine deontological theory? Neuroethics 3 (1), 43–60, doi:10.1007/s12152-009-9052-x.
Edwards, A.L., Horst, P., 1953. Social desirability as a variable in l technique studies. Educational and Psychological Measurement 13 (4), 620–625, doi:10.1177/001316445301300409.
Farrer, C., Frith, C.D., 2002. Experiencing oneself vs another person as being the cause of an action: the neural correlates of the experience of agency. NeuroImage 15 (3), 596–603, doi:10.1006/nimg.2001.1009.
Ferstl, E.C., Neumann, J., Bogler, C., von Cramon, D.Y., 2008. The extended language network: a meta-analysis of neuroimaging studies on text comprehension. Human Brain Mapping 29 (5), 581–593, doi:10.1002/hbm.20422.
Finger, E.C., Marsh, A.A., Kamel, N., Mitchell, D.G.V., Blair, J.R., 2006. Caught in the act: the impact of audience on the neural response to morally and socially inappropriate behavior. NeuroImage 33 (1), 414–421, doi:10.1016/j.neuroimage.2006.06.011.
Fiske, S.T., Cuddy, A.J.C., Glick, P., 2007. Universal dimensions of social cognition: warmth and competence. Trends in Cognitive Sciences 11 (2), 77–83, doi:10.1016/j.tics.2006.11.005.
Fiske, S.T., Cuddy, A.J.C., Glick, P., Xu, J., 2002. A model of (often mixed) stereotype content: competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology 82 (6), 878–902, doi:10.1037//0022-3514.82.6.878.
Foot, P., 1978. The problem of abortion and the doctrine of the double effect. In: Reprinted in Virtues and Vices and Other Essays in Moral Philosophy. Blackwell, Oxford, pp. 19–32.
Franklin, S.A., McNally, R.J., Riemann, B.C., 2009. Moral reasoning in obsessive-compulsive disorder. Journal of Anxiety Disorders 23 (5), 575–577, doi:10.1016/j.janxdis.2008.11.005.
Fumagalli, M., Ferrucci, R., Mameli, F., Marceglia, S., Mrakic-Sposta, S., Zago, S., et al., 2009. Gender-related differences in moral judgments. Cognitive Processing 11 (3), 219–226, doi:10.1007/s10339-009-0335-2.
Fumagalli, M., Vergari, M., Pasqualetti, P., Marceglia, S., Mameli, F., Ferrucci, R., et al., 2010. Brain switches utilitarian behavior: does gender make the difference? PLoS One 5 (1), doi:10.1371/journal.pone.0008865.
Gomila, A., 2007. Suicide terrorists, neuroscience, and morality: taking complexities into account. In: Vilarroya, O., Forn i Argimon, F. (Eds.), Social Brain Matters, 1st ed. Rodopi, Amsterdam.
Gomila, A., Calvo, P., 2010. Understanding brain circuits and their dynamics. Behavioral and Brain Sciences 33, 274–275.
Graham, J., Haidt, J., Nosek, B.A., 2009. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96 (5), 1029–1046, doi:10.1037/a0015141.
Greene, J., 2008. The secret joke of Kant's soul. In: Sinnott-Armstrong, W. (Ed.), Moral Psychology, Vol. 3. MIT Press, Cambridge, Massachusetts; London, England, pp. 35–80.


Greene, J.D., 2009. Dual-process morality and the personal/impersonal distinction: a reply to McGuire, Langdon, Coltheart, and Mackenzie. Journal of Experimental Social Psychology 45 (3), 581–584, doi:10.1016/j.jesp.2009.01.003.
Greene, J., Haidt, J., 2002. How (and where) does moral judgment work? Trends in Cognitive Sciences 6 (12), 517–523.
Greene, J., Sommerville, B., Nystrom, L., Darley, J., Cohen, J., 2002. Cognitive and affective conflict in moral judgment. Journal of Cognitive Neuroscience, 49.
Greene, J.D., Cushman, F.A., Stewart, L.E., Lowenberg, K., Nystrom, L.E., Cohen, J.D., 2009. Pushing moral buttons: the interaction between personal force and intention in moral judgment. Cognition 111 (3), 364–371, doi:10.1016/j.cognition.2009.02.001.
Greene, J.D., Morelli, S.A., Lowenberg, K., Nystrom, L.E., Cohen, J.D., 2008. Cognitive load selectively interferes with utilitarian moral judgment. Cognition 107 (3), 1144–1154, doi:10.1016/j.cognition.2007.11.004.
Greene, J.D., Nystrom, L.E., Engell, A.D., Darley, J.M., Cohen, J.D., 2004. The neural bases of cognitive conflict and control in moral judgment. Neuron 44 (2), 389–400.
Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M., Cohen, J.D., 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293 (5537), 2105–2108.
Greenwald, A.G., McGhee, D.E., Schwartz, J.L.K., 1998. Measuring individual differences in implicit cognition: the implicit association test. Journal of Personality and Social Psychology 74 (6), 1464–1480, doi:10.1037/0022-3514.74.6.1464.
Greenwald, A.G., Nosek, B., Banaji, M.R., 2003. Understanding and using the implicit association test: 1. An improved scoring algorithm. Journal of Personality and Social Psychology 85, 197–216.
Grice, H.P., 1975. Logic and conversation. In: Cole, P., Morgan, J.L. (Eds.), Syntax and Semantics 3: Speech Acts. Academic Press, New York.
Haidt, J., 2001. The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review 108 (4), 814–834, doi:10.1037//0033-295x.108.4.814.
Haidt, J., 2003. Elevation and the positive psychology of morality. In: Keyes, C.L., Haidt, J. (Eds.), Flourishing: Positive Psychology and the Life Well-Lived. American Psychological Association, Washington, DC, pp. 275–289.
Haidt, J., Graham, J., 2007. When morality opposes justice: conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20 (1), 98–116.
Hashimoto, R., Sakai, K.L., 2002. Specialization in the left prefrontal cortex for sentence comprehension. Neuron 35 (3), 589–597, doi:10.1016/s0896-6273(02)00788-2.
Hauser, M., Cushman, F., Young, L., Jin, R.K.X., Mikhail, J., 2007. A dissociation between moral judgments and justifications. Mind & Language 22 (1), 1–21.
Hauser, M.D., Young, L., 2008. Modules, minds and morality. In: Pfaff, D., Kordon, C., Chanson, P., Christen, Y. (Eds.), Hormones and Social Behavior, pp. 1–11.
Heekeren, H.R., Wartenburger, I., Schmidt, H., Schwintowski, H.P., Villringer, A., 2003. An fMRI study of simple ethical decision-making. Neuroreport 14 (9), 1215–1219, doi:10.1097/01.wnr.0000081878.45938.a7.
Hewstone, M., Rubin, M., Willis, H., 2002. Intergroup bias. Annual Review of Psychology 53, 575–604.
Hofmann, W., Baumert, A., 2010. Immediate affect as a basis for intuitive moral judgement: an adaptation of the affect misattribution procedure. Cognition & Emotion 24 (3), 522–535, doi:10.1080/02699930902847193.
Hollos, M., Leis, P., Turiel, E., 1986. Social reasoning in Ijo children and adolescents in Nigerian communities. Journal of Cross-Cultural Psychology 17 (3), 352–374, doi:10.1177/0022002186017003007.
Karniol, R., 1978. Children's use of intention cues in evaluating behavior. Psychological Bulletin 85, 76–85.
Keasey, C.B., 1978. Children's developing awareness and usage of intentionality and motive. In: Keasey, C.B. (Ed.), Nebraska Symposium on Motivation, Vol. 25. University of Nebraska Press, Lincoln, pp. 219–260.
Killgore, W.D.S., Killgore, D.B., Day, L.M., Li, C., Kamimori, G.H., Balkin, T.J., 2007. The effects of 53 hours of sleep deprivation on moral judgment. Sleep 30 (3), 345–352.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., et al., 2007. Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446 (7138), 908–911, doi:10.1038/nature05631.
Kohlberg, L., 1964. Development of moral character and moral ideology. In: Hoffman, M.L., Hoffman, L.W. (Eds.), Review of Child Development Research, Vol. 1. Russell Sage Foundation, New York.
Kraut, R., Olson, J., Banaji, M.R., Cohen, J., Cooper, M., 2004. Psychological research online. American Psychologist 59 (2), 105–117, doi:10.1037/0003-066x.59.2.105.
Lanteri, A., Chelini, C., Rizzello, S., 2008. An experimental investigation of emotions and reasoning in the trolley problem. Journal of Business Ethics 83 (4), 789–804, doi:10.1007/s10551-008-9665-8.
Lombrozo, T., 2009. The role of moral commitments in moral judgment. Cognitive Science 33 (2), 273–286, doi:10.1111/j.1551-6709.2009.01013.x.
Love, T., Haist, F., Nicol, J., Swinney, D., 2006. A functional neuroimaging investigation of the roles of structural complexity and task-demand during auditory sentence processing. Cortex 42 (4), 577–590, doi:10.1016/s0010-9452(08)70396-4.
McGuire, J., Langdon, R., Coltheart, M., Mackenzie, C., 2009. A reanalysis of the personal/impersonal distinction in moral psychology research. Journal of Experimental Social Psychology 45 (3), 577–580, doi:10.1016/j.jesp.2009.01.002.
Mikhail, J., 2007. Universal moral grammar: theory, evidence and the future. Trends in Cognitive Sciences 11 (4), 143–152, doi:10.1016/j.tics.2006.12.007.


iller, J.G., Bersoff, D.M., 1998. The role of liking in perceptions of the moral respon-sibility to help: a cultural perspective. Journal of Experimental Social Psychology34 (5), 443–469.

oll, J., de Oliveira-Souza, R., Bramati, I.E., Grafman, J., 2002. Functional networks inemotional moral and nonmoral social judgments. NeuroImage 16 (3), 696–703,doi:10.1006/nimg.2002.1118.

oll, J., de Oliveira-Souza, R., Eslinger, P.J., 2003. Morals and thehuman brain: a working model. Neuroreport 14 (3), 299–305,doi:10.1097/01.wnr.0000057866.05120.28.

oll, J., de Oliveira-Souza, R., Moll, F.T., Ignacio, F.A., Bramati, I.E., Caparelli-Daquer,E.M., et al., 2005. The moral affiliations of disgust – a functional MRI study.Cognitive and Behavioral Neurology 18 (1), 68–78.

oll, J., de Oliveira-Souza, R., Zahn, R., 2008a. The neural basis of moral cognition– sentiments, concepts, and values. Year in Cognitive Neuroscience 2008 1124,161–180, doi:10.1196/annals.1440.005.

oll, J., Oliveira-Souza, R., Zahn, R., Grafman, J., 2008b. The cognitive neuroscienceof moral emotions. In: Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3. MITPress, Cambridge, Massachusetts; London, England, pp. 1–17.

oll, J., Schulkin, J., 2009. Social attachment and aversion in human moralcognition. Neuroscience and Biobehavioral Reviews 33 (3), 456–465,doi:10.1016/j.neubiorev.2008.12.001.

oore, A.B., Clark, B.A., Kane, M.J., 2008. Who shalt not kill? Individual differencesin working memory capacity, executive control, and moral judgment. Psycho-logical Science 19 (6), 549–557.

oore, A.B., Lee, N.Y.L., Clark, B.A.M., Conway, A.R.A., 2011a. In defence of thepersonal/impersonal distinction in moral psychology research: cross-culturalvalidation of the dual process model of moral judgment. Judgment and DecisionMaking 6 (3), 186–195.

oore, A.B., Stevens, J., Conway, A.R.A., 2011b. Individual differences in sensitivityto reward and punishment predict moral judgment. Personality and IndividualDifferences 50 (5), 621–625, doi:10.1016/j.paid.2010.12.006.

elson, T.D. (Ed.), 2006. The Psychology of Prejudice. , 2nd ed. Allyn & Bacon, Bosten.ichols, S., Knobe, J., 2007. Moral responsibility and determinism: the cognitive

science of folk intuitions. Nous 41 (4), 663–685, doi:10.1111/j. 1468-0068.2007.00666.x.

ichols, S., Mallon, R., 2006. Moral dilemmas and moral rules. Cognition 100 (3),530–542, doi:10.1016/j.cognition.2005.07.005.

osek, B., Banaji, M.R., Greenwarld, A.G., 2002. Harvesting intergroup attitudes andstereotypes from a demostration website. Group Dynamics 6, 101–115.

’Hara, R.E., Sinnott-Armstrong, W., Sinnott-Armstrong, N.A., 2010. Wording effectsin moral judgments. Judgment and Decision Making 5 (7), 547–554.

’Neill, P., Petrinovich, L., 1998. A preliminary cross-cultural study of moral intu-itions. Evolution and Human Behavior 19 (6), 349–367.

Peelle, J.E., McMillan, C., Moore, P., Grossman, M., Wingfield, A., 2004. Dissociable patterns of brain activity during comprehension of rapid and syntactically complex speech: evidence from fMRI. Brain and Language 91 (3), 315–325, doi:10.1016/j.bandl.2004.05.007.

Petrinovich, L., O’Neill, P., 1996. Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology 17 (3), 145–171.

Petrinovich, L., O’Neill, P., Jorgensen, M., 1993. An empirical study of moral intuitions – toward an evolutionary ethics. Journal of Personality and Social Psychology 64 (3), 467–478.

Phillips, S.T., Ziller, R.C., 1997. Toward a theory and measure of the nature of nonprejudice. Journal of Personality and Social Psychology 72, 420–432.

Piaget, J. (Ed.), 1932/1965. The Moral Judgment of the Child. Free Press, New York.

Pizarro, D.A., Uhlmann, E., Bloom, P., 2003. Causal deviance and the attribution of moral responsibility. Journal of Experimental Social Psychology 39 (6), 653–660, doi:10.1016/s0022-1031(03)00041-6.

Quinn, W., 1989. Actions, intentions and the doctrine of doing and allowing. Philosophical Review 98, 287–312.

Rabbie, J.M., 1981. The effects of intergroup competition and cooperation on intra- and intergroup relationships. In: Grzelak, J., Derlega, V. (Eds.), Living with Other People: Theory and Research on Cooperation and Helping. Academic Press, New York.

Richman, W.L., Kiesler, S., Weisband, S., Drasgow, F., 1999. A meta-analytic study of social desirability distortion in computer-administered questionnaires, traditional questionnaires, and interviews. Journal of Applied Psychology 84 (5), 754–775, doi:10.1037/0021-9010.84.5.754.

Ritov, I., Baron, J., 1990. Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making 3, 263–277.

Royzman, E., Baron, J., 2002. The preference for indirect harm. Social Justice Research 15 (2), 165–184.


Sabin, J.A., Nosek, B.A., Greenwald, A.G., Rivara, F.P., 2009. Physicians’ implicit and explicit attitudes about race by MD race, ethnicity, and gender. Journal of Health Care for the Poor and Underserved 20 (3), 896–913.

Saxe, R., Powell, L.J., 2006. It’s the thought that counts: specific brain regions for one component of theory of mind. Psychological Science 17 (8), 692–699, doi:10.1111/j.1467-9280.2006.01768.x.

Schnabel, K., Asendorpf, J.B., Greenwald, A.G., 2008a. Assessment of individual differences in implicit cognition. A review of IAT measures. European Journal of Psychological Assessment 24 (4), 210–217, doi:10.1027/1015-5759.24.4.210.

Schnabel, K., Asendorpf, J.B., Greenwald, A.G., 2008b. Using implicit association tests for the assessment of implicit personality self-concepts. In: Boyle, G.J., Matthews, G., Saklofske, D.H. (Eds.), Handbook of Personality Theory and Testing. Sage, London, pp. 508–528.

Shaver, K.G. (Ed.), 1985. The Attribution of Blame: Causality, Responsibility, and Blameworthiness. Springer-Verlag, New York.

Shweder, R.A., Much, N.C., Mahapatra, M., Park, L., 1997. The ‘Big Three’ of morality (autonomy, community, divinity) and the ‘Big Three’ explanations of suffering. In: Brandt, A.M., Rozin, P. (Eds.), Morality and Health. Routledge, Oxford, pp. 119–169.

Song, M., Smetana, J., Kim, S., 1987. Korean children’s conceptions of moral and conventional transgressions. Developmental Psychology 23, 577–582.

Sorensen, R.A., 1988. Blindspots. Oxford University Press, Oxford.

Styron, W. (Ed.), 1979. Sophie’s Choice. Random House, New York.

Takahashi, H., Yahata, N., Matsuda, T., Asai, K., Okubo, Y., 2004. Brain activation associated with evaluative processes of guilt and embarrassment: an fMRI study. NeuroImage 23, 967–974.

Takezawa, M., Gummerum, M., Keller, M., 2006. A stage for the rational tail of the emotional dog: roles of moral reasoning in group decision making. Journal of Economic Psychology 27 (1), 117–139, doi:10.1016/j.joep.2005.06.012.

Tangney, J.P., Stuewig, J., Mashek, D.J., 2007. Moral emotions and moral behavior. Annual Review of Psychology 58, 345–372, doi:10.1146/annurev.psych.56.091103.070145.

Thaler, R., 2008. Mental accounting and consumer choice. Marketing Science 27, 15–25.

Thomson, J.J., 1976. Killing, letting die, and the trolley problem. The Monist 59, 204–217.

Turiel, E. (Ed.), 1983. The Development of Social Knowledge: Morality & Convention. Cambridge University Press, New York.

Turiel, E., Killen, M., Helwig, C., 1987. Morality: its structure, functions, and vagaries. In: Kagan, J., Lamb, S. (Eds.), The Emergence of Morality in Young Children. University of Chicago Press, Chicago, pp. 155–243.

Tversky, A., Kahneman, D., 1981. The framing of decisions and the psychology of choice. Science 211 (4481), 453–458, doi:10.1126/science.7455683.

Valdesolo, P., DeSteno, D., 2006. Manipulations of emotional context shape moral judgment. Psychological Science 17 (6), 476–477.

Waldmann, M.R., Dieterich, J.H., 2007. Throwing a bomb on a person versus throwing a person on a bomb – intervention myopia in moral intuitions. Psychological Science 18 (3), 247–253.

Wang, X.T., 1996. Evolutionary hypotheses of risk-sensitive choice: age differences and perspective change. Ethology and Sociobiology 17 (1), 1–15.

Weiner, B., Peter, N., 1973. A cognitive-developmental analysis of achievement and moral judgments. Developmental Psychology 9, 290–309.

Williams, B., 1973. A critique of utilitarianism. In: Smart, J.J.C., Williams, B. (Eds.), Utilitarianism: For and Against. Cambridge University Press, Cambridge.

Woodward, J., Allman, J., 2007. Moral intuition: its neural substrates and normative significance. Journal of Physiology-Paris 101 (4–6), 179–202, doi:10.1016/j.jphysparis.2007.12.003.

Young, L., Koenigs, M., 2007. Investigating emotion in moral cognition: a review of evidence from functional neuroimaging and neuropsychology. British Medical Bulletin 84, 69–79, doi:10.1093/bmb/ldm031.

Young, L., Saxe, R., 2008. The neural basis of belief encoding and integration in moral judgment. NeuroImage 40 (4), 1912–1920, doi:10.1016/j.neuroimage.2008.01.057.

Young, L., Saxe, R., 2009a. An fMRI investigation of spontaneous mental state inference for moral judgment. Journal of Cognitive Neuroscience 21 (7), 1396–1405.

Young, L., Saxe, R., 2009b. Innocent intentions: a correlation between forgiveness for accidental harm and neural activity. Neuropsychologia 47 (10), 2065–2072, doi:10.1016/j.neuropsychologia.2009.03.020.

Zahn, R., Moll, J., Paiva, M., Garrido, G., Krueger, F., Huey, E.D., et al., 2009. The neural basis of human social values: evidence from functional MRI. Cerebral Cortex 19 (2), 276–283, doi:10.1093/cercor/bhn080.

Zelazo, P.D., Helwig, Ch.C., Lau, A., 1996. Intention, act, and outcome in behavioral prediction and moral judgment. Child Development 67, 2478–2492.

