
BOOK REVIEW

Mind the gap(s): sociality, morality, and oxytocin

Patricia Churchland—Braintrust: what neuroscience tells us about morality. Princeton University Press, 2011, 288 pp, ISBN-13: 978-0691156347

Benjamin James Fraser

Received: 9 July 2013 / Accepted: 20 July 2013 / Published online: 3 September 2013

© Springer Science+Business Media Dordrecht 2013

Patricia Churchland pursues two aims in Braintrust. The first is descriptive: to give an account of the neural underpinnings of moral behaviour. The second is practical: to draw on that account for insight into current moral problems. While pursuing these aims, Churchland provides an accessible and engaging introduction not only to current neuroscience but to traditional and still vital debates in moral philosophy as well. Braintrust is admirably interdisciplinary and integrative in approach, and Churchland is optimistic and enthusiastic about the prospects for moving from experimental work toward an understanding of real-world human moral systems and issues. At the same time, she is sensibly restrained. She avoids—indeed, argues effectively against—‘gene-for-x’ foolishness, and clearly appreciates the need for care when bringing empirical studies to bear on philosophical questions. Her descriptions of neatly designed experiments and marvellously complex neurophysiology are delightfully detailed. And the big picture she paints is ambitious and inspiring if admittedly incomplete: a work-in-progress connecting ethics with neuroscience. Researchers interested in cooperation, moral psychology, and empirically-informed metaethics could happily and rewardingly immerse themselves in Braintrust.

The first part of the book, Chaps. 1 and 2, outlines Churchland’s project and specifies her explanatory target. Chapters 3–6, which comprise the second part, lay out the neural platform for moral behaviour. The third part, Chaps. 7 and 8, takes a neuroscientifically-informed look at moral disagreement and the relationship between religion and morality. The overall message of the book is that human morality is grounded in evolutionarily ancient and widely-shared mammalian affective systems—the hormone oxytocin is an especially important character in Churchland’s story about the evolution of morality—and that neuroscience can thus contribute significantly to our understanding of morality.

B. J. Fraser (✉)

Australian National University, Canberra, ACT, Australia

e-mail: [email protected]


Biol Philos (2014) 29:143–150

DOI 10.1007/s10539-013-9395-x


This review will have relatively little to say about the second part of Braintrust. There, Churchland describes the neural platform for moral behaviour. The platform comprises, she claims, four interlocking brain processes: caring (Chaps. 3 and 4), learning (Chap. 5), and mind-reading and social problem-solving (Chap. 6). “Morality,” Churchland tells us, “originates in the neurobiology of attachment” (p. 71). Organisms value their own well-being, and in social species, the neural and hormonal mechanisms subserving self-care have been recruited, elaborated, and integrated with other capacities to allow caring about the well-being of others: offspring, extended kin, mates, in-group members, and even (sometimes) strangers. As noted above, a central character in Churchland’s story about how this elaboration from self-care to sociality has occurred is the hormone oxytocin. Churchland goes to some lengths to defend the idea that oxytocin has been exapted from its role in offspring care to serve as the “hub” (p. 14) for a network of mammalian adaptations for sociality. Her strategy is to draw on data from experimental neuro-economics to demonstrate the effect of increased oxytocin levels on trust, cooperation, in-group/out-group dynamics, and theory-of-mind skills (pp. 71–81). The experiments Churchland describes are ingenious, their results intriguing, and she makes a convincing case for a connection between oxytocin and cooperation.

This is, really, the core of the book, so a reviewer’s neglect of it requires explanation. The explanation is simple. Here, Churchland is on her firmest ground, and the material is well-presented and hard to fault. I can only recommend it to anyone interested in learning about the neuroscience of cooperation.

This review will focus instead on parts one and three of Braintrust, where Churchland sets up her project and characterizes her explanatory target, and where she considers the implications of her empirical data for moral philosophy and for real-world moral problems. There is much in these parts of the book that is interesting and worthy of comment. My commentary will aim at identifying ways to build upon Churchland’s work in Braintrust.

When it comes to giving a definition of ‘morality’, Churchland demurs; such “semantic wrangles” are “unrewarding” (pp. 9, 26). She does tell us, however, that social behaviour is that which has fitness consequences for both the agent and recipient (p. 66), and moral behaviour is social behaviour of “great seriousness” (pp. 10, 59–60). What is the measure of ‘seriousness’ here? Churchland adopts a values-first view of morality—she begins by asking what it is that we value. Note, to value something is just to care about it, and this is a very basic sort of state: “caring is a ground-floor function of nervous systems” (p. 30). Her answer, in brief, is that we value our own well-being and that of our family and friends, as well as the experience of belonging (p. 12). On Churchland’s view, then, morality is a matter of social navigation when what we value is at stake: when we must care, trust, coordinate, cooperate, punish, and reconcile. To explain morality is to explain how it is that we have social values and how it is that we secure what we value. From this perspective, Churchland is “sceptical [about] the view that rules and their conscious, rational application are definitive of morality” (p. 166). Churchland associates this view with Kant, and his ideas about rationality, universalizability and obligation. And, she claims, “Kant’s conviction that detachment from emotions is essential in characterizing moral obligation is strikingly at odds with what we know about our biological nature” (p. 175).

There are reasons to resist Churchland’s characterization of morality, though. First, Churchland’s view of moralizing sees it as a low-level, affect-driven process. She conducts her critique of the idea that rules and reasoning are central to morality with reference to Kant’s peculiar account of what underwrites and vindicates moral rules’ authority (pp. 173–175) and to problem-prone varieties of maximizing consequentialism (pp. 175–181). This is a mistake. One can share Churchland’s scepticism about Kant or maximizing consequentialism, while also thinking that her account neglects something important about the nature of moralizing. It is one thing to reject Kant’s view of the primacy of conscious, explicit moral opinion, or Singer’s strong demands on our charity; another to ignore the existence of moral reasoning or to dismiss it as merely epiphenomenal. (And, as pointed out by Mikhail (2013) in his own review of Braintrust, Churchland’s focus on explicit reasoning and rule following may be misplaced: moral rules may operate unconsciously, as do rules of grammar in the case of language.)

I should acknowledge that Churchland never claims to cover all there is to say about morality: the “neural platform” is “not the whole story of human values” (p. 3); the “social platform” is “not the sum or the substance of morality” (p. 10). But, she implies that the extra stuff is cultural (p. 3), rather than being some additional feature(s) of our moral psychology, above and beyond mere affective caring. She neglects the more cognitivist aspects of our moral psychology almost entirely. This seems to me a serious oversight.

If it is granted that there is a disconnect between brute affective response and moral judgment, then a second reason to resist Churchland’s characterization of morality becomes apparent, namely, what I’ll dub the social/moral gap. Morality and sociality significantly overlap, but don’t coincide. Not all social interaction—not even all serious social interaction—is moral interaction: consider former Australian Prime Minister Paul Keating in 1992 placing his arm around Queen Elizabeth (a horrid social blunder but not a moral wrong). Nor, on some influential accounts, is the moral sphere contained entirely within the social sphere: Jonathan Haidt has built a weighty case for the existence of purity-based moral systems that are not built on social notions of harm, rights, or hierarchy (see e.g. Haidt 2007).

Even fully explaining sociality would leave a significant remainder if the task is to explain morality, and this is something that goes under-recognized, I think, in Braintrust. Positively, though, there is potential here for a significant and interesting extension of Churchland’s project: to investigate the features distinguishing moral evaluation in particular from evaluation more generally, and to describe the neurophysiological basis for this kind of valuing. Here’s an open question worth worrying about—unlike Moore’s, on which Churchland (mis)spends too much time (pp. 186–190)—and moreover one to which her expertise is highly relevant. Leave Moore to the metaethicists. Let’s hear more about the evolutionary history and neurophysiological basis of moralizing, as distinct from (even if built upon) brute caring.

Pointing out the social/moral gap is not mere philosopher’s griping, nor is it an attempt to crown myself “meaning czar” (p. 25) when it comes to what counts as morality. That properly characterizing the explanatory target in a project like Churchland’s really matters can be seen by looking at the downstream consequences of her view of morality, specifically, at its ramifications for her treatment of the is/ought gap and the question of animal morality.

A robust generalization about books that aspire to link biology and psychology with moral philosophy is that they will at some point mention the is/ought gap and the naturalistic fallacy. Braintrust is no exception.

Churchland’s response to the claim that one cannot infer an ought-claim solely from is-claims is to distinguish between two senses of ‘infer’. If ‘infer’ means ‘logically derive’, then she is happy to allow that one cannot infer an ought-claim from is-claims alone. But, if ‘infer’ is taken to mean ‘figure out’, then, she claims, we can and very often do infer oughts from what is: “we regularly figure out what we ought to do based on the facts of the case, and our background understanding” (p. 6). Most real-world problem solving is done not via deduction but rather by satisficing: figuring out an acceptable solution—not necessarily the best one—given the constraints of the particular problem. So, Churchland observes, the fact “that you cannot derive an ought from an is has very little bearing so far as in-the-world problem-solving is concerned” (p. 7). Churchland takes the warning against deriving an ought from an is to apply only to deductive inference; it can be disregarded by those pursuing her kind of project. We can arrive at ought-claims, she thinks, by adopting a “neurobiological perspective on what reasoning and problem-solving are, how social navigation works, how evaluation is accomplished by nervous systems, and how mammalian brains make decisions” (p. 8).

Churchland is right that “relative to [our] values, some solutions to social problems are better than others, as a matter of fact” (p. 9). But, distinguishing between deduction and satisficing does not get one across the is/ought gap. Whether we are doing deduction or satisficing, we need some ought-claims among the stock of information we are working with, in order to arrive at a conclusion about what we ought to do. My suspicion is that Churchland has smuggled in such claims when she mentions ‘our background understanding’: if part of our background understanding of a particular problem case includes understanding what we and others value, and understanding that we ought to act so as to secure what we and others value, then, of course, we can get from that, plus an understanding of the facts of the case, to a conclusion about what ought to be done. It may be that Churchland’s values-first view of morality is again the culprit here: backgrounding explicit moral judgment in favour of affective response and practical action makes it easy for her to overlook the problem of justifying (as opposed to simply making) ought-judgments in the first place.

Churchland urges us to “recogniz[e] moral problems for what they are—difficult, practical problems emerging from living a social life” (p. 201). But, to treat moral oughts simply as prudential is to trivialize the worry underlying the is/ought objection. Given what we value, what ought we do? That is an important question and often a challenging one to answer in real-world cases. To answer it is not, however, to bridge the is/ought gap. The fundamental question is something more like: given what is the case, what ought we value? The sort of descriptive, explanatory project Churchland pursues in Braintrust will not answer that question. That in itself is not a black mark against it. But, by characterizing morality as simply social problem-solving, Churchland ends up overselling her project as bridging the is/ought gap when in fact it does not, and crucially need not.

To be clear: moral problems often are difficult practical problems. But problems with morality are not limited to practical ones. There are explanatory problems, both mechanistic—how is it that we make moral judgments?—and functional—why do we do so? There are epistemological problems: are our moral judgments true, and justified? The relationships between these questions can be complex. One interesting possibility is that answers to the explanatory questions might supply reason for scepticism about the epistemic status of some or even all moral judgments (Singer 2005; Joyce 2006). Churchland’s move to make moral judgment just a species of prudential judgment made in social contexts shuts down this line of enquiry, and doesn’t do justice to the full range of philosophically interesting questions to be found at the intersection of biology, psychology, and ethics.

As well as impacting her treatment of the is/ought gap, Churchland’s characterization of morality also has ramifications for her view about non-human morality. Churchland asks if humans are uniquely moral or whether other animals also have morality. There is, she says, no “meaning czar” (p. 25) whose opinion rules the use of words, and no progress to be made simply by stipulating that only humans have morality. She considers proposals that make reasoning, rule-following, or language prerequisites for morality too restrictive, since they rule out non-human animal morality and it is obvious that some non-human animals have social values: “they care for juveniles, and sometimes mates, kin, and affiliates; they cooperate, they may punish, and they reconcile after conflict” (p. 26, see also p. 166). Churchland insists that, insofar as all social animals face social navigational challenges, and hence engage in social problem-solving, all social animals have their own kind of morality. And, if it is only humans who possess human morality, only marmosets have marmoset morality (p. 24); so what? When it comes to morals, we’re just another unique social species.

Churchland is of course right that there is no meaning czar, and that simply stipulating to restrict morality to ourselves is uninteresting. But notice, her own move trivializes the debate about non-human morality. She allows for non-human morality by equating morality with sociality, with caring for others and engaging in social problem-solving (pp. 26, 166). This thins out the notion of morality so far that the original controversy is lost. I doubt that any who deny that non-human animals are genuinely moral would deny that some non-human animals are genuinely social (and any who did would be straightforwardly wrong). This sort of ‘thinning out’ move is as uninteresting as just stipulating that non-human animals lack morality.

At this point, having complained about Churchland’s characterization of morality, it would be good to offer a positive suggestion. What account of morality would neither trivialize a human uniqueness claim, nor make the claim that apes (or ants) are moral beings trivial also? Unfortunately, we don’t have a firm grasp on the phenomena in the human case. Controversy over the existence of a moral/conventional distinction, for instance, continues (Turiel 1983; Kelly et al. 2007; Fraser 2012), and philosophers and psychologists remain divided on whether ordinary folk suppose morality to possess some special sort of objectivity (Nichols 2004; Goodwin and Darley 2008, 2010; Sarkissian et al. 2011; Fraser 2013a). When the nature of the explanatory target is so unclear even in our case, it is hard to be confident of getting a reliable cross-species assay for morality.

To sum up: when it comes to characterizing her explanatory target, my feeling is that Churchland has captured an important part of morality—the affective, evaluative dimension of moralizing—but only a part. I imagine she wouldn’t disagree, but it also seems to me that she forgets her own sensible disclaimers at times, and just equates morality with sociality, to poor effect. To succeed in her project, Churchland need not jump the is/ought gap, but she does need to clear the social/moral gap, and falls far short here.

Moving on to part three of Braintrust: here, Churchland considers the role of rules and religion in morality. Regarding rules, she claims that her values-first view of morality provides both a better account of actual moral practice than does a rules-first view and a better set of tactics for resolving moral conflicts. Regarding the relationship between religion and morality, Churchland’s claim is that “the connection is mainly sociological rather than metaphysical”; morality need not—and perhaps cannot (Plato’s Euthyphro again, pp. 194–196)—have a supernatural basis.

Churchland’s claims in these chapters are mostly sensible but never very surprising, and her engagement with moral philosophy is often superficial. She spends quite some time pointing out the tarnish on the Golden Rule (pp. 168–173), for instance, and takes utilitarians to task for requiring us to perform unworkable utility-maximization calculations, to neglect our nearest and dearest, and to perform intuitively wrong acts (such as scapegoating the innocent for the greater good). This is weak stuff. Such objections are familiar and much-discussed in moral philosophy—the calculating objection was anticipated and addressed by Mill himself—and Churchland’s treatment of them is too cursory to add significantly to the extant debate.

More importantly, the link to neuroscience is seldom apparent here, and Churchland’s own positive views are presented swiftly and sketchily. For example, Churchland suggests a ‘prototype-based’ view of moral development and competence (pp. 180, 183). According to this model, agents use a cluster of exemplary cases as the basis for generalization and analogy in moral decision-making. This model is interesting insofar as it may give a more accurate representation of actual agents’ moral practices than either automatic intuitive reaction or explicit rule-following, and insofar as it may offer insight into moral disagreement (as Churchland herself notes, different agents will, as a result of varying individual histories, draw on different cases to guide judgment). Churchland offers an admittedly amusing anecdote about an episode of the Colbert Report in support of her proposed model, but obviously far more is needed to really see if it has legs. The chapter on rules would have benefited, I feel, from a good deal less rehashing of objections to Kant and Mill, and a good deal more empirical detail supporting Churchland’s positive proposals.

The chapter on religion and morality is notable for its intriguing mention of the idea that, if our capacity for morality is merely the product of evolution, and there is no moral law-giving God, then morality is an illusion (pp. 199–201). Churchland identifies Francis Collins as an adherent of this view. There are many more (and more philosophically sophisticated) folks who debate this sort of question though, both for and against (Joyce 2006; Street 2006; Prinz 2007; Copp 2008; Kahane 2011; Kitcher 2011; Fraser 2013b). Darwinian debunking of morality is a thriving topic in current empirically-oriented metaethics. So, there is an issue of real interest and importance here. Churchland’s own view is that “morality is as real as it can be—it is as real as social behaviour”, because morality is “grounded in our biology” (p. 200).

When setting up her project, Churchland said the aim was “to understand what it is about the brains of highly social mammals that enables their sociability and thus to understand what grounds morality” (p. 10, emphasis added). There is an important ambiguity lurking in this talk of ‘grounding’, however, and the sense in which Churchland has shown morality is grounded in biology is not the sense in which morality must be grounded in order to allay the worry that it is an illusion.

In one sense—call it grounding as explanation—Churchland has made substantial progress on showing how morality is grounded in our biology: she has detailed the workings and evolutionary history of various capacities that together contribute to our capacity for moral judgment and moral behaviour. In another sense of ‘grounded’, however—call it grounding as justification—Churchland has not supported the claim that morality is grounded in biology. Showing that we value certain things, even providing great detail about how and why we value them, doesn’t itself show those things really are valuable; showing that, and how, and why we think we ought perform certain actions doesn’t itself show those actions really are obligatory. A useful parallel here is with religion. Suppose that religious beliefs are grounded, in the explanatory sense, in our biology. This does not itself show such beliefs to be true or justified; indeed, there is some temptation to think it suggests quite the opposite.

Crucially, it is the latter sort of grounding claim that must be substantiated to address the worry that morality is an illusion. That worry concerns the metaphysical and epistemological footing of morality. Pointing out that some atheists are perfectly nice people to whom “morality is entirely real” (p. 201) misses the point. The heavyweight philosophical debate about the issue doesn’t ask whether anyone could care about, share with, or trust in others, were morality to be the product of evolution, because that is, to put it bluntly, a silly question. It seems that Churchland refutes the ‘illusion’ worry only by first mischaracterizing it.

Here, then, is another point at which her project could be extended, elaborated, and built upon. There has been a recent upsurge in genealogical works in moral philosophy, and it is now widely—if not universally; see White (2010)—accepted that genealogy matters to morality. Just how genealogy matters, though, is model-sensitive: the normative and/or metaethical consequences of a genealogy of morality depend on the specific details about the origins, nature, and function(s) of moral cognition in that genealogy. It would be interesting to explore how Churchland’s ultimate evolutionary explanations for our possession of the capacity for morality, together with the details she gives about the proximate psychological and neurophysiological mechanisms of morality, play into the serious philosophical debate about morality, evolution, and debunking.


The final observation I’ll make in this review is tied less tightly to specific claims Churchland makes in Braintrust, and more concerned with the overall perspective she adopts on moralizing. As should by now be clear, she takes moralizing to be a kind of social technology. We moralize as a way of solving social problems. Churchland is far from alone in taking this view of morality, and there is much to be said for it (see e.g. Kitcher 2011). Moralizing can bolster cooperation and help to build and maintain social networks.

Still, there is a darker side to morality. Churchland herself criticizes specifically religion-based moralizing as a dangerous breeder of intolerance and arrogance (p. 200). But moralizing itself might be criticized on the very same grounds. Moralizing can reinforce in-group versus out-group dynamics, for instance, and it can entrench disagreement and so prevent mutually beneficial compromise. Morality may not be entirely benign, qua social technology. If so, then the extent to which morality is part of the solution to the problem of social living, as opposed to part of the problem itself, becomes extremely interesting.

To sum up: Churchland in Braintrust does an excellent job of answering a question that is not quite the one she set out to answer—a question about the neuroscience of sociality rather than of morality per se—but that question is important nonetheless, and her answer valuable.

References

Copp D (2008) Darwinian scepticism about moral realism. Philos Issues 18:186–206

Fraser B (2012) The nature of moral judgements and the extent of the moral domain. Philos Explor 15(1):1–16

Fraser B (2013a) Moral error theories and folk metaethics. Philos Psychol

Fraser B (2013b) Evolutionary debunking arguments and the reliability of moral cognition. Philos Stud

Goodwin G, Darley J (2008) The psychology of metaethics: exploring objectivism. Cognition 106:1339–1366

Goodwin G, Darley J (2010) The perceived objectivity of ethical beliefs: psychological findings and implications for public policy. Rev Philos Psychol 1:161–188

Haidt J (2007) The new synthesis in moral psychology. Science 316:998–1002

Joyce R (2006) The evolution of morality. MIT Press, Cambridge

Kahane G (2011) Evolutionary debunking arguments. Nous 45(1):103–125

Kelly D, Stich S, Haley K, Eng S, Fessler D (2007) Harm, affect, and the moral/conventional distinction. Mind Lang 22:117–131

Kitcher P (2011) The ethical project. Harvard University Press, Cambridge

Mikhail J (2013) Review of Braintrust. Ethics 123(2):354–356

Nichols S (2004) After objectivity: an empirical study of moral judgment. Philos Psychol 17(1):3–26

Prinz J (2007) The emotional construction of morals. Oxford University Press, Oxford

Sarkissian H, Park J, Tien D, Wright J, Knobe J (2011) Folk moral relativism. Mind Lang 26(4):482–505

Singer P (2005) Ethics and intuitions. J Ethics 9:331–352

Street S (2006) A Darwinian dilemma for realist theories of value. Philos Stud 127:109–166

Turiel E (1983) The development of social knowledge. Cambridge University Press, Cambridge

White R (2010) You just believe that because. Philos Perspect 24:573–615
