
Chapter 4: Moral Perception: How individuals respond to moral messages

Brad Jones

September 15, 2014

Abstract

In this chapter, I examine the ways in which individuals' pre-political dispositions guide their reactions to political texts. Using data on individual reactions to real political statements drawn from the Congressional Record, I show that individuals' reactions are significantly related to their predispositions. Using an innovative method for coding the political statements, I show that individuals are sensitive to the content of the statements in predictable ways. In many cases, the effect of congruence between a text's moral content and a reader's predispositions was larger than the effect of partisan congruence.

Chapter 1: The Building Blocks of Politics
Chapter 2: Intuitive Politics: Individual-level predispositions and political attitudes
Chapter 3: Representation of Predispositions: How masses constrain elites
Chapter 4: Moral Perception: How individuals respond to moral messages
Chapter 5: Rhetoric and Predispositions


1 Introduction

During a July 2012 campaign stop, Barack Obama made some comments that he ended up regretting.

"If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you've got a business – you didn't build that. Somebody else made that happen."1

At their convention, the Republicans dedicated an entire day to the theme "We Built It," and President Obama's "you didn't build that" was featured ubiquitously in Republican advertisements.

This inartful construction by Obama tickled something in many conservative minds. It symbolized a president who, from their perspective, didn't value hard work or individual effort. It represented a moral failing of a political party that champions the "takers" at the expense of the "makers." Conversely, many Democrats and liberals doubled down and saw the Republican response as a brash rejection of community, and a fundamentally hypocritical position given that the Republican convention was being held at a venue that had been constructed mostly with government funds.2 This deep moral divide was an important part of the overall story of the 2012 campaign.

Divides in politics often seem unbridgeable. As politics has polarized at the mass level, politicians have seized the opportunity to exploit preexisting moral divides. In a certain sense this is inevitable and expected from election-minded politicians in a two-party system. Competition between the two parties should reveal these moral fault lines in the public (Kollman, Miller and Page, 1992, 1998), and once they are clear, they are too tempting to leave alone (Hillygus and Shields, 2008).

1 Emphasis added. From a transcript available at http://www.factcheck.org/2012/07/you-didnt-build-that-uncut-and-unedited/. There is some controversy about this line. It seems most likely to me that it resulted from Obama unintentionally skipping a line from the teleprompter. As with most campaign "gaffes," the statement sticks in the public conversation because it symbolizes some larger perceived or actual fault with the individual or political idea.

2 For example, http://mediamatters.org/blog/2012/08/22/fox-approved-convention-theme-contradicted-by-p/189507.

In this chapter, I will examine the factors that lead individuals to react differently to moralized political rhetoric. I focus on two potential sources of individual-level variation in moral judgments. The first is a set of stable individual-level moral predispositions. Differences in moral concern are measurable and exert a real impact on the ways in which individuals process political information. The second factor I examine is the interaction between partisan cues and individual moral judgments.

Using data from ratings collected from 172 subjects who reacted to more than 1,000 sentences sampled from the Congressional Record, I show how these two factors – predispositions and partisan cues – affect moral judgments in significant and substantively important ways. My results suggest that moral judgments are domain specific. Individuals vary significantly in the emphasis they place on different moral domains, and these individual differences result in substantially different reactions to the same content. Individuals also react to partisan source cues in predictable ways. When statements are attributed to copartisans, they are received less critically than those that come from the other team. However, in many cases, the effects of partisan source cues are dwarfed by the effects of moral rhetoric that resonates with one's own moral priorities.

2 Theory and Literature Review

The study of individual political attitudes has long focused on ideology, and for almost as long as scholars have looked to ideology as an anchor for citizen preferences, they have been disappointed in the ability of the average citizen to understand the world in ideological terms. As construed in the literature, ideology generally refers to a coherent set of ideas that can be mapped logically onto policy positions and other aspects of political life. It has become axiomatic among scholars of American public opinion to note that the general public is almost entirely "ideologically innocent" (Converse, 1964).3

2.1 Predispositions and Political Attitudes

Given the well-documented failure of the public to think ideologically, scholars of individual attitudes have proposed values (Feldman, 1988; Goren, 2001; Schwartz, Caprara and Vecchione, 2010), personality (Mondak et al., 2010; Gerber et al., 2010), or some other set of stable predispositions (Hibbing, Smith and Alford, 2013; Kinder and Kam, 2010; Stenner, 2005; Goren, 2013) as alternative structuring factors in the public mind. In this alternative depiction, individuals bring a slate of preferences and values to the political world (Leeper and Slothuus, 2014). While these predisposing factors are not organized enough to qualify as an "ideology," they serve many of the same purposes for the individual confronting the political world. These individual-level factors inform choices between and evaluations of political figures and issue positions.

Moral values provide a promising approach to defining individual predispositions. In recent years, moral psychology has seen something of a renaissance, and several scholars have made important advancements in the ways that we understand human morality (Haidt, 2007; Sinnott-Armstrong, 2008). The emerging scholarly account of morality paints it in functionalist terms. Our deep-seated moral feelings have evolved to enable us to function as social animals (Wilson, 2012; Giner-Sorolla, 2013) and navigate complex interpersonal relationships (Rai and Fiske, 2011). These same moral intuitions have ready application in the broader political world.

3 But see Jost (2006) for a vigorous defense of the idea that ideological self-identification means something in everyday political thinking.


Political scientists have also turned their attention to moral values (Clifford and Jerit, 2013; Weber and Federico, 2013). Importantly, it has been shown that, far from being a separate area of concern, individuals moralize a great many issues (Ryan, 2014b). Moralization has important consequences. When attitudes become moralized, individuals are less willing to compromise, they become less tolerant of those with whom they disagree, and they become more willing to take costly action to defend their moral values (Skitka, Bauman and Sargis, 2005; Skitka, 2010; Tetlock et al., 2000; Ryan, 2014a).

Moral foundations theory (Haidt, 2012) provides a useful framework to understand individual differences in moral concern. Jonathan Haidt and his colleagues have shown that the moral foundations measures correlate strongly with various measures of political ideology (Haidt, Graham and Joseph, 2009; Graham, Haidt and Nosek, 2009)4 and political issue positions (Koleva et al., 2012). The moral foundations are theorized to delineate a small number of widely shared moral concerns, and as such they enable us to look at different domains of morality.

4 Alternative operationalizations of moral values have also been shown to relate to political attitudes; see Janoff-Bulman (2009).


Table 1: The five moral foundations, with brief descriptions and analogous concepts

Care/Harm
Description: Related to our long evolution as mammals with attachment systems and an ability to feel (and dislike) the pain of others. It underlies virtues of kindness, gentleness, and nurturance.
Similar concepts: "Nurturant Parent Morality" (Lakoff, 2010; Barker and Tinnick, 2006); Post-materialism (Inglehart, 1997)

Fairness/Cheating
Description: Related to the evolutionary process of reciprocal altruism. This foundation generates ideas of justice, rights, and autonomy.
Similar concepts: "Equality of Opportunity" (Feldman, 1988; Jacoby, 2006); fairness (Hochschild, 1986)

Respect/Subversion
Description: Shaped by our long primate history of hierarchical social interactions. This foundation underlies virtues of leadership and followership, including deference to legitimate authority and respect for traditions.
Similar concepts: "Strict Father Morality" (Lakoff, 2010; Barker and Tinnick, 2006); Authoritarianism (Feldman and Stenner, 1997; Stenner, 2005)

Loyalty/Betrayal
Description: Related to our long history as tribal creatures able to form shifting coalitions. This foundation underlies virtues of patriotism and self-sacrifice for the group. It is active any time that people feel that it's "one for all, and all for one."
Similar concepts: Minimal group theory (Tajfel and Turner, 1979); Nationalism (Roccas, Schwartz and Amit, 2010; Li and Brewer, 2004)

Sanctity/Degradation
Description: Shaped by the psychology of disgust and contamination. This foundation underlies religious notions of striving to live in an elevated, less carnal, more noble way. It underlies the widespread idea that the body is a temple which can be desecrated by immoral activities and contaminants (an idea not unique to religious traditions).
Similar concepts: Disgust (Inbar, Pizarro and Bloom, 2009; Terrizzi Jr, Shook and Ventis, 2010; Inbar et al., 2012)


In this chapter, I will focus on five moral foundations.5 These foundations are listed along with brief descriptions and analogous concepts in Table 1.6

2.2 An Aside on "Morality" in Political Science

"Morality," as Haidt puts it, "binds and blinds" (Haidt, 2012); it knits us together into groups while simultaneously closing us off from outsiders (Ryan, 2012). Because moral judgment is so closely related to emotion, it is also a mobilizing force (Pagano and Huo, 2007; Valentino et al., 2011; Leonard et al., 2011).

The political science literature is speckled with references to "morality," but for the most part, when we talk about it at all, our discipline equates "moral" with religious or behavioral norms (Ben Nun, 2010; Bloom, 2013). The term is often used as a catch-all category for issues that don't fit well into other categories. Morality is not often studied explicitly nor defined carefully, so my own usage of the term will borrow heavily from moral psychology.

Harold Lasswell's famous definition—"Politics is about who gets what, when, and how"—underscores the importance of considering moral concerns in political decisions. If it is true that politics is fundamentally about distributing or redistributing resources, the question of deservingness becomes fundamental, and the definition of "moral issues" becomes much more encompassing and subjective. Ryan (2014b) uncovered considerable variability in the issues that individuals considered "moral." Rather than imposing a classification scheme onto particular issues (e.g., abortion, gay marriage), we need to consider how individuals connect a great many issues to their core moral values.

5 The canon of "Moral Foundations Theory" is by no means closed. Haidt and his colleagues continue to explore new candidate foundations. In this chapter, I focus on the five that have received the most attention in the literature so far.

6 Definitions in the table were taken from www.moralfoundations.org.


2.3 Unpacking the "Black Box"

Existing accounts of political cognition (for example, the extensive body of work collected by Lodge and Taber (2013)) focus on the processes by which individuals arrive at their political opinions. Lodge and Taber's experiments have convincingly and repeatedly demonstrated the importance of motivated reasoning in understanding how individuals process new information and incorporate prior information in their attitude reports. What is missing from these theories is an account of the motivations that feed into the original model. Moral foundations theory gives us one way to get at these motivations.

In broad strokes, the Lodge and Taber account shows that affect is a critical component of political evaluations. Immediately upon being exposed to some new piece of information, we experience a flash of positive or negative affect. This affective response colors subsequent processing and facilitates or inhibits other concepts. This process leads to reasoning that is systematically biased in favor of one's own predispositions.

In most of the work on information processing, individual predispositions (the factors that lead to positive or negative affect) are "black-boxed." For example, liberals are expected to experience negative affect when evaluating conservative issue positions and the Republican Party, while conservatives are expected to show the opposite pattern of reactions. From this starting point, those who study information processing can make predictions and evaluate hypotheses about the consequences of motivated reasoning. In this chapter, I am interested in shifting the focus back to the factors that lead different people to react to the same stimuli in different ways in the first place.

2.4 Hypotheses

One of the key aims of this chapter is to establish the validity of the moral foundations measures in realistic political settings. Moral foundations theory describes stable individual differences in moral outlook. In this dissertation, I am proposing that these individual differences in moral concern are translated into meaningful political differences. If an individual's moral outlook colors his or her perception of political information, we will have a piece of the story as to how these stable individual predispositions translate into political divisions.

2.4.1 Differential emotional reactions

I expect that individual differences in moral concern (as measured by responses to the Moral Foundations Questionnaire) will explain some of the variance in moral judgments and the intensity of emotional reactions to political arguments. To paraphrase an old cliche, morality is in the eye of the beholder.7

H1: Individual differences on the moral foundations scales will be related to differential emotional reactions to the same stimuli.

These differential emotional responses should be domain specific. I expect different moral foundations to be salient in different contexts. For example, individual differences on the Care/Harm measure should matter more for evaluations of arguments that focus on the hurt experienced by victims than for arguments that appeal to ideas of undeserved treatment (where I would expect the Fairness/Cheating domain to be relatively more influential).

2.4.2 Emotional Content of Moral Evaluations

Recent theoretical and empirical work in moral psychology suggests that a large portion of our moral judgments are driven by automatic affective reactions. The same work that informs Lodge and Taber's model of political information processing serves as the foundation of the affective turn in moral psychology. In laboratory experiments, researchers have manipulated affective states and found significant effects on moral judgment (Schnall, Benton and Harvey, 2008; Valdesolo and DeSteno, 2006; Eskine, Kacinik and Prinz, 2011). These findings are in line with an intuitionist conception of moral judgment (Haidt, 2001; Prinz, 2006).8 Consistent with this recent line of research, I expect that more intense emotional reactions will be related to an increased probability of making a moral judgment.

7 This is not to say that moral foundations theory necessarily implies moral relativism. One need not take a position on the existence of objective "moral facts" to observe that moral perceptions are definitionally subjective.

H2: Individuals' moral evaluations are connected to their emotional reactions. Individuals who experience a stronger (either negative or positive) reaction to a political statement will be more likely to feel that it is connected to their ideas of right and wrong.

2.4.3 Partisan bias

Finally, I will look at the interaction between party identification, partisan source cues, and moral judgment. I expect that receiving source cues will significantly alter moral judgments. Same-party cues are likely to magnify affect and thus lead to a higher likelihood of moral classification. Out-party cues are expected to have the opposite effect – dampening affective responses and leading to a decreased likelihood of moral classification.

H3: Partisanship biases moral processing. In-party cues will be seen as more closely related to an individual's sense of right and wrong, while out-party cues will be seen as less related.

8 This point is not settled in the psychological literature. Pizarro and Bloom (2003) argue that rational deliberation plays a much larger role in informing our intuitions than is generally allowed in Haidt's model. It has been difficult to settle this controversy with laboratory experiments.


Partisan biases could occur through (at least) two avenues. First, affect associated with the party is transferred to the content of the sentence (consistent with an "affect transfer" from a liked or disliked group to the stimulus in question). Alternatively, it could be that individuals make inferences based on other information they have about the parties (consistent with the "reputational premium," e.g. Sniderman and Stiglitz (2012)). My data are insufficient to distinguish between these two accounts, but understanding how moral judgments are made in more realistic political settings where partisanship is (often) known will help to paint a more complete picture of how politics becomes moralized.

3 Data and Methods

3.1 Subject Recruitment

In order to test my claims about the differences in individual moral perception, I collected data from Amazon's Mechanical Turk. Mechanical Turk data have been used in a growing number of social science applications, and a great many findings from more traditional samples have been replicated using Mechanical Turk workers (Berinsky, Huber and Lenz, 2012; Paolacci and Chandler, 2014).9 My data were collected in January of 2014.

A group of "Turkers" filled out an initial screening questionnaire in exchange for a small monetary incentive as well as the opportunity to participate in the second part of the study and receive a larger monetary reward for satisfactory participation. In the screening questionnaire, individuals were asked their partisanship and several other questions tapping political attitudes. They also filled out a version of the Moral Foundations Questionnaire.10 Turkers tend to lean left in their politics, and although I made an effort to contact an equal number of Democrats and Republicans, the final dataset includes substantially more Democratic raters than Republicans (114 and 63, respectively).

9 There are many justifiable concerns about Mechanical Turk samples. Mechanical Turk is well suited for the purposes of this study for several reasons. First, the task (rating a series of sentences) closely mirrors the kinds of tasks for which Mechanical Turk was originally designed (crowd-sourcing type "human intelligence tasks"). There are some concerns that Turkers are acclimated to economic games and other types of experimental manipulations. It should also be noted that, in many cases, results from Mechanical Turk samples closely mirror those obtained from other populations. However, my study does not employ any of these kinds of manipulations, and it is unlikely that participants in the task outlined below would have incentives to give untruthful answers or otherwise contaminate the data. One additional concern relates to the degree of "moral diversity" that exists among Turkers. In my own and others' experience, Turkers tend to be younger, more educated, and more liberal than the general population. In an effort to maximize the diversity of the Turker population, I made an effort to oversample Republican identifiers.

A week or so after completing the screening questionnaire, individuals who qualified11 were invited to participate in the second part of the study, where they were asked to rate sentences drawn from the Congressional Record.

3.2 Sampling Political Texts

The sentences were sampled from several different topic areas. I tried to balance three considerations when pulling the sample of sentences from the Congressional Record. First, I wanted an equal number of Republican and Democratic statements; second, I wanted to compare statements that were roughly similar in their substantive content; and finally, I wanted some variance in the emotional and moral content of the sampled sentences. Ultimately, I constructed a stratified sample from several main issue areas: Abortion, Commerce, Defense, Education, Health Care, Immigration, Taxes, and Terrorism.12

Furthermore, I oversampled sentences that contained morally charged words.13 In each topic area, I oversampled sentences with moralized language from speeches made by Members of Congress from each party. I made an effort to strip out purely parliamentary and procedural language from the population of sentences prior to pulling the sample. Ultimately, I received usable ratings on 1,074 unique sentences. Sentences were presented to the Turkers such that some would be rated many times. The average sentence was rated almost 3 times, but many sentences were rated only once and a few sentences were rated five or more times.

10 I had individuals fill out a slightly modified version of the MFQ. See Appendix A for question wordings.

11 Only Turkers who passed the attention checks in the screening survey were qualified to participate in the follow-up study. I also made some attempt to oversample Republicans in the screening survey through quotas that appear to have not worked (the survey was supposed to cap participation by Democrats and continue collecting responses from Republicans).

12 The topic classifications were based on the section of the Congressional Record that the speeches were pulled from. The Abortion topic included sentences from any speech falling within a section of the Congressional Record with "abortion" in its title. The Defense topic included speeches made during the scheduled debate over the Department of Defense Appropriations Act on July 18, 2012. The Commerce topic included speeches made on May 8, 2012 during the debate over the Commerce, Justice, Science, and Related Agencies Appropriations Act.

13 Morally charged words were identified from a slightly pared down version of the Moral Foundations Dictionary available at www.yourmorals.org. In an effort to minimize false positives, I excluded several phrases from the data where a putatively "moral" word as defined by the Moral Foundations Dictionary is used in a procedural or parliamentary way. I excluded the phrase "balance of my time" as "balance" here is being used in a non-moral context. Other phrases excluded were "health care", "point of order", "in order", "on the order", "executive order." I entirely excluded "law" and "submi*" from the dictionary as these words were most often used in non-moral ways.
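To make this sampling step concrete, the sketch below shows one way such a moral-word flag could be computed in R. It is an illustration under assumptions, not the chapter's actual code: the short list of dictionary stems and the `sentences` data frame are hypothetical, while the excluded procedural phrases are taken from footnote 13.

# Minimal sketch: flag sentences that contain Moral Foundations Dictionary terms
# after stripping procedural phrases (illustrative stems; the real dictionary from
# www.yourmorals.org is much larger and organized by foundation).
mfd_stems <- c("care", "harm", "fair", "cheat", "loyal", "betray",
               "obey", "tradition", "sacred", "disgust")

# Procedural phrases excluded in footnote 13.
excluded_phrases <- c("balance of my time", "health care", "point of order",
                      "in order", "on the order", "executive order")

flag_moral <- function(text) {
  text <- tolower(text)
  for (ph in excluded_phrases) {          # drop procedural phrases before matching
    text <- gsub(ph, " ", text, fixed = TRUE)
  }
  patterns <- paste0("\\b", mfd_stems)    # match words beginning with each stem
  any(vapply(patterns, grepl, logical(1), x = text))
}

flag_moral("I yield the balance of my time.")          # FALSE
flag_moral("This bill betrays our most sacred duty.")  # TRUE

# With a data frame `sentences` (columns text, party, topic), flagged sentences
# could then be oversampled within each party-by-topic stratum:
# sentences$moral_flag <- vapply(sentences$text, flag_moral, logical(1))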

3.3 Procedure

For the first set of sentences, the Turkers were informed that the sentences were taken from speeches made by members of Congress, but they were not given any additional information about the speakers' identities. For the last five sentences that they rated, they were provided with the partisanship of the speaker (e.g. "A Republican representative said: ..."). In all, the subjects recorded nearly 2,700 ratings.

Respondents were then asked several questions about each sentence:

1. How strong was your emotional response to reading the sentence? (7-point scale from "Very strong negative reaction" to "Very strong positive reaction" with a midpoint of "Neither positive nor negative reaction");

2. To what extent does this sentence connect to your own beliefs about fundamental questions of right and wrong? (5-point scale from "Not at all" to "A great deal")14

3. (Only for individuals who indicated that the sentence had some moral content) If you had to choose one from the list below, would you say this sentence refers to:

Norms of harm or care (e.g. unkindness or kindness, causing pain to another or caring for someone in pain)

Norms of fairness or justice (e.g. cheating or honesty, reducing equality or "leveling the playing field")

Norms of loyalty (e.g. betrayal of a group or being a good team player)

Norms of respecting authority (e.g. subversion or insubordination, showing proper respect for authority)

Norms of purity (e.g. degrading or disgusting acts, sanctified or ennobling acts)

Some other moral norm (please specify):

14 This item was adapted from one of the items in Linda Skitka's moral conviction scale (Skitka, Bauman and Sargis, 2005; Skitka, 2010).

4 Results

4.1 Differential Moral Evaluation

To test whether individual differences in the moral foundations measures are related to the subjects' ratings of the content of each sentence, I first fit a model to estimate the moral categorization of each sentence. These models produced estimates of the degree to which each sentence fit within a particular moral domain.

We can think of individual categorizations as composed of two parts. The first part is a function of the objective or unbiased content (αj),15 and the second part is a function of subjective elements unique to each subject (βi + δij). This second part of the categorization can be broken down further into two elements. The first is stable to the individual subject (βi) and the second is a function of the interaction between the subject and the text (δij). This is expressed mathematically below in a straightforward analysis of variance model:

y_{ij} = \alpha_j + \beta_i + \delta_{ij}

The equation above naturally lends itself to a hierarchical framework where we consider each constituent part of the expression to be itself a function of higher-level factors. For example, the sentence-specific component, αj, could be modeled as a function of the topic of the sentence, the partisan affiliation of the sentence's author, and other sentence-level covariates. The βi could be written as a function of individual subject demographics that might be expected to affect their evaluations.16

15 It might be more appropriate to label this as the "consensus" component.

Following the framework outlined above, the specific moral category was modeled as a function of sentence-specific factors (the partisanship of the author, the issue area, a measurement of the "polarity" of the sentence based on the words used,17 and a set of variables derived from the Moral Foundations Dictionary that hold the counts of various morally charged words in the different moral domains) and a set of subject-specific factors (age, education, party, sex, and moral foundations scores). The subject-sentence interaction term controlled for whether or not individuals were cued with the partisanship of the sentence's author (and specifically whether or not the partisanship of the author matched the partisanship of the subject).18

\alpha_j = a_{0j} + a_1 \cdot \mathrm{AuthParty}_j + a_2 \cdot \mathrm{Issue}_j + a_3 \cdot \mathrm{Polarity}_j + a_4 \cdot \mathrm{Polarity}_j^2 + \sum_{m=1}^{12} A_m \cdot \mathrm{MoralWords}_{jm}

\beta_i = b_0 + b_1 \cdot \mathrm{Age}_i + b_2 \cdot \mathrm{Educ}_i + b_3 \cdot \mathrm{SubjParty}_i + \sum_{m=1}^{5} B_m \cdot \mathrm{MF}_{im}

\delta_{ij} = d_1 \cdot \mathrm{Cue}_{ij} \cdot I(\mathrm{AuthParty}_j = \mathrm{SubjParty}_i) + d_2 \cdot \mathrm{Cue}_{ij} \cdot I(\mathrm{AuthParty}_j \neq \mathrm{SubjParty}_i)

\Pr(y_{ij} = 1) = \Phi(\alpha_j + \beta_i + \delta_{ij})
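To show how this structure could be expressed in estimation code, the snippet below gives a minimal rjags-style sketch of a hierarchical categorical model with sentence, subject, and cue components. It is a simplified illustration under assumptions: the data objects (y, sent, subj, cue, same) are hypothetical, the covariates are omitted from the sentence and subject means, and a softmax link stands in for the probit link written above. The model actually fit for this chapter is in Appendix B.

# Minimal sketch of the hierarchical categorization model (illustrative only).
# Assumed data: y[n] in 1..6 (five moral domains plus other/non-moral), sent[n]
# and subj[n] index sentences and subjects, cue[n] = 1 if a party cue was shown,
# same[n] = 1 if the cued party matched the subject's own party.
library(rjags)

model_string <- "
model {
  for (n in 1:N) {
    y[n] ~ dcat(p[n, 1:K])
    eta[n, K] <- 0                       # other/non-moral as the reference category
    for (k in 1:(K-1)) {
      eta[n, k] <- alpha[sent[n], k] + beta[subj[n], k] +
                   d1[k] * cue[n] * same[n] +
                   d2[k] * cue[n] * (1 - same[n])
    }
    for (k in 1:K) {
      p[n, k] <- exp(eta[n, k]) / sum(exp(eta[n, 1:K]))
    }
  }
  # Sentence- and subject-level effects; in the full model their means would be
  # functions of topic, author party, polarity, moral word counts, demographics,
  # and moral foundations scores.
  for (j in 1:J) { for (k in 1:(K-1)) { alpha[j, k] ~ dnorm(0, tau.alpha) } }
  for (i in 1:I) { for (k in 1:(K-1)) { beta[i, k] ~ dnorm(0, tau.beta) } }
  for (k in 1:(K-1)) {
    d1[k] ~ dnorm(0, 0.01)
    d2[k] ~ dnorm(0, 0.01)
  }
  tau.alpha ~ dgamma(0.1, 0.1)
  tau.beta ~ dgamma(0.1, 0.1)
}
"

# jags <- jags.model(textConnection(model_string),
#                    data = list(y = y, sent = sent, subj = subj, cue = cue,
#                                same = same, N = length(y), K = 6,
#                                J = max(sent), I = max(subj)), n.chains = 3)
# samples <- coda.samples(jags, c("alpha", "d1", "d2"), n.iter = 5000)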

Individual subject classifications were fit with a multinomial distribution with six categories (the five moral domains and an other/non-moral category). The model produced estimates of the probability that each sentence belonged to one of the six categories (with the probabilities summing to unity). The models were estimated in JAGS 3.0 (Plummer et al., 2003) with vague priors centered around zero for the hyperparameters. The model code can be found in Appendix B.

16 In Chapter 5, I use the results from this model to difference out the subject-specific component in a kind of "crowd-sourced" content analysis to code the moralized language used in the Congressional Record.

17 Polarity was calculated by processing the sampled sentences through the qdap package in R 3.0. The algorithm uses affectively charged words and their modifiers to calculate a score for each sentence.

18 All of the results reported below are robust to restricting the data to the first fifteen ratings where no source cue was given.
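As a small illustration of the polarity covariate described in footnote 17, the snippet below computes qdap polarity scores for two of the sampled sentences quoted later in this section. The call is a sketch of the general approach; the exact options used for the chapter are not specified here.

# Illustrative computation of the sentence-level polarity covariate with qdap.
library(qdap)

example_sentences <- c(
  "Unfortunately, unfair trade practices from countries like China make this increasingly difficult.",
  "It's about respect for the rule of law."
)

# polarity() scores each text unit using affectively charged words and their
# modifiers (negators, amplifiers, de-amplifiers).
pol <- polarity(example_sentences)
pol$all$polarity   # one polarity score per sentence, used as a covariate in alpha_j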

The full results for the models used to generate the sentence categorizations can be found in Appendix C. A few points from the estimation are worth noting. Individual differences on the moral foundations did not seem to systematically affect how the sentences were coded. Only one of the coefficients associated with the moral foundations of the individual coders was significantly different from zero. This means, for example, that coders who are particularly sensitive to violations of the Care/Harm foundation were no more likely to code a sentence as belonging to that domain than coders who place relatively less emphasis on that domain. At least when it comes to moral categorization, individuals' own moral outlooks do not seem to bias responses in a systematic way. The one exception to this was in the Loyalty domain. There was a significantly positive relationship between an individual's score on the Loyalty foundation and the likelihood of classifying a sentence as belonging to the Loyalty domain.

At the sentence level, the topic of the sentence is a reliably significant factor in determining the moral domain to which it belongs. For example, sentences that were drawn from speeches about abortion were more likely to be classified as belonging to the Care/Harm and Sanctity/Degradation categories. The specific moral words used in each sentence were also important. On this score, the Moral Foundations Dictionary seems to perform relatively well as a predictor of the appropriate moral domain. For the most part, the coefficients associated with the different categories in the Moral Foundations Dictionary behave as we would expect. Sentences that contain the words associated with each foundation were generally significantly more likely to be classified as belonging to that domain.19

19 The Loyalty and Sanctity domain-specific words from the Moral Foundations Dictionary were least associated with subjects' classifications. This might be due in part to the relatively small amount of Loyalty and Sanctity language in the texts.

Using the αj estimates from the model, we can identify sentences that belong to each domain. In the examples below, I have selected a few of the sentences with the highest baseline likelihood of being classified in a particular domain.

1. Care/Harm Domain:

"Tens of thousands suffer more from wounds both visible and invisible."

"As a daughter and wife of physicians, I am shocked that we would so quickly dismiss the judgment of our country's medical personnel and families in making the best decision to preserve the health and lives of their loved ones."

2. Fairness/Cheating Domain:

"We believe that our nation's disabled veterans and wounded warriors have waited long enough for access to pools and spas."

"Unfortunately, unfair trade practices from countries like China make this increasingly difficult."

3. Loyalty/Betrayal Domain:

"They are at war with us by supporting and funding the very terrorists that we are up against."

"The Army, the National Guard, and the National Guard Association strongly oppose this amendment."

4. Respect/Subversion Domain:

"It's about respect for the rule of law."

"It is time we did the jobs we swore allegiance to the Constitution to do, even if others will not."

"For those watching at home, what would happen to them if they ignored a summons for jury duty?"

5. Sanctity/Degradation Domain:

"Without it, people could be forced to participate in something they strongly believe to be morally wrong."

"But I also believe it's morally wrong to take the taxpayer dollars of millions of pro-life Americans and use it to fund a procedure that they find morally offensive."

"In other words, the victim of rape had to show wounds and other matters that she really was forcibly raped before she could be covered, but they changed that because there was such an outcry."

Party  Topic     n    Care  Fair  Loyal  Respect  Sanctity
R      Abortion  224  0.24  0.28  0.04   0.02     0.06
D      Abortion  222  0.23  0.36  0.03   0.02     0.03
R      Defense   216  0.13  0.26  0.12   0.07     0.00
D      Defense   222  0.17  0.32  0.10   0.06     0.00
R      Commerce  219  0.11  0.43  0.07   0.06     0.00
D      Commerce  200  0.10  0.50  0.06   0.06     0.01

Table 2: Mean classification (scaled from 0 to 1) for sentences by party and topic

Table 2 presents some descriptive statistics for the categorization measures. The parties were about evenly matched when it comes to the Care/Harm domain. Sentences taken from speeches by Democrats were slightly more likely to be classified as belonging to the Fairness domain than those from Republican speeches. Republicans had a slight edge when it comes to the binding foundations, but the differences between the parties are not very dramatic for this sample. The other striking thing to note about the classifications is the relative rarity of sentences belonging to the Loyalty, Respect, and especially Sanctity foundations.

Given these measurements of the domain classification for each sentence, we can test hypotheses about differential moral perception among the coders. If it is true that the moral foundations structure evaluations and moral judgment in significant ways, we would expect individuals who score highest on a particular foundation to report more intense emotional reactions when reacting to sentences that are classified in the same domain. For example, an individual who places relatively more weight on the Care/Harm domain is expected to have a more intense emotional reaction to a statement relating to ideas about caring for the sick when compared against an individual who does not place as much emphasis on the Care/Harm domain.

To determine whether or not individuals are biased in their reactions to moral content based on their own moral outlook, I estimated a set of ordered probit regressions. The dependent variable was the subjects' self-reported intensity of emotional reaction to the content of the sentence. On the right-hand side of the regression equation, I included controls for the demographics of the rater (to capture systematic differences in emotional response) and some sentence-level controls to capture baseline differences in the content of the sentences. The key explanatory variables of interest are the main effect of the moral foundation of the rater, the main effect of the moral categorization of the sentence, and the interactive effect of the rater's score on the moral foundation and the sentence's categorization score.
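The sketch below illustrates the form of one of these ordered probit regressions, here for the Care/Harm domain. It is a schematic rather than the code behind Appendix D: the `ratings` data frame and its column names are hypothetical, and an analogous model would be fit for each foundation. The chapter's reported estimates also use standard errors clustered by rater and bootstrapped predicted probabilities (see footnote 20 below).

# Schematic ordered probit for one moral domain (Care/Harm); illustrative only.
# Assumed data frame `ratings`, one row per rater-sentence rating, with columns:
#   emotion_intensity  ordered factor with 7 levels
#   mf_care            rater's Care/Harm foundation score
#   care_content       sentence's estimated Care/Harm categorization (0-1)
#   age, educ, sex, party, topic, polarity, rater_id
library(MASS)   # polr() fits ordered probit models

fit_care <- polr(
  emotion_intensity ~ mf_care * care_content +   # main effects plus interaction
    age + educ + sex + party +                   # rater-level controls
    topic + polarity,                            # sentence-level controls
  data = ratings, method = "probit", Hess = TRUE
)
summary(fit_care)

# Predicted category probabilities for raters 1.5 SD above and below the mean
# Care/Harm score, at low and high care content (other covariates held at
# illustrative values); the extreme-response probabilities feed plots like Figure 1.
profile <- expand.grid(
  mf_care      = mean(ratings$mf_care) + c(-1.5, 1.5) * sd(ratings$mf_care),
  care_content = quantile(ratings$care_content, c(0.05, 0.95)),
  age = mean(ratings$age), educ = mean(ratings$educ),
  sex = "Female", party = "Democrat", topic = "Abortion", polarity = 0
)
predict(fit_care, newdata = profile, type = "probs")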

The full regression results can be found in Appendix D. Figures 1 and 2 show the predicted probabilities generated from the model. Figure 1 shows the predicted probabilities of a rater reporting an extreme emotional reaction to the sentence (either "Very Negative" or "Very Positive"), and Figure 2 shows the predicted probability of giving a non-neutral emotional response.20 The solid lines show how the probabilities change for individuals who place relatively more emphasis on a particular foundation (the mean of the raters plus 1.5 standard deviations). The dashed lines show the predicted probabilities for individuals who place less emphasis on a particular foundation (mean minus 1.5 standard deviations).

For almost every foundation (excepting the Sanctity foundation), there is the expected positive interaction between the rater's score on the relevant moral foundation and the categorization of the sentence as belonging to that moral domain. The coders reported having more intense emotional reactions to the sentences that touched on particular domains of morality that they care about. The result is statistically significant for the Care, Loyalty, and Respect foundations. For sentences belonging to the Fairness21 foundation, there was no statistically significant difference between individuals who are high and low on the foundation and their emotional reaction to the content. The Sanctity/Degradation foundation actually showed the opposite pattern.22

20 Predicted probabilities were calculated by holding all other variables in the model at their mean values. I used a bootstrapping procedure to calculate the predicted probabilities as the delta method was giving nonsensical results (probabilities greater than 1 and less than 0). The models were estimated with clustered standard errors by rater to account for the nested structure of the data.

21 Further evidence that there might be something wrong with the way that the Fairness/Cheating dimension is being operationalized.

[Figure 1: five panels (Care, Fairness, Loyalty, Respect, Sanctity); x-axis: low to high content in each domain; y-axis: Prob(Extreme Emotional Response); solid and dashed lines for raters high and low on each foundation.]

Figure 1: Emotional Reactions by Rater Moral Foundations and Topic Categorization. The plots show the predicted probability of giving the most extreme emotional response given the rater's moral foundation score and the categorization of the sentence content.

[Figure 2: same layout as Figure 1; y-axis: predicted probability of a non-neutral emotional response.]

Figure 2: Emotional Reactions by Rater Moral Foundations and Topic Categorization. The plots show the predicted probability of giving a non-neutral emotional response given the rater's moral foundation score and the categorization of the sentence content.

As a test of the robustness of the above findings, we can run a series of placebo-like analyses. These tests involve looking for the "wrong" interaction. For example, we could look at the interaction between a sentence that was coded as belonging to the Fairness foundation and the coder's score on the Care foundation. The interaction between these mismatched foundations should not be statistically significant. When these alternative analyses are run, almost none of the interactions emerge as significant. There were only two exceptions out of the twenty different tests performed. There was a statistically significant and negative interaction between sentences that were rated as belonging to the Respect domain and the Care/Harm moral foundation (i.e., individuals who scored highly on the Care/Harm foundation were significantly less likely to register an emotional response to sentences coded as belonging to the Respect domain). Additionally, there was a statistically significant positive relationship between the Sanctity foundation and sentences that were coded as belonging to the Respect foundation.23

22 This may be due to the rarity of language that was classified as belonging to the Sanctity foundation.

23 This relationship is not entirely unexpected due to the high level of conceptual overlap between these two foundations. Indeed, I worried that these placebo tests would produce more statistically significant relationships, especially between the Care and Fairness foundations and the three binding foundations.

4.2 The Emotional Content of Moral Evaluation

In line with the intuitionist model of moral judgment, there is a strong relationship between an individual's emotional reaction to a text and her moral classification. A simple bar graph reveals a strong relationship between an individual's emotional reaction to a particular sentence and his or her moral evaluation of that same text. Figure 3 shows the bivariate relationship.

[Figure 3: bar graph; x-axis: emotional reaction from "Very Negative" through "Neutral" to "Very Positive"; y-axis: average moral evaluation from "Not at all Moral" to "Very Moral".]

Figure 3: Moral Evaluation by Emotional Reaction. Bars show the average moral classifications for each scale point of the emotional reaction item.

[Figure 4: line plot; x-axis: Emotional Reaction from −1.0 to 1.0; y-axis: Probability of Moral Evaluation from 0.0 to 1.0.]

Figure 4: Moral Evaluation by Emotional Reaction. Predicted probability of maximum moral evaluation across the range of emotional reactions. The solid line shows the predicted value and the shaded region shows the 95% confidence interval.

This bivariate relationship holds up when controls for rater demographics (age, sex, education, partisanship) and characteristics of the sentences (topic, morally charged words, affectively laden words) are added to the model. The full results of the regression are available in Appendix E. In Figure 4, I show the predicted probability of registering the maximum moral evaluation (the sentence connects to the rater's "own fundamental ideas about right and wrong" "a great deal") across the range of emotional reactions (coded from "Very negative" = -1 to "Very positive" = 1).

The model allows for emotion to have an asymmetric effect, and indeed an asymmetry emerges. Intense positive reactions are more likely to lead to moral classifications than intense negative reactions. This difference is statistically significant, and substantively rather large. Raters who react "very negatively" to a sentence have a predicted 0.27 probability of rating that sentence as connecting "a great deal" to their own moral code. This compares with a 0.62 probability for raters who react "very positively." The 95 percent confidence interval for this difference ranges from a lower bound of 27 points to an upper bound of 45 points.24
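One simple way to allow emotion to have an asymmetric effect is to enter the positive and negative parts of the reaction scale as separate regressors, as in the hedged sketch below. The `ratings` data frame and its column names are assumptions for illustration; the chapter's actual specification may differ in its details.

# Sketch: moral evaluation regressed on emotional reaction, letting the positive
# and negative halves of the reaction scale (coded -1 to 1) take different slopes.
# Assumed columns in `ratings`: moral_eval (ordered factor, "Not at all" ...
# "A great deal"), emotion (numeric, -1 to 1), plus the usual controls.
library(MASS)

ratings$emo_pos <- pmax(ratings$emotion, 0)   # zero for negative reactions
ratings$emo_neg <- pmin(ratings$emotion, 0)   # zero for positive reactions

fit_asym <- polr(
  moral_eval ~ emo_pos + emo_neg +
    age + educ + sex + party + topic + polarity,
  data = ratings, method = "probit", Hess = TRUE
)
summary(fit_asym)

# Predicted probabilities of the highest category at the scale endpoints
# (other covariates at illustrative values), analogous to the 0.27 vs. 0.62
# comparison reported above.
endpoints <- data.frame(
  emo_pos = c(0, 1), emo_neg = c(-1, 0),
  age = mean(ratings$age), educ = mean(ratings$educ),
  sex = "Female", party = "Democrat", topic = "Abortion", polarity = 0
)
predict(fit_asym, newdata = endpoints, type = "probs")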

As expected, emotional reactions are strongly related to moral judgments. The asymmetry in positive and negative affect is perhaps a little surprising. Especially given what we know about loss aversion and the general primacy of negative affect, we might have expected the asymmetry to run in the other direction, with negative affect being a more powerful moral motivator. One potential explanation for this counterintuitive asymmetry might be found in the wording of the moral judgment item. Respondents were asked, "To what extent does this sentence connect to your own beliefs about fundamental questions of right and wrong?" It is possible that some respondents interpreted this item as pertaining only to statements that they agreed with. Future research along these lines might consider prefacing this item with something like, "Regardless of whether you personally agree or disagree with the statement, ...."

4.3 Partisan Bias in Moral Evaluations

My third hypothesis relates to the ways in which individuals incorporate partisan cues into their moral evaluations. For the last five sentences presented to each subject, I provided the partisanship of the statement's author. This allows us to investigate how partisan biases might color moral and emotional reactions to the content.

One way in which this bias might manifest itself is in the emotional reactions to the text itself. It could be that, when presented with an in-party cue, partisans will have a stronger reaction than when presented with an out-party cue or no cue. This is straightforward to test with regression, and it does turn out to be the case. Holding other factors25 constant at their mean values, an in-party cue makes it about 4 percentage points more likely that a rater will have a "very strong" emotional reaction and about 8 percentage points less likely that a rater will report having no emotional reaction. This effect isn't particularly large, but it is striking that the mere mention of partisanship has a robust effect on emotional reactions. There is no corresponding effect for an out-party cue.

24 The difference in predicted probabilities was generated through a bootstrapped simulation holding all other variables at their mean values.
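A hedged sketch of this cue regression is shown below: emotional-reaction intensity is regressed on indicators for in-party and out-party cues, with uncued ratings as the baseline. The variable names are assumptions; the chapter's reported models additionally use rater-clustered standard errors and, as a robustness check, sentence-level fixed effects (footnote 25).

# Sketch: effect of partisan source cues on emotional-reaction intensity.
# Assumed columns in `ratings`: emotion_intensity (ordered factor), cue ("none",
# "Democrat", or "Republican"), party (the rater's party), plus the usual controls.
library(MASS)

ratings$in_party_cue  <- as.numeric(ratings$cue != "none" & ratings$cue == ratings$party)
ratings$out_party_cue <- as.numeric(ratings$cue != "none" & ratings$cue != ratings$party)

fit_cue <- polr(
  emotion_intensity ~ in_party_cue + out_party_cue +   # uncued ratings are the baseline
    age + educ + sex + party + topic + polarity,
  data = ratings, method = "probit", Hess = TRUE
)
summary(fit_cue)

# Interacting the cue indicators with the rater's party (in_party_cue * party,
# out_party_cue * party) lets the effects differ for Democrats and Republicans,
# as in Figures 5 and 6; the same specification with the moral-evaluation item
# as the outcome corresponds to the analysis described below.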

We could similarly examine the effect of the party cue on moral evaluations. Above, I showed that emotional reactions were strongly related to moral evaluations, so we would expect to find at least an indirect effect of partisan source cues on moral evaluations operating through the demonstrably positive effect of in-party cues on emotional reactions. It also may be the case that there is a direct effect of party source cues on moral evaluations.

Fitting a regression of moral evaluations on party source cues reveals the opposite pattern of effects from the previous analysis. Whereas same-party cues seemed to cause more extreme emotional reactions, it is out-party cues that do all of the work in this model. Receiving an out-party cue, all else constant, was associated with a drop in the probability of saying that a particular sentence was "quite a bit" or "a great deal" related to one's basic sense of morality. Unlike the emotional reactions model, the effect of partisan cues on moral evaluations seems to be concentrated among Republican identifiers.

25 The by-now standard set of rater-level (age, education, sex, partisanship) and sentence-level (topic, moral words, polarity) controls. Full regression results are available in the appendix. The results presented were fit with standard errors clustered at the rater level. The findings are robust to including sentence-level fixed effects.

[Figure 5: x-axis: No Cue − Dem Cue and No Cue − R Cue; y-axis: change in Prob(Extreme Emotional Response), from −0.15 to 0.15; separate estimates for Democratic and Republican subjects.]

Figure 5: Partisan Cues and Emotional Reactions. The plot shows the predicted difference in the probability of giving an extreme emotional reaction when receiving a partisan cue. The solid line shows the predicted effect for Republicans and the dashed lines show the predicted effects for Democrats. The vertical bars give the 95% confidence intervals.

[Figure 6: same layout as Figure 5; y-axis: change in Prob(Extreme Moral Classification).]

Figure 6: Partisan Cues and Moral Evaluations. The plot shows the predicted difference in the probability of giving an extreme moral evaluation when receiving a partisan cue. The solid line shows the predicted effect for Republicans and the dashed lines show the predicted effects for Democrats. The vertical bars give the 95% confidence intervals.


5 Discussion and Conclusion

In this chapter, I have investigated some of the factors that lead people to have divergent moral judgments of the same stimuli. If we accept the premise that moral attitudes are important in politics – and there seems to be a great deal of evidence that this is the case (Skitka, 2010; Ryan, 2014a; Tetlock et al., 2000) – we are naturally led to turn our attention to the causes of moral judgments. In the preceding analyses, I've argued that there are at least three things at work: 1) stable individual differences cause differential emotional reactions, 2) emotional reactions cause moral judgments, and 3) social identities systematically affect both emotional reactions and moral judgments.

I have shown that individuals are sensitive to morally congruent rhetoric. Differences in moral perception are at least partly explainable by appealing to individual differences in a small number of core moral domains. These individual differences in moral concern (operationalized here with the moral foundations) are systematically related to variation in the emotional reactions that individuals experience when exposed to political arguments.

My results were also consistent with the claim that emotional reactions cause moral judgments. One of the strongest predictors of individual moral classifications was their self-reported emotional reactions. These findings are consistent with an intuitionist understanding of moral judgment.26

Politics occurs in the context of social identities and relationships, and my results show

that relevant social identities (in this case, partisanship) significantly affect emotional and

moral reactions. Because source cues were randomly presented, I am more confident

about the causal story here.27 Relevant information — knowing that the speaker is on

26The results I present here are not able to settle questions about the direction of the causal arrow by any means. Without credibly manipulating emotions, I do not have evidence of causation.

27The mechanism by which this effect occurs remains less clear. There are at least two competing explanations. First, it could be that seeing one’s own party causes a flash of incidental affect that is then incorporated into downstream political reasoning. Under this account, we would expect to see the strongest positive effect among those people who have the warmest feelings about their political party. Alternatively, it could be that individuals are able to use the party label as a heuristic to infer more about relatively vague statements. If this alternative account were true, we might expect to see an interaction between the content of the statement and the partisanship of the speaker.


your political ”team” — seems to lower one's guard. This leads partisans to report more intense emotional reactions when presented with statements from members of their political party.

In future work, it will be important to delve more into each of these claims. On the

idea that emotional reactions cause moral judgments, it may be the case that personality

factors such as emotional stability mediate the relationship between stimulus and moral

judgment. If it is true that emotions are causal agents in moral judgment, any factors that

influence the degree to which individuals experience their emotions are likely to have an

effect on moral judgment (and by extension political tolerance and activism).

Another rich vein of future research lies in the ways in which politicians strategically

use moral language to reach their supporters. As we understand more about the incentives facing politicians in their use of moral rhetoric, we might better understand the

natural bounds placed on elite influence. If it is true that moral intuitions are stable and

relatively fixed (as I contend), political strategy must work within the bounds set by mass

dispositions. Moral rhetoric is not going away any time soon in politics. To the extent

that we fail to understand the dynamics at play in individual minds and mass publics,

our understanding of political behavior will be that much poorer.

In the next chapter, I turn to the patterns of moral language in politics. Drawing on

the results from the model above, I estimate the trends in moral language over time and

across a variety of different issues.



References

Barker, David C and James D Tinnick. 2006. “Competing visions of parental roles and ideological constraint.” American Political Science Review 100(2):249–263.

Ben Nun, Pazit. 2010. The Moral Public: Moral Judgment and Political Attitudes. PhD thesis, Stony Brook University.

Berinsky, Adam J, Gregory A Huber and Gabriel S Lenz. 2012. “Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk.” Political Analysis 20(3):351–368.

Bloom, Pazit Ben-Nun. 2013. “The Public's Compass: Moral Conviction and Political Attitudes.” American Politics Research 41(6):937–964.

Clifford, Scott and Jennifer Jerit. 2013. “How Words Do the Work of Politics: Moral Foundations Theory and the Debate over Stem Cell Research.” The Journal of Politics 75(3):659–671.

Converse, Philip E. 1964. “The Nature of Belief Systems in Mass Publics.” Ideology and Discontent pp. 206–61.

Eskine, Kendall J, Natalie A Kacinik and Jesse J Prinz. 2011. “A Bad Taste in the Mouth: Gustatory Disgust Influences Moral Judgment.” Psychological Science 22(3):295–299.

Feldman, Stanley. 1988. “Structure and consistency in public opinion: The role of core beliefs and values.” American Journal of Political Science pp. 416–440.

Feldman, Stanley and Karen Stenner. 1997. “Perceived threat and authoritarianism.” Political Psychology 18(4):741–770.

Gerber, Alan S, Gregory A Huber, David Doherty, Conor M Dowling and Shang E Ha. 2010. “Personality and political attitudes: Relationships across issue domains and political contexts.” American Political Science Review 104(1):111–133.

Giner-Sorolla, Roger. 2013. Judging passions: Moral emotions in persons and groups. Psychology Press.

Goren, Paul. 2001. “Core principles and policy reasoning in mass publics: A test of two theories.” British Journal of Political Science 31(1):159–177.

Goren, Paul. 2013. On voter competence. Oxford University Press.

Graham, Jesse, Jonathan Haidt and Brian A Nosek. 2009. “Liberals and conservatives rely on different sets of moral foundations.” Journal of Personality and Social Psychology 96(5):1029.


Haidt, Jonathan. 2001. “The emotional dog and its rational tail: A social intuitionist approach to moral judgment.” Psychological Review 108(4):814.

Haidt, Jonathan. 2007. “The new synthesis in moral psychology.” Science 316(5827):998–1002.

Haidt, Jonathan. 2012. The righteous mind: Why good people are divided by politics and religion. Random House LLC.

Haidt, Jonathan, Jesse Graham and Craig Joseph. 2009. “Above and below left–right: Ideological narratives and moral foundations.” Psychological Inquiry 20(2-3):110–119.

Hibbing, John R, Kevin B Smith and John R Alford. 2013. Predisposed: Liberals, Conservatives, and the Biology of Political Differences. Routledge.

Hillygus, D Sunshine and Todd Shields. 2008. The persuadable voter. Princeton University, New Jersey.

Hochschild, Jennifer L. 1986. What's fair: American beliefs about distributive justice. Harvard University Press.

Inbar, Yoel, David A Pizarro and Paul Bloom. 2009. “Conservatives are more easily disgusted than liberals.” Cognition and Emotion 23(4):714–725.

Inbar, Yoel, David Pizarro, Ravi Iyer and Jonathan Haidt. 2012. “Disgust sensitivity, political conservatism, and voting.” Social Psychological and Personality Science 3(5):537–544.

Inglehart, Ronald. 1997. Modernization and postmodernization: Cultural, economic, and political change in 43 societies. Vol. 19. Cambridge University Press.

Jacoby, William G. 2006. “Value choices and American public opinion.” American Journal of Political Science 50(3):706–723.

Janoff-Bulman, Ronnie. 2009. “To provide or protect: Motivational bases of political liberalism and conservatism.” Psychological Inquiry 20(2-3):120–128.

Jost, John T. 2006. “The end of the end of ideology.” American Psychologist 61(7):651.

Kinder, Donald R and Cindy D Kam. 2010. Us against them: Ethnocentric foundations of American opinion. University of Chicago Press.

Koleva, Spassena P, Jesse Graham, Ravi Iyer, Peter H Ditto and Jonathan Haidt. 2012. “Tracing the threads: How five moral concerns (especially Purity) help explain culture war attitudes.” Journal of Research in Personality 46(2):184–194.

Kollman, Ken, John H Miller and Scott E Page. 1992. “Adaptive parties in spatial elections.” American Political Science Review 86(4):929–937.


Kollman, Ken, John H Miller and Scott E Page. 1998. “Political parties and electoral landscapes.” British Journal of Political Science 28(1):139–158.

Lakoff, George. 2010. Moral politics: How liberals and conservatives think. University of Chicago Press.

Leeper, Thomas J and Rune Slothuus. 2014. “Political Parties, Motivated Reasoning, and Public Opinion Formation.” Political Psychology 35(S1):129–156.

Leonard, Diana J, Wesley G Moons, Diane M Mackie and Eliot R Smith. 2011. “We're mad as hell and we're not going to take it anymore: Anger self-stereotyping and collective action.” Group Processes & Intergroup Relations 14(1):99–111.

Li, Qiong and Marilynn B Brewer. 2004. “What does it mean to be an American? Patriotism, nationalism, and American identity after 9/11.” Political Psychology 25(5):727–739.

Lodge, Milton and Charles S Taber. 2013. The rationalizing voter. Cambridge University Press.

Mondak, Jeffery J, Matthew V Hibbing, Damarys Canache, Mitchell A Seligson and Mary R Anderson. 2010. “Personality and civic engagement: An integrative framework for the study of trait effects on political behavior.” American Political Science Review 104(1):85–110.

Pagano, Sabrina J and Yuen J Huo. 2007. “The Role of Moral Emotions in Predicting Support for Political Actions in Post-War Iraq.” Political Psychology 28(2):227–255.

Paolacci, Gabriele and Jesse Chandler. 2014. “Inside the Turk: Understanding Mechanical Turk as a Participant Pool.” Current Directions in Psychological Science 23(3):184–188.

Pizarro, David A and Paul Bloom. 2003. “The intelligence of the moral intuitions: A comment on Haidt (2001).”

Plummer, Martyn et al. 2003. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003). March. pp. 20–22.

Prinz, Jesse. 2006. “The emotional basis of moral judgments.” Philosophical Explorations 9(1):29–43.

Rai, Tage Shakti and Alan Page Fiske. 2011. “Moral psychology is relationship regulation: Moral motives for unity, hierarchy, equality, and proportionality.” Psychological Review 118(1):57.

Roccas, Sonia, Shalom H Schwartz and Adi Amit. 2010. “Personal value priorities and national identification.” Political Psychology 31(3):393–419.


Ryan, Timothy J. 2012. Flipping the Moral Switch: The Divisive Core of Moral Frames. Midwest Political Science Association.

Ryan, Timothy J. 2014a. “No Compromise: Political Consequences of Moralized Attitudes.” Unpublished Working Paper.

Ryan, Timothy J. 2014b. “Reconsidering Moral Issues in Politics.” Journal of Politics 76(2):380–397.

Schnall, Simone, Jennifer Benton and Sophie Harvey. 2008. “With a clean conscience: Cleanliness reduces the severity of moral judgments.” Psychological Science 19(12):1219–1222.

Schwartz, Shalom H, Gian Vittorio Caprara and Michele Vecchione. 2010. “Basic personal values, core political values, and voting: A longitudinal analysis.” Political Psychology 31(3):421–452.

Sinnott-Armstrong, Walter. 2008. Moral psychology. Vol. 1–3.

Skitka, Linda J. 2010. “The psychology of moral conviction.” Social and Personality Psychology Compass 4(4):267–281.

Skitka, Linda J, Christopher W Bauman and Edward G Sargis. 2005. “Moral conviction: Another contributor to attitude strength or something more?” Journal of Personality and Social Psychology 88(6):895.

Sniderman, Paul M and Edward H Stiglitz. 2012. The reputational premium: A theory of party identification and policy reasoning. Princeton University Press.

Stenner, Karen. 2005. The authoritarian dynamic. Cambridge University Press.

Tajfel, Henri and John C Turner. 1979. “An integrative theory of intergroup conflict.” The social psychology of intergroup relations 33(47):74.

Terrizzi Jr, John A, Natalie J Shook and W Larry Ventis. 2010. “Disgust: A predictor of social conservatism and prejudicial attitudes toward homosexuals.” Personality and Individual Differences 49(6):587–592.

Tetlock, Philip E, Orie V Kristel, S Beth Elson, Melanie C Green and Jennifer S Lerner. 2000. “The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals.” Journal of Personality and Social Psychology 78(5):853.

Valdesolo, Piercarlo and David DeSteno. 2006. “Manipulations of emotional context shape moral judgment.” Psychological Science 17(6):476–477.

Valentino, Nicholas A, Ted Brader, Eric W Groenendyk, Krysha Gregorowicz and Vincent L Hutchings. 2011. “Election night's alright for fighting: The role of emotions in political participation.” Journal of Politics 73(1):156–170.


Weber, Christopher R and Christopher M Federico. 2013. “Moral foundations and heterogeneity in ideological preferences.” Political Psychology 34(1):107–126.

Wilson, Edward O. 2012. The social conquest of earth. WW Norton & Company.


A Moral Foundations Questionnaire wordings

I slightly adapted the set of Moral Foundations Questionnaire items for this study. Specifically, I excluded a few items that make explicit mentions of God.

The first part of the MFQ asks individuals: ”When you decide whether something is

right or wrong, to what extent are the following considerations relevant to your thinking?” Respondents are presented with a series of 6-point Likert items from ”Not at all

relevant” to ”Extremely relevant.”

Respondents were presented with the items in random order, but for ease of presentation, I have organized them into the different domains.

Care/Harm Items

1. Whether or not someone cared for someone weak or vulnerable

2. Whether or not someone was cruel

3. Whether or not someone suffered emotionally

Fairness/Cheating Items

1. Whether or not some people were treated differently than others.

2. Whether or not someone was denied his or her rights

3. Whether or not someone acted unfairly

Loyalty/Betrayal Items

1. Whether or not someone showed love for his or her country

2. Whether or not someone did something to betray his or her group

3. Whether or not someone showed a lack of loyalty

Respect/Subversion Items

1. Whether or not someone conformed to the traditions of society

2. Whether or not an action caused chaos or disorder

Sanctity/Degradation Items


1. Whether or not someone violated standards of purity and decency

2. Whether or not someone did something disgusting

The second part of the MFQ asks individuals to indicate their level of agreement or

disagreement with a set of statements. Respondents are presented with a series of 6-point

Likert items from ”Strongly disagree” to ”Strongly agree.”

Care/Harm Items

1. Compassion for those who are suffering is the most crucial virtue

2. One of the worst things a person could do is hurt a defenseless animal

3. It can never be right to kill a human being

Fairness/Cheating Items

1. I think it is morally wrong that children of rich parents inherit a lot of money whilechildren of poor parents inherit very little.

2. Justice is the most important requirement for a society

Loyalty/Betrayal Items

1. People should be loyal to their family members, even when they have done some-thing wrong

2. It is more important to be a team player than to express oneself

Respect/Subversion Items

1. Men and women each have different roles to play in society

2. Respect for authority is something all children need to learn

3. If I were a soldier and disagreed with my commanding officer, I would obey anywaybecause that is my duty

Sanctity/Degradation Items

1. People should not do things that are disgusting, even if no one is harmed

2. I would call some acts wrong on the grounds that they are unnatural
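The chapter does not spell out how these item responses are aggregated, so the sketch below is only a minimal illustration of one common convention: averaging each foundation's items into a single score. The item keys, the abbreviated item list, and the 0–5 coding of the 6-point scales are my own illustrative assumptions, not taken from the study.

# Illustrative only: aggregates hypothetical MFQ responses into per-foundation
# scores by averaging that foundation's items. Item keys and 0-5 coding are
# assumptions for the example, not the chapter's scoring rule.
from statistics import mean

FOUNDATION_ITEMS = {
    "care":     ["cared_vulnerable", "cruel", "suffered", "compassion"],
    "fairness": ["treated_differently", "denied_rights", "acted_unfairly", "justice"],
    "loyalty":  ["love_country", "betray_group", "lack_loyalty", "team_player"],
    "respect":  ["traditions", "chaos_disorder", "respect_authority", "obey_officer"],
    "sanctity": ["purity_decency", "disgusting_act", "unnatural"],
}

def foundation_scores(responses):
    """Average each foundation's items (0 = lowest endorsement, 5 = highest)."""
    return {f: mean(responses[item] for item in items)
            for f, items in FOUNDATION_ITEMS.items()}

# Example respondent with made-up answers:
r = {item: 3 for items in FOUNDATION_ITEMS.values() for item in items}
r["compassion"], r["justice"] = 5, 5
print(foundation_scores(r))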


B JAGS Model for sentence classification

model {
  ### Domain categorization
  for (D in 1:N.domain) {

    ### Coder-level effects: function of demographics and moral foundations
    for (i in 1:N.coder) {
      beta[i, D] ~ dnorm(mu.beta[i, D], tau.beta[D])
      mu.beta[i, D] <- inprod(X.coder[i, ], b.coder[, D])
      tau[i, D] ~ dgamma(1, 1)
    }

    ### Sentence-level effects: function of topic, party of speaker,
    ### and domain-specific moral words
    for (j in 1:N.sent) {
      alpha[j, D] ~ dnorm(mu.alpha[j, D], tau.alpha[D])
      mu.alpha[j, D] <- inprod(X.sent[j, ], b.sent[, D])

      ### Coder-sentence interaction: partisan-cue adjustment
      for (i in 1:N.coder) {
        delta[i, j, D] <- p.cue[1, D] * cue.same[j, i] + p.cue[2, D] * cue.diff[j, i]
      }
    }
  }

  ### Likelihood: each rating is a categorical draw over the domains
  for (j in 1:N.obs) {
    domain[j] ~ dcat(prob[j, 1:7])
    for (D in 1:N.domain) {
      prob.star[j, D] <- exp(alpha[inds[j, 1], D] + beta[inds[j, 2], D] +
                             delta[inds[j, 2], inds[j, 1], D])
    }
    prob[j, 1:N.domain] <- prob.star[j, 1:N.domain] / sum(prob.star[j, 1:N.domain])
  }

  ### Priors for second-level coefficients
  for (D in 1:N.domain) {
    for (j in 1:K.coder) {
      b.coder[j, D] ~ dnorm(0, 1)
    }
    for (j in 1:K.sent) {
      b.sent[j, D] ~ dnorm(0, 1)
    }
    for (j in 1:2) {
      p.cue[j, D] ~ dnorm(0, 1)
    }
    tau.beta[D] ~ dgamma(.1, .1)
    tau.alpha[D] ~ dgamma(.1, .1)
  }
}
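Read generatively, the listing above says that each rating is a categorical draw whose probabilities come from a softmax over domain-specific scores that add a sentence effect (alpha), a rater effect (beta), and a partisan-cue adjustment (delta). The following is a minimal Python sketch of that softmax step only; the function name and all numbers are placeholders of mine rather than estimates from the fitted model.

# Minimal numpy sketch of the categorization probabilities implied by the
# JAGS model above. All coefficient values here are placeholders.
import numpy as np

def domain_probabilities(alpha_j, beta_i, delta_ij):
    """Softmax of alpha[j, :] + beta[i, :] + delta[i, j, :] across domains."""
    score = alpha_j + beta_i + delta_ij
    score = score - score.max()          # numerical stability
    p = np.exp(score)
    return p / p.sum()

n_domain = 7                             # matches dcat(prob[j, 1:7]) in the model
rng = np.random.default_rng(0)
alpha_j = rng.normal(size=n_domain)      # sentence-level effects (placeholder)
beta_i = rng.normal(size=n_domain)       # rater-level effects (placeholder)
delta_ij = 0.2 * np.ones(n_domain)       # cue adjustment (placeholder)

p = domain_probabilities(alpha_j, beta_i, delta_ij)
print(p.round(3), p.sum())               # probabilities over the 7 categories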

C Categorization Results

The categorization model includes three components: sentence-level factors, rater-level

factors, and a sentence-rater interaction term. The model was fit in JAGS 3.0 (Plummer

et al., 2003). Geweke convergence diagnostics indicate that the Gibbs sampler had converged to the target posterior distribution. The tables below report the coefficients for each

component of the model.
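For readers unfamiliar with the diagnostic, the sketch below shows the idea behind a Geweke check on one chain of draws: compare the mean of an early segment with the mean of a late segment, scaled by their standard errors. This is a deliberately simplified version under my own assumptions; the standard diagnostic (as implemented in common MCMC packages) uses spectral-density estimates of the variances, and the chain here is simulated rather than taken from the fitted model.

# Simplified Geweke-style check: compare the mean of the first 10% of a chain
# with the mean of the last 50%. Plain sample variances are used only to keep
# the illustration short; the real diagnostic uses spectral-density estimates.
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    a = chain[: int(first * len(chain))]
    b = chain[-int(last * len(chain)):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

rng = np.random.default_rng(1)
chain = rng.normal(size=5000)            # stand-in for posterior draws of one parameter
print(abs(geweke_z(chain)) < 1.96)       # True suggests no obvious drift in the chain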

Table 2 contains the results of the sentence-level factors.

Table 3 contains the results of the rater-level factors.

Table 4 contains the coefficients for the sentence/rater interactions.


                       Care      Fairness   Loyalty    Respect    Sanctity
Republican            -0.024     -0.198*     0.144      0.103      0.334
Topic: Abortion        0.532*    -0.336*    -0.268*    -0.562*     1.092*
Topic: Defense         0.192     -0.392*     0.277*     0.081     -0.507*
Polarity              -0.58*     -0.015      0.752*     0.531*    -0.753*
Polarity-squared       0.428     -0.578*    -0.287      0.03       0.272
Care Words (pos)       0.378*    -0.171      0.207     -0.204      0.079
Care Words (neg)      -0.078     -0.1        0.049     -0.05      -0.004
Fair Words (pos)      -0.302      0.407*    -0.473*    -0.45*     -0.196
Fair Words (neg)      -0.414      0.89*     -0.98      -0.737     -0.089
Loyal Words (pos)     -0.005     -0.05       0.135     -0.049     -0.166
Loyal Words (neg)     -0.449      0.4        0.18      -1.198*    -1.111
Respect Words (pos)    0.143     -0.07      -0.12       0.281*    -0.19
Respect Words (neg)   -0.69*      0.009      0.503      0.46      -0.525
Sanctity Words (pos)   0.549*    -0.298     -1.494*    -0.604      0.647
Sanctity Words (neg)   0.63      -0.281     -0.868     -0.733      0.382
General Words (pos)   -0.356*     0.197      0.231     -0.13      -0.641
General Words (neg)   -0.458     -0.184      0.279      0.115      0.648

Table 2: Coefficient estimates for sentence-level factors in the categorization model. Asterisks indicate statistical significance at the 0.05 level (95 percent credible intervals do not include zero).

                 Care      Fairness   Loyalty    Respect    Sanctity
Republican       0.049     -0.022      0.015     -0.018     -0.205
Age             -0.011     -0.028      0.078      0.081     -0.135
Education       -0.06       0.092*    -0.029      0.054     -0.07
Female           0.064      0.004      0.082     -0.173*    -0.178
MF: Care         0.044      0.03      -0.027     -0.041     -0.072
MF: Fairness    -0.034      0.029     -0.02       0.033      0.085
MF: Loyalty     -0.009      0.008      0.131*     0.05      -0.028
MF: Respect     -0.015     -0.01       0.036      0.115     -0.045
MF: Sanctity    -0.097      0.071      0.103      0.019      0.191

Table 3: Coefficient estimates for rater-level factors in the categorization model. Asterisks indicate statistical significance at the 0.05 level (95 percent credible intervals do not include zero).


                 Care      Fairness   Loyalty    Respect    Sanctity
In-party cue    -0.08       0.019      0.035     -0.04      -0.549
Out-party cue   -0.138      0.181*     0.094      0.036     -0.127

Table 4: Coefficient estimates for rater/sentence interaction factors in the categorization model. Asterisks indicate statistical significance at the 0.05 level (95 percent credible intervals do not include zero).
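As a concrete illustration of the flagging rule used in Tables 2 through 4, the sketch below computes a central 95 percent credible interval from a vector of posterior draws and reports whether it excludes zero. The draws are simulated stand-ins of mine, not output from the fitted model.

# How the asterisks in Tables 2-4 can be derived from posterior draws: a
# coefficient is flagged when its central 95% credible interval excludes zero.
import numpy as np

def flag_credible(draws, level=0.95):
    lo, hi = np.percentile(draws, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return "*" if (lo > 0 or hi < 0) else ""

rng = np.random.default_rng(2)
print(flag_credible(rng.normal(0.4, 0.1, 4000)))   # clearly positive draws -> "*"
print(flag_credible(rng.normal(0.0, 0.3, 4000)))   # interval straddles zero -> ""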

D Moral Domain and Rater Moral Foundations Regression Tables

See Table 5.

E Emotional Reaction and Moral Evaluation Regression Table

See Table 6.

F Partisan Cues Regression

See Table 7.


Table 5: Ordered Probit: Emotional Reaction

                      Care        Fairness    Loyalty     Respect     Sanctity
Subj: MF             −0.130***    0.177***   −0.117**     0.105***    0.018
                     (0.042)     (0.053)     (0.046)     (0.038)     (0.031)
Sent: Domain          1.340***    1.201***    2.192***   −0.354       2.256***
                     (0.216)     (0.271)     (0.530)     (0.507)     (0.398)
MF * Domain           0.461***   −0.039       1.741***    1.051**    −0.761*
                     (0.149)     (0.138)     (0.490)     (0.519)     (0.450)
Subj: Age             0.014       0.016       0.014       0.009       0.013
                     (0.021)     (0.021)     (0.021)     (0.021)     (0.021)
Subj: Educ            0.005       0.010       0.007      −0.007       0.007
                     (0.017)     (0.017)     (0.017)     (0.017)     (0.017)
Subj: Female         −0.062      −0.065      −0.073*     −0.056      −0.077*
                     (0.044)     (0.044)     (0.044)     (0.044)     (0.044)
Subj: Republican     −0.035       0.068      −0.003      −0.097**    −0.009
                     (0.050)     (0.046)     (0.047)     (0.049)     (0.050)
Sent: Republican     −0.070      −0.080*     −0.062      −0.038      −0.073*
                     (0.043)     (0.044)     (0.044)     (0.043)     (0.044)
Sent: Abortion        0.370***    0.354***    0.419***    0.352***    0.374***
                     (0.055)     (0.056)     (0.053)     (0.055)     (0.054)
Sent: Defense         0.218***    0.174***    0.218***    0.238***    0.242***
                     (0.055)     (0.059)     (0.057)     (0.054)     (0.054)
Sent: Polarity       −0.657**    −0.715**    −0.917***   −0.588*     −0.701**
                     (0.307)     (0.301)     (0.326)     (0.309)     (0.304)
Sent: Pol-sq         −0.305      −1.048      −0.144       0.023      −0.331
                     (1.597)     (1.631)     (1.588)     (1.580)     (1.596)
Sent: Other Moral     1.240***    1.579***    1.615***    1.452***    1.500***
                     (0.256)     (0.209)     (0.212)     (0.210)     (0.209)
Cue: Same             0.219***    0.222***    0.230***    0.222***    0.227***
                     (0.063)     (0.064)     (0.063)     (0.064)     (0.063)
Cue: Diff.           −0.026      −0.023      −0.026      −0.032      −0.027
                     (0.065)     (0.065)     (0.065)     (0.065)     (0.065)
0|1                   0.749***    1.060***    1.114***    0.806***    0.980***
                     (0.215)     (0.203)     (0.197)     (0.194)     (0.189)
1|2                   1.652***    1.970***    2.018***    1.715***    1.882***
                     (0.217)     (0.204)     (0.199)     (0.195)     (0.191)
2|3                   2.474***    2.799***    2.838***    2.540***    2.703***
                     (0.218)     (0.206)     (0.201)     (0.197)     (0.193)
N                     2653        2653        2653        2653        2653

***p < .01; **p < .05; *p < .1


Table 6: Ordered Probit Results: Moral Evaluation

Variable                       Coefficient   (Std. Err.)

Equation 1: y moral3
Positive Emotion                 2.749**      (0.143)
Negative Emotion                 1.811**      (0.167)
Rater: Age                       0.078*       (0.038)
Rater: Education                 0.005        (0.032)
Rater: Female                    0.151†       (0.078)
Rater: Republican                0.075        (0.078)
Sentence: Republican            -0.210**      (0.058)
Sentence: Abortion               0.119*       (0.060)
Sentence: Defense                0.001        (0.065)
Sentence: Moral Flag             0.174**      (0.055)
Sentence: Polarity              -0.768†       (0.419)
Sentence: Pol-squared            0.005        (2.509)

Equation 2: cut1
Intercept                        0.439**      (0.162)

Equation 3: cut2
Intercept                        1.055**      (0.163)

Equation 4: cut3
Intercept                        1.863**      (0.163)

Equation 5: cut4
Intercept                        2.789**      (0.179)

N                                1971
Log-likelihood                  -2624.439
χ2(12)                           435.659

Significance levels: † : 10%   * : 5%   ** : 1%


Table 7: Ordered Probit: Partisan Cues

                     Emotional Reaction    Moral Evaluation
Rep Cue                −0.034                −0.096
                       (0.087)               (0.085)
Subj: Rep              −0.011                 0.075
                       (0.052)               (0.050)
Dem Cue                 0.215***              0.075
                       (0.082)               (0.081)
Subj: Age               0.015                 0.073***
                       (0.021)               (0.021)
Subj: Educ              0.007                 0.015
                       (0.017)               (0.017)
Subj: Female           −0.074*                0.026
                       (0.044)               (0.043)
Sent: Rep              −0.059                −0.173***
                       (0.050)               (0.049)
Sent: Abortion          0.403***              0.274***
                       (0.052)               (0.051)
Sent: Defense           0.241***              0.171***
                       (0.054)               (0.052)
Sent: Polarity         −0.785***             −0.600**
                       (0.300)               (0.294)
Sent: Pol-squared      −0.045                 0.288
                       (1.587)               (1.516)
Sent: Moral Content     1.541***              1.706***
                       (0.207)               (0.203)
RepCue*RepSubj          0.274**               0.012
                       (0.135)               (0.133)
DemCue*RepSubj         −0.235*               −0.337***
                       (0.129)               (0.127)
0|1                     1.019***                —
                       (0.189)
1|2                     1.920***              1.089***
                       (0.190)               (0.185)
2|3                     2.739***              1.618***
                       (0.193)               (0.186)
3|4                       —                   2.272***
                                             (0.187)
4|5                       —                   2.927***
                                             (0.189)
N                       2653                  2653

***p < .01; **p < .05; *p < .1
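To connect Table 7 back to Figures 5 and 6: predicted differences of the kind plotted there can be recovered from the ordered-probit estimates, since the probability of the most extreme reaction equals one minus the standard normal CDF evaluated at the top cut-point minus the linear predictor. The sketch below applies this to the Dem Cue coefficient and the 2|3 cut-point from the Emotional Reaction column. As an illustrative assumption of mine, all other covariates are set to zero, so the magnitude will not match the figures, which are computed at more realistic covariate values.

# Rough illustration of turning Table 7's ordered-probit estimates into a
# predicted-probability difference: P(most extreme reaction) = 1 - Phi(top cut - x*beta).
from math import erf, sqrt

def Phi(z):                      # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

top_cut = 2.739                  # 2|3 cut-point, Emotional Reaction column of Table 7
dem_cue = 0.215                  # Dem Cue coefficient (Democratic rater, Democratic speaker)

p_no_cue = 1 - Phi(top_cut - 0.0)          # other covariates held at zero (illustration only)
p_dem_cue = 1 - Phi(top_cut - dem_cue)
print(round(p_dem_cue - p_no_cue, 4))      # change in P(extreme emotional reaction)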
