
Study of the day

Misattribution of arousal (Dutton & Aron, 1974)

Can situations influence how much we like someone?

Answer: Yes!

Dutton & Aron’s bridge study

Misattribution of arousal: men who were approached by an attractive interviewer after crossing a high, swaying suspension bridge were more likely to contact her afterward than men who crossed a low, stable bridge; they apparently misattributed their fear-induced arousal to attraction.

Writing good survey questions

Learning objectives

You will learn:
• What makes a quality instrument
• How to write questions
• What validity means
• How to assess validity
• What reliability means
• How to assess reliability
• Scale development

Quality instrument*

• Questions are determined by objectives: resist the temptation to ask questions that are interesting but not relevant to your hypothesis
• Questions are concrete and clearly phrased
• Piloted with potential respondents
• Contains reverse-scored items


Good Questions*

1. Are not double-barreled (ask about only one thing per question)
2. Do not have double negatives
3. Are not phrased in a leading manner
4. Are not biased in their language and do not make participants feel uncomfortable
5. Are concrete and specific

Two types of questions: Open-ended

Advantages
• Gives you the respondent’s view without interference
• Good when unsure what type of answers to expect

Difficulties
• Time-consuming for the respondent
• Answers must be cataloged and interpreted

Mussweiler, T., & Damisch, L. (2008). Going back to Donald: How comparisons shape judgmental priming effects. Journal of Personality and Social Psychology, 95(6), 1295.

Two types of questions: Closed-ended

Advantages
• Easier to interpret
• More conducive to statistical analyses
• Good for large samples
• Answers are usually more reliable and consistent

Difficulties
• Researcher must be familiar enough with the phenomenon to write questions
• May not accurately capture all of respondents’ opinions
• Less illustrative

Response Choices

1. Nominal: categorical, discrete, mutually exclusive, no inherent order

2. Ordinal: inherent order, but the distances between values are not numerically meaningful
– Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree
– Excellent, very good, good, fair, poor

3. Continuous: the numbers themselves are meaningful

Ordinal Measures*

• Include a “do not know” option if appropriate
• Include a neutral response if appropriate
• Balance all responses
• Use a 5- or 7-point numbered scale
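A minimal sketch of coding such ordinal responses numerically, assuming a balanced 5-point agreement scale with the “do not know” option kept out of the numbers (the labels and function name are illustrative):

```python
# Balanced 5-point agreement scale; "Do not know" is coded as missing (None)
# rather than forced onto the numeric scale.
LIKERT_5 = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def code_response(label: str):
    """Map a verbal response to its numeric code, or None for 'Do not know'."""
    if label == "Do not know":
        return None
    return LIKERT_5[label]
```

Keeping “do not know” as missing data preserves the equal spacing of the numbered scale instead of distorting the average.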

Don’t forget demographics

Important for:
1. Describing the sample (especially gender and race)
2. Exploring your findings

Typically asked at the end. Can include gender, race/ethnicity, education, job, age, marital status, geographic place of residence, etc.


Validity

• Assesses the construct of interest: measures what it is supposed to measure
• Cannot be directly measured; must be inferred from evidence

Face Validity

Appears to measure what it is supposed to

Let’s say you want to assess how social someone is. Which questions would be best?
– How often do you socialize with friends?
– How many parties did you attend last month?
– Do you get angry easily?
– How much do you enjoy meeting new people?
– Do you enjoy playing sports?

Construct Validity

Does it relate to other constructs as would be predicted? Assess by correlating it with other measures.

E.g., how social someone is should relate to how much they converse with other people, and maybe how healthy they are.
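Correlating two measures is the core operation here; a self-contained Python sketch (the sociability and conversation scores are made-up illustration data):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up data: self-rated sociability vs. daily conversation counts.
sociability = [3, 5, 2, 4, 1]
conversations = [4, 5, 1, 4, 2]
r = pearson_r(sociability, conversations)  # strongly positive here
```

A high correlation with a conceptually related measure is evidence for construct validity; a near-zero one suggests the instrument is not tapping the intended construct.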

Criterion Validity

How well does the measure predict or estimate real-world outcomes?

E.g., if you ask students how interested they are in majoring in computer science, the measure can be validated by checking how well it predicts who actually majors in computer science.


Reliability

• Consistency in the type of result a test yields (e.g., a reliable person)
• Items need not overlap perfectly, but questions should “hang together”

Measuring Reliability*

• Cronbach’s alpha: inter-item consistency
• Test–retest: administer the same measure twice to the same group
• Equivalent forms: administer two forms of the test to the same group
• Split-half: administer half the measure at a time (odd items vs. even items)
• Inter-rater reliability (for coding qualitative data): how much coders’ answers overlap (percent agreement and kappa)
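Two of these statistics are simple enough to sketch directly; a minimal Python version of Cronbach’s alpha and Cohen’s kappa (the data in the tests are made up):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, aligned by respondent."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    # Alpha rises as item variances are small relative to total-score variance.
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    categories = set(coder_a) | set(coder_b)
    p_chance = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)
```

Kappa of 0 means agreement is no better than chance; percent agreement alone overstates reliability when one category dominates.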

How to improve reliability?

• Use clear and concise measures
• Include more items: the total is less distorted by chance factors
• Develop a scoring plan for qualitative coding


Scale development*

1. Conceptualize the target construct
• What are you trying to measure?
• Literature review
• Open-ended questions

2. Write the items
• Start with a larger pool
• Choose an appropriate response format

3. Pilot the scale
• Analyze reliability (e.g., Cronbach’s alpha; kappa for coded items) and validity (correlations with other measures)
• Factor analysis to see if there are subscales
• Decide which items need to be eliminated

4. Pilot the new measure
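Deciding which items to eliminate is often done with item-rest correlations, i.e., correlating each item with the sum of the remaining items; a sketch in Python (the .30 cutoff is a common rule of thumb, and the pilot data are made up):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def item_rest_correlations(items):
    """Correlate each item with the sum of all the other items."""
    results = []
    for i, item in enumerate(items):
        rest_totals = [sum(other[j] for k, other in enumerate(items) if k != i)
                       for j in range(len(item))]
        results.append(pearson_r(item, rest_totals))
    return results

# Made-up pilot data: three items, five respondents each.
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 5, 4],
         [3, 1, 4, 2, 2]]  # this item does not "hang together" with the rest
flagged = [i for i, r in enumerate(item_rest_correlations(items)) if r < 0.30]
```

Items with low item-rest correlations are candidates for elimination before re-piloting the trimmed scale.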

Available scales*

The easiest thing (and the thing looked upon favorably by reviewers) is to use other people’s scales. You can find these in:

• Scale databases (e.g., http://www.muhlenberg.edu/depts/psychology/Measures.html)
• Google
• Journal articles
• Books of scales

Other Suggestions

For Qualtrics:
• Include a question that asks participants NOT to respond (helps rule out random clicking)

In general:
• Put DVs as close to the manipulation as possible

Congratulations!

You have learned:
• Characteristics of good scales
• How to write good questions
• What validity means and how to assess it
• What reliability means and how to assess it
• The steps involved in developing a scale
