Content Analysis: Reliability
Kimberly A. Neuendorf, Ph.D.
Cleveland State University
Fall 2011


TRANSCRIPT

Page 1: Content Analysis: Reliability

Content Analysis: Reliability

Kimberly A. Neuendorf, Ph.D.
Cleveland State University

Fall 2011

Page 2: Content Analysis: Reliability

Reliability

Generally—the extent to which a measuring procedure yields the same results on repeated trials (Carmines & Zeller, 1979)

Types:
Test-retest: Same people, different times (intracoder reliability)
Alternative-forms: Different people, same time, different measures
Internal consistency: Multiple measures, same construct
Inter-rater/Intercoder: Different people, same measures

Page 3: Content Analysis: Reliability

Index/Scale Construction

Similar to survey or experimental work (e.g., Bond analysis: harm to female, sexual activity)
Need to check internal consistency reliability (e.g., Cronbach's alpha)
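The slide stops at naming Cronbach's alpha; as a rough illustration (not part of the deck), here is a minimal Python sketch of the coefficient for a hypothetical three-item index with invented scores:

```python
import numpy as np

# Hypothetical data: 5 coded units (rows) scored on 3 items (columns)
# intended to form a single index.
items = np.array([
    [1, 2, 2],
    [3, 3, 4],
    [2, 2, 3],
    [4, 5, 4],
    [1, 1, 2],
], dtype=float)

k = items.shape[1]                          # number of items in the index
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed index
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # about .95 for this toy data
```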

Page 4: Content Analysis: Reliability

Intercoder Reliability

Defined: The level of agreement or correspondence on a measured variable among two or more coders

What contributes to good reliability? Careful unitizing, codebook construction, coder training (training, training!)

Page 5: Content Analysis: Reliability

Reliability Subsamples

Pilot and Final reliability subsamples
Because of drift, fatigue, experience
Selection of subsamples:
Random, representative subsample
"Rich Range" subsample
Useful for "rare event" measures
Reliability/variance relationship
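Drawing the random, representative subsample mentioned above is simple to script. The sketch below is an illustration only, with an assumed frame of 1,000 unit IDs and an assumed 10% subsample rate (neither figure comes from the deck):

```python
import random

# Hypothetical sampling frame: 1,000 coded units identified by ID.
all_unit_ids = list(range(1, 1001))

random.seed(2011)                                         # fixed seed for a reproducible subsample
reliability_subsample = random.sample(all_unit_ids, 100)  # assumed 10% random subsample
print(sorted(reliability_subsample)[:10])                 # first few selected unit IDs
```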

Page 6: Content Analysis: Reliability

Intercoder Reliability Statistics - 1

Types:
Agreement
  Percent agreement
  Holsti's
Agreement beyond chance
  Scott's pi
  Cohen's kappa
  Fleiss' multi-coder extension of kappa
  Krippendorff's alpha(s)
Covariation
  Spearman rho
  Pearson r
  Lin's concordance correlation coefficient (rc)
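To make the distinction between raw agreement and agreement beyond chance concrete, here is a small Python sketch (not from the deck, which points to PRAM for the actual calculations) that computes percent agreement and Cohen's kappa for two hypothetical coders; scikit-learn's cohen_kappa_score does the kappa arithmetic:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical nominal codes from two coders on the same 10 units.
coder_a = np.array(["violence", "none", "none", "sex", "none",
                    "violence", "none", "sex", "none", "none"])
coder_b = np.array(["violence", "none", "sex", "sex", "none",
                    "none", "none", "sex", "none", "none"])

percent_agreement = np.mean(coder_a == coder_b)   # simple agreement (0.80 here)
kappa = cohen_kappa_score(coder_a, coder_b)       # agreement corrected for chance
print(f"Percent agreement = {percent_agreement:.2f}")
print(f"Cohen's kappa     = {kappa:.2f}")
```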

Page 7: Content Analysis: Reliability

Reliability Statistics - 2

See handouts on (a) Bivariate Correlation and (b) Pearson's and Lin's Compared
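The handouts themselves are not part of this transcript. As a rough stand-in, the sketch below contrasts Pearson's r with Lin's concordance correlation coefficient on invented ratings where one coder scores consistently two points higher; the CCC is computed from its usual definition with sample (n-1) variances, so treat it as an illustration, not the handouts' worked example:

```python
import numpy as np

# Hypothetical interval-level ratings; coder_b is systematically 2 points higher.
coder_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
coder_b = coder_a + 2.0

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes both scatter
    and systematic departures from the 45-degree line."""
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

pearson_r = np.corrcoef(coder_a, coder_b)[0, 1]
print(f"Pearson r = {pearson_r:.2f}")                   # 1.00: blind to the constant shift
print(f"Lin's rc  = {lins_ccc(coder_a, coder_b):.2f}")  # about 0.64: the shift is penalized
```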

Page 8: Content Analysis: Reliability

Reliability Statistics - 3

Core assumptions of coefficients
"More scholarship is needed": these coefficients have not been assessed!

Page 9: Content Analysis: Reliability

Reliability Statistics - 4

My recommendations:
Do NOT use percent agreement ALONE
Nominal/Ordinal: Kappa (Cohen's, Fleiss')
Interval/Ratio: Lin's concordance
Calculate via PRAM

Reliability analyses as diagnostics, e.g.:
Problematic variables, coders ("rogues"?), variable/coder interactions
Confusion matrixes (categories that tend to be confused)
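As an illustration of the confusion-matrix diagnostic, the sketch below (not from the deck) cross-tabulates two hypothetical coders' nominal codes; category pairs they tend to confuse show up in the off-diagonal cells:

```python
import pandas as pd

# Hypothetical nominal codes from two coders on the same 8 units.
coder_a = ["none", "verbal", "physical", "verbal", "none", "physical", "verbal", "none"]
coder_b = ["none", "physical", "physical", "verbal", "none", "verbal", "verbal", "none"]

# Rows are Coder A's codes, columns are Coder B's; off-diagonal counts
# flag categories that get confused (here, "verbal" vs. "physical").
confusion = pd.crosstab(pd.Series(coder_a, name="Coder A"),
                        pd.Series(coder_b, name="Coder B"))
print(confusion)
```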

Page 10: Content Analysis: Reliability

Reliability Statistics - 5

"Standards" for minimums for reliability statistics:
Percent agreement: 90%??
Kappa (Cohen's, Fleiss'): .40 minimally, .60 OK, .80 good
Pearson correlation; Lin's concordance: .70 (~50% shared variance)???

Page 11: Content Analysis: Reliability

Reliability Statistics - 6

The problem of the "extreme" or "skewed" distribution
Can have a % agreement of .95 and a Cohen's kappa of -.10!!!
Why? What to do?
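Why this happens: when one category dominates, the chance-expected agreement is itself very high, so even 95% observed agreement can fall below chance. The Python sketch below is a synthetic demonstration (its numbers are invented and do not reproduce the slide's exact -.10), but it shows kappa going slightly negative at 95% agreement:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical skewed variable: 200 units, the coded behavior is rare.
n = 200
coder_a = np.zeros(n, dtype=int)
coder_b = np.zeros(n, dtype=int)
coder_a[:5] = 1     # Coder A marks units 0-4 as "present"
coder_b[5:10] = 1   # Coder B marks units 5-9 as "present" (no overlap)

percent_agreement = np.mean(coder_a == coder_b)   # 0.95
kappa = cohen_kappa_score(coder_a, coder_b)       # about -0.03 despite 95% agreement
print(f"Percent agreement = {percent_agreement:.2f}")
print(f"Cohen's kappa     = {kappa:.3f}")
```

One common response, per the earlier slides, is to report a chance-corrected statistic alongside percent agreement rather than percent agreement alone.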

Page 12: Content Analysis: Reliability

PRAM: Program for Reliability Analysis with Multiple Coders

Written by rocket scientists!
Trial version available from Dr. N!
