
Research Methods

Content Areas:

• Researchable Questions
• Research Design
• Measurement Methods
• Sampling
• Data Collection
• Statistical Analysis
• Report Writing

Assessment of Observation (Measurement)

Observed Score = True Score + Error
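In classical test theory, this decomposition is usually written as below; the variance identity and the reliability ratio are standard results added here for reference, not part of the original slide.

```latex
% Observed score decomposition (classical test theory)
X = T + E
% If T and E are uncorrelated, variances add:
\sigma^2_X = \sigma^2_T + \sigma^2_E
% Reliability is the proportion of observed variance that is true variance:
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X}
```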


Error component may be either:

• Random Error = variation due to unknown or uncontrolled factors

• Systematic Error = variation due to systematic but irrelevant elements of the design (both are illustrated in the sketch after this list)

• A central concern of scientific research is the management of the error component

• A number of criteria exist by which to evaluate success
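To make the two error types concrete before turning to these criteria, here is a minimal simulation sketch in Python; all names and numbers (true score of 100, noise SD of 5, bias of 2.5) are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = 100.0                      # hypothetical true value being measured
n = 10_000                              # number of repeated measurements

# Random error: zero-mean noise from unknown/uncontrolled factors.
random_error = rng.normal(loc=0.0, scale=5.0, size=n)

# Systematic error: a constant bias from an irrelevant design element
# (e.g., a miscalibrated instrument). Value chosen for illustration.
systematic_error = 2.5

observed = true_score + random_error + systematic_error

# Averaging cancels the random component but NOT the systematic bias:
print(f"mean observed: {observed.mean():.2f}  (true = {true_score})")
print(f"spread (SD):   {observed.std():.2f}  (random error remains as variance)")
```

Averaging repeated measurements drives the random component toward zero, but the systematic bias survives, which is why it must be managed through design rather than replication.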

1. Reliability

• Does the measure consistently reflect changes in what it purports to measure?

• Consistency or stability of data across:
  • Time
  • Circumstances

• Balance between consistency and sensitivity of measure
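As an illustration of consistency across time, the sketch below estimates test-retest reliability as the correlation between two simulated measurement occasions; the trait and noise parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated stable trait plus independent measurement noise at two occasions.
true_score = rng.normal(100, 15, size=n)
time1 = true_score + rng.normal(0, 5, size=n)
time2 = true_score + rng.normal(0, 5, size=n)

# Test-retest reliability: correlation of scores across occasions.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability: r = {r:.2f}")
```

With a true-score variance of 225 and noise variance of 25, the expected reliability is 225/250 = 0.90; shrinking the noise raises consistency, but an insensitive measure can be consistent without being informative, hence the balance noted above.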

2. Validity

• Does the measure actually represent what it purports to measure?

• Accuracy of the data (for what?)
• A number of different types:

A. Internal Validity

(Slide images: Semmelweis, Pasteur, and Lister)

• Effects of an experiment are due solely to the experimental conditions

• Extent to which causal conclusions can be drawn

• Dependent upon experimental control

• Trade-off between high internal validity and generalizability of results

B. External Validity

• Can the results of an experiment be applied to other individuals or situations?

• Extent to which results can be generalized to broader populations or settings

• Dependent upon sampling subjects and occasions

• Trade-off between high generalizability and internal validity

C. Construct Validity

• Whether or not an abstract, hypothetical concept exists as postulated

• Examples of constructs:
  • Intelligence
  • Personality
  • Conscience

Based on:

• Convergence = different measures that purport to measure the same construct should be highly correlated (similar) with one another

• Divergence = tests measuring one construct should not be highly correlated (similar) with tests purporting to measure other constructs
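A sketch of how convergence and divergence might be checked numerically; the constructs, tests, and noise levels below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Two latent constructs, assumed uncorrelated for this illustration.
intelligence = rng.normal(size=n)
personality = rng.normal(size=n)

# Two different measures of each construct (construct signal + method noise).
iq_test_a = intelligence + rng.normal(0, 0.5, size=n)
iq_test_b = intelligence + rng.normal(0, 0.5, size=n)
pers_test = personality + rng.normal(0, 0.5, size=n)

# Convergence: different measures of the SAME construct correlate highly.
print(f"IQ test A vs IQ test B:   r = {np.corrcoef(iq_test_a, iq_test_b)[0, 1]:.2f}")
# Divergence: measures of DIFFERENT constructs correlate weakly.
print(f"IQ test A vs personality: r = {np.corrcoef(iq_test_a, pers_test)[0, 1]:.2f}")
```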

D. Statistical Conclusion Validity

• The extent to which a study has used appropriate design and statistical methods to enable it to detect the effects that are present

• The accuracy of conclusions about covariation made on the basis of statistical evidence

• Based on appropriate:
  • Statistical Power
  • Methodological Design
  • Statistical Analyses
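One way to see the role of statistical power: a Monte Carlo sketch that estimates, under assumed conditions (normal data, a medium effect of 0.5 SD, α = 0.05), how often a two-sample t-test detects an effect that is really there.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def estimated_power(n_per_group, effect_size, alpha=0.05, reps=2000):
    """Monte Carlo estimate of power for a two-sample t-test."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)  # true effect is present
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

# Small samples leave a real effect undetected (low statistical power):
for n in (10, 30, 100):
    print(f"n = {n:>3} per group -> power ≈ {estimated_power(n, 0.5):.2f}")
```

An underpowered design makes a real effect look absent, which is a failure of statistical conclusion validity, not of the effect.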

Can have a reliable, but invalid measure.

If measure is valid, then necessarily reliable.

3. Utility

• Usefulness of methods gauged in terms of:

A. Efficiency
B. Generality

A. Efficient Methods provide:

• Precise, reliable data with relatively low costs in:

  • time
  • materials
  • equipment
  • personnel

B. Generality

• Refers to the extent to which a method can be applied successfully to a wide range of phenomena

• a.k.a. Generalizability

Threats to Validity

• Numerous ways validity can be threatened

• Related to Design

• Related to Experimenter

Related to Design

1. Threats to Internal Validity (Cook & Campbell, 1979)

A. History = specific events occurring to individual subjects
B. Testing = repeated exposure to the testing instrument
C. Instrumentation = changes in the scoring procedure over time

D. Regression = reversion of scores toward the mean or toward less extreme scores (see the simulation sketch after this list)
E. Mortality = differential attrition across groups
F. Maturation = developmental processes
G. Selection = differential composition of subjects among samples

H. Selection by Maturation interaction

I. Ambiguity about causal direction

J. Diffusion of Treatments = information spread between groups

K. Compensatory Equalization of Treatments = lack of treatment integrity

L. Compensatory Rivalry = “John Henry” effect among nonparticipants
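Threat D (regression toward the mean) is easy to demonstrate by simulation: select extreme scorers on one noisy measurement and re-measure them. The parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Same trait measured twice with independent noise.
trait = rng.normal(100, 15, size=n)
test1 = trait + rng.normal(0, 10, size=n)
test2 = trait + rng.normal(0, 10, size=n)

# Select subjects who scored extremely high on the first test...
extreme = test1 > np.percentile(test1, 95)

# ...and watch their group mean drift back toward the population mean,
# with no treatment at all.
print(f"extreme group, test 1: {test1[extreme].mean():.1f}")
print(f"extreme group, test 2: {test2[extreme].mean():.1f}")
print(f"population mean:       {test1.mean():.1f}")
```

The extreme group's retest mean falls back toward the population mean with no intervention, so a study that selects extreme cases can mistake this drift for a treatment effect.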

2. Threats to External Validity (LeCompte & Goetz, 1982)

A. Selection = results sample-specific

B. Setting = results context-specific

C. History = unique experiences of sample limit generalizability

D. Construct effects = constructs are sample-specific

Related to Experimenter

1. Noninteractional Artifacts

A. Observer Bias = over/underestimation of phenomenon (schema)

B. Interpreter Bias = error in interpretation of data

C. Intentional Bias = fabrication or fraudulent interpretation of data

2. Interactional Artifacts

A. Biosocial Effects = errors attributable to biosocial attributes of researcher

B. Psychosocial Effects = errors attributable to psychosocial attributes of researcher

C. Situational Effects = errors attributable to research setting and participants

D. Modeling Effects = errors attributable to example set by researcher

E. Experimenter Expectancy Bias = researcher's treatment of participants elicits confirmatory evidence of the hypothesis

Basic vs. Applied Research

Purpose
• Basic: Expand Knowledge
• Applied: Understand a Specific Problem

Context
• Basic: Academic Setting; Single Researcher; Less Time/Cost Pressure
• Applied: Real-World Setting; Multiple Researchers; More Time/Cost Pressure

Methods
• Basic: Internal Validity; Cause; Single Level of Analysis; Single Method; Experimental; Direct Observations
• Applied: External Validity; Effect; Multiple Levels of Analysis; Multiple Methods; Quasi-Experimental; Indirect Observations

Only substantial difference between applied and basic research:

• Basic = Experimental Control

• Applied = Statistical Control