
Organizational Research Methods
Week 2: Causality

What Do We Mean By Causality?
• Relationship between two events where one is a consequence of the other
• Determinism: A (cause) leads to B (effect)
• "In the strict formulation of the law of causality—if we know the present, we can calculate the future—it is not the conclusion that is wrong but the premise." On an implication of the uncertainty principle, Werner Heisenberg

Heisenberg & Uncertainty Principle
• Certain properties of subatomic particles are linked so that the more accurately you know one, the less accurately you know the other
– We can compute probabilities, not certainties
– Argues against determinism
• "Physics should only describe the correlation of observations; there is no real world with causality" (Heisenberg, 1927, Zeitschrift für Physik)
– Psychology, like quantum physics, is probabilistic

Cause Versus Effect

• Effect of a cause (description)
– What follows a cause?
• Cause of an effect (explanation)
– Why did the effect happen?
• Do bacteria "cause" disease?
– Actually, toxins cause disease
– Actually, certain chemical reactions are the cause

Holland, P. W. (1988). Causal inference, path analysis, and recursive structural equations models. Sociological Methodology, 18, 449-484.

Three Elements of a Causal Case

• Cause and effect are related
• Cause preceded effect
• No plausible alternative explanations

• John Stuart Mill

Experiment
• Vary something to discover its effects
– Shows association
– Shows time sequence
– Can rule out only some alternatives
• Confounds
• Boundary conditions (generalizability)
• Good for causal description, not explanation

Encouragement Design

• Manipulation of instructions/messages
• Subjects "encouraged" to do certain things
• Subjects self-select level of condition

Holland, P. W. (1988). Causal inference, path analysis, and recursive structural equations models. Sociological Methodology, 18, 449-484.

Studying and Performance
• Students randomly assigned to study amount
• Test scores as DV
• Did studying lead to test results?
– Encouragement led to test results
– Impact on studying unclear
– Effect of studying unclear
• What was the cause of the test results?

Holland, P. W. (1988). Causal inference, path analysis, and recursive structural equations models. Sociological Methodology, 18, 449-484.

Nonexperimental Research Strategy

• Determine covariation
• Test for time sequence
– Daily diary
– Longitudinal design
– Quasi-experiment
• Rule out plausible alternatives
– Based on data
– Logical

INUS Condition

• Multiple causes/mechanisms for a phenomenon
– The same thing can occur for different reasons
– Sufficient but unnecessary conditions
– Multiple motives
– People do things for different reasons
• Are social phenomena a hierarchy of INUS conditions?
• Should we expect strong relationships between variables if INUS conditions exist?

INUS Conditions for Turnover

• Better job offer• Bullying• Disability• Dissatisfaction• Poor skill-job match• Pursue other interests (e.g., Olympics)• Spouse transfer• Strategic reasons: Part of plan

Confirmation/Falsification

• Observation used to
– Confirm/support theories
– Falsify/disconfirm theories
• Confirmation: All swans are white
– Must observe all swans in existence
• Falsification: One black swan
– Easier to falsify than confirm
– Null hypothesis testing: disconfirmation
– Based on construct validity
• Poor measure might falsely falsify

Scientific Skepticism
• Science not completely based on objective reality
• Observations based on theory of the construct
• Construct validity is a theoretical interpretation of what the numbers represent
• Theories could be wrong: biased measurement
– NA as bias (Watson, Pennebaker, Folger)
– Social constructions (Salancik & Pfeffer)
• Science based on trust of methods: faith
– Experiments
– SEM
– Statistical control (Meehl)

Shadish et al.'s Skepticisms (p. 30)

• "…scientists tend to notice evidence that confirms their preferred hypotheses and to overlook contradictory evidence."
• "They make routine cognitive errors of judgment."
• "They react to peer pressures to agree with accepted dogma."
• "They are partly motivated by sociological and economic rewards."

Statistical Control

• Including controls is common practice
• Atinc & Simmering review
– Micro studies: 3.7 controls/study
– Macro studies: 7.7 controls/study
• Purification principle
– Assumption that controls yield more accurate estimates
– Implicit causal conclusions

Bias Versus Spuriousness

• Third variable that affects observations
• Bias: effect on measurement
• Spuriousness: effect on construct

[Path diagrams: under bias, a third variable C affects the measures x and y; under spuriousness, C affects the constructs X and Y]

No Bias or Spuriousness

• Remove effects of interest before testing
• Throwing the baby out with the bathwater

[Path diagram: Liking as a third variable affecting both Performance Dimension 1 and Performance Dimension 2]

Meehl 1971: High School Yearbooks
• What is Meehl's issue with Schwarz and common practice?
• Schwarz's approach
– Control SES because it relates to schizophrenia and social participation
– Does not consider plausible alternatives
• Schwarz accepts without skepticism (Shadish)
• Meehl: no Automatic Inference Machine

Alternative Mechanisms: NA

• Negative Affectivity (NA)
– Routinely controlled as a bias
– Based on observed correlations
– Watson, Pennebaker, Folger
• Ignores feasible alternative mechanisms
– Spector, Zapf, Chen, Frese 2000

Mechanism and description:
• Perception: sees the world/job in a negative or positive way
• Hyper-responsivity: over- or under-reacts emotionally
• Selection: personality affects selection into better vs. worse jobs
• Situation creation: people create good vs. bad situations for themselves
• Mood: mood affects all measures in a study, including job satisfaction
• Reverse causality: experience affects personality

Automatic Inference Machine
• Idea that statistics can provide tests for causation
• There is no such thing as a "causal test"
• There is no such thing as a "test" for mediation
• Statistical controls do not provide the "true" relationship between variables
• Statistics are only numbers: they don't know where they came from
• Inference is in the design
• Inference is in the mind: logical reasoning

Commonest Methodological Vice
• Meehl 1971
• Assuming certain variables are fixed and therefore must be causal
– SES
– Demographics
– Personality
• But these variables can't be effects.
• Can they?

Can Job Satisfaction Cause Gender?

• Correlation of gender and satisfaction = group mean differences
• Satisfaction can't cause someone's gender
• Satisfaction can be a cause of the gender distribution of a sample
• Suppose females have higher satisfaction than males
• Multiple reasons

Alternative Gender-Job Satisfaction Model

• Females more likely to quit dissatisfying jobs
• Dissatisfaction causes gender distribution
• Gender moderates the relation of satisfaction with quitting

[Figure: distributions of satisfaction for males vs. females]

More Alternatives

1. Women less likely to take a dissatisfying job (better job decisions)
2. Women less likely to be hired into dissatisfying jobs (protected)
3. Women less likely to be bullied/mistreated
4. Women given more realistic previews (lower expectations)
5. Women more socially skilled at getting what they want at work

How To Use Controls

• Controls are great devices to test hypotheses/theory
• Rule in/out plausible alternatives
• Best based on theory
• Sequence of tests
• No automatic/blind use/inference
• Tests with controls are not more conclusive, and often less

Control Strategy

1. Test that A and B are related
• Salary relates to job satisfaction
2. Confirm/disconfirm control variable
• Gender relates to both
3. Generate/test alternative explanations for control variable
• Differential expectations
• Differential hiring rate
• Differential job experience
• Differential turnover rate

Mediation

• Explanatory variable
• X → M → Y
• Routinely tested in cross-sectional designs
• Mediation is a causal conclusion

Stone-Romero & Rosopa

• Advocate experimental designs
– Overstate advantages
– Ignore limitations
• Confounds
• Construct invalidity
• Experimenter effects
• External validity

Experimental Approach To Mediation

• Basic logic
– Show X → M
– Show M → Y
– Therefore X → M → Y
• Limitations
– How do you manipulate M without X?
– How can you be sure that X works through M?
• INUS conditions

[Path diagram: X and Z both affecting M, which affects Y]

Example p. 340

• Study 1: Goals → Norms
• Study 2: Norms → Performance
• When goals led to performance, was it through norms?

Demonstrate Mediation
• Show the chain of events
– Multiple measurements
– Show X leads to M leads to Y
• What does it take to build a mediation case?
– Difficult to demonstrate causality among 3 variables
– Show relationship
– 3-way time sequence
– Rule out alternatives

Medical Perspective

• Three types of variables
– Correlate: variable shown to correlate with the outcome
– Proxy risk factor: predictor that precedes but has no demonstrable effect on the outcome
– Causal risk factor: precedes and affects the outcome when manipulated or changed

Kraemer, H. C., Stice, E., Kazdin, A., Offord, D., & Kupfer, D. (2001). How do risk factors work together? Mediators, moderators, and independent, overlapping, and proxy risk factors. American Journal of Psychiatry, 158, 848-856.

Three-step Strategy

• Step 1: Show a relationship exists
• Step 2: Show prediction over time
• Step 3: Manipulate X and show effects on Y

Week 3: Validity, Threats, and Qualitative Methods

• Validity
– Interpretation of constructs/results
– Inference based on purpose
• Hypothesized causal connections among constructs
• Nature of constructs
• Population of interest
– People
– Settings

• Not a property of designs or measures themselves

Four Types of Design Validity

• Statistical conclusion validity
– Appropriate statistical method to make the desired inference
• Internal validity
– Causal conclusions reasonable based on design
• Construct validity
– Interpretation of measures
• External validity
– Generalizability to population of interest

Threats to Validity

• Statistical conclusion validity
– Statistics used incorrectly
– Low power
– Poor measurement
• Internal validity

Eight Threats from Shadish

1. Ambiguous temporal precedence
2. Selection: differences in groups
3. History: events occurring between testings
4. Maturation: natural change in subjects over time
5. Regression to the mean
6. Attrition: especially differential attrition
7. Testing: effects of repeated testing
8. Instrumentation: change over time or conditions

Bolded: most problematic for organizational research

Threats to Validity 2
• Construct validity
– Inadequate specification of theoretical construct
– Unreliable measurement
– Biases
– Poor content validity
• External validity
– Inadequate specification of population
– Poor sampling of population
• Subjects
• Settings

Points from Shadish et al.
• Evaluation of validity based on subjective judgment
• Scientists conservative about accepting results/conclusions that run counter to belief
• Scientists market their ideas; try to convince colleagues
• Design controls preferable to statistical controls
• Statistical controls based on assumptions
– Untrue
– Untested
– Untestable

Points from Shadish et al. 2
• People in the same workplace more similar than people across workplaces
– When might this be true?
• Multiple tests
– Probability compounding assumes independent tests
• Molar vs. molecular
– First determine the molar effect
– Then break down to determine molecular elements

Qualitative Methods
• What are qualitative methods?
– Collection/analysis of written/spoken text
– Direct observation of behavior
• Participant observation
• Ethnography
• Case study
• Interview
• Written materials
– Existing documents
– Open-ended questions

Purpose (Erickson 2012)
• Describes social action
– What people do in light of their attributions and meanings
• Interpretive
– "describing what people in a local setting do in terms that make contact with the meaning perspectives within which their actions make sense, from their points of view" (p. 686)
• Understand cause in the local situation

Erickson, F. (2012). Comments on causality in qualitative inquiry. Qualitative Inquiry, 18, 686-688.

Qualitative Research Approach
• Accept subjectivity of science
– Is this an excuse?
• Less driven by hypotheses
• Assumption that reality is a social construction
– If no one knows I've been shot, am I really dead?
• Interested in the subject's viewpoint
• More open-ended
• More interested in context
• Less interested in general principles
• Focus more on interpretation than quantification

Analysis
• Content analysis
– Interviews
– Written materials
– Open-ended questions
– Audio or video recordings
– Quantifying
• Counts of behaviors/events
• Categorization of incidents
• Multiple raters with high agreement (see the sketch below)
• Nonquantitative
– Analysis of case
– Narrative description
• Ethnography

Qualitative Organizational Research: Job Stress

• Quantitative survey dominates
• Role ambiguity and conflict dominated in the 1980s & 1990s (Katz & Kahn)
• Dominated by Rizzo et al.'s weak scales
• Studies linked RA & RC to potential consequences and moderators
• Qualitative approach (Keenan & Newton, 1985)
• Stressful incidents

Keenan & Newton’s SIR

• Stress Incident Record
– Describe an event in the prior 2 weeks
– That aroused negative emotion
• Top stressful events for engineers
– Time/effort wasted
– Interpersonal conflict
– Work underload
– Work overload
– Conditions of employment

• RA & RC rare (1.2% and 4.3%)

Subsequent SIR Research
• Comparison of occupations
– Clerical: work overload, lack of control
– Faculty: interpersonal conflict, time wasters
– Sales clerks: interpersonal conflict, time wasters
– Narayanan, L., Menon, S., & Spector, P. E. (1999). Stress in the workplace: A comparison of gender and occupations. Journal of Organizational Behavior, 20, 63-73.
• Informed subsequent quantitative studies
– Focus on more common stressors
• Interpersonal conflict
• Organizational constraints
– Forget RA & RC

Cross-Cultural SIR Research
• Comparison of university support staff
• India vs. U.S.

Stressor                  India    US
Overload                  0%       25.6%
Lack of control           0%       22.6%
Lack of structure         26.5%    0%
Constraints (equipment)   15.4%    0%
Conflict                  16.5%    12.3%

Value of Qualitative Approach

• Richer, more complete picture
– Doesn't reduce the complex to a few variables
• More information on context
• Doesn't constrain subjects
– Open-ended responses
• Raw material for hypotheses and theory
• Can be quantified
• Can be used to test hypotheses

Limitations of Qualitative Approach

• If not quantified
– Subjective: one person's perspective
– Not science without systematic methods that are replicable
• Quality of material content analyzed
• Potential biases
– Fundamental attribution error: attributing others' behavior to stable traits
– Attributing success internally, failure externally

Research as Craft
• Scholarly research as expertise, not a bag of tricks
• Logical case
• Go beyond sheer technique
– Research not just formulaic/trends
– Not just using the right design, measures, stats
– "Can't go wrong with Big Five, SEM, meta-analysis"
– SEM on a meta-analytic correlation matrix of the Big Five

Meta-Analytic Correlation Matrix Analysis

• Do a meta-analysis compiling correlations
• Conduct regression or SEM on the mean correlations
• Problems:
– Mean correlations based on different sets of samples
– Often (usually) moderators affect correlations across studies
– Populations differ between correlations
– Results not meaningful

Meta-Analytic Correlation Analyses II

• Regression
– Regression of Y on X1 and X2
– Want to conclude that X2 is incremental over X1
– Can only conclude that X2 in one population is incremental over X1 in another
– Example:
• Job satisfaction = conscientiousness and autonomy
• Conclusion: conscientiousness in police & nurses predicts better than autonomy in military & teachers

Meta-Analytic Correlation Analyses

• Conditions to use
– All samples representative of the same population
– All samples have all variables

Developing the Craft
• Experience
• Trying different things
– Constructs
– Designs/methods
– Problems
– Statistics
• Reading
• Reviewing
• Teaching
• Thinking/discussing
• Courses necessary but not sufficient
• Lifelong learning: you are never done

Developing the Craft 2
• Field values novelty and rigor
• Don't be afraid of exploratory research
– Not much contribution if the answer is known in advance
• Look for surprises
• Don't be afraid to follow intuition
• Ask interesting questions without a clear answer
• Focus on interesting variables
• Good papers tell stories
– Variables are characters
– Relationships among variables

Week 4: Construct & External Validity and Method Variance

Constructs

• Theoretical level
– Conceptual definitions of variables
– Basic building blocks of theories
• Measurement level
– Operationalizations
– Based on theory of the construct

What We Do With Constructs

• Define
• Operationalize/measure
• Establish relations with other constructs
– Covariation
– Causation

Construct Validity

• Case based on weight of evidence
• Theory of the construct
– What is its nature?
– What does it relate to?
• Strength based on
– Adequate definition
– Adequate operationalization
– Control for confounding

Steps to Building the Case
1. Define construct
2. Operationalize construct: scale development
3. Construct theory: what it relates to
4. Validation evidence
• Correlation with other variables
– Cross-sectional
– Predictive
• Known groups
• Convergent validity
• Discriminant validity
• Factorial validity

Points by Shadish et al.
• Construct confounding
– Assessment of unintended constructs
– SD and NA
– Race and income
– Height and gender
• O = T + E + Bias
– Bias = extra, unintended stuff

Points by Shadish 2
• Mono-operation bias
– Not clear on what it is
• Advocates converging operations
• Multiple operationalizations
• What is a different operationalization?
– Different item formats
– Different raters
– Different experimenters
– Different training programs

Points by Shadish 3
• Compensatory equalization: extra to the control group
• Compensatory de-equalization: extra to the experimental group
• FMHI study
– Random assignment to FMHI vs. state hospital
– Staff violated the assignment

Construct Validity: Example of Weak Link

• Deviance: violation of norms
• Theoretical construct weakness
– Whose norms?
• Society, organization, workgroup
• Operationalization weakness
– List of behaviors with no reference to norms
– Norms assumed from behavior
• Retaliation: response to unfairness
– Asks about behaviors plus motive
– Retaliation mentioned in instructions

External Validity: Population

• Link between sample and theoretical population
• Define theoretical population
• Identify critical characteristics
• Compare sample to population
– Employed individuals
– Do students qualify?

External Validity: Setting

• Link between current setting and other settings
– Organization
– Occupation
• Identify critical characteristics of settings
• Compare setting to others
– Lab to field

External Validity: Treatment/IV

• Is the IV a reasonable analog?
• Encouragement design idea
• Link between current treatment/IV and others
• Will the treatment in the study work in a nonresearch setting?

External Validity: Outcome/DV

• Link between current outcome/DV and others
• Will results in the study work similarly in nonresearch conditions?
• Will different operationalizations of the outcome have the same result?
– Supervisor rating of performance vs. objective measures
– Safety behavior versus accidents/injuries

Facts Are the Enemy of Truth: When Facts Oppose Belief

• Gender bias in medical studies (Shadish p. 87)
• Women are neglected in medical research
• Treatments not tested on women
• New grant rules require women
• Study of 724 studies (Meinert et al.)
– 55.2% both genders
– 12.2% males only
– 11.2% females only
– 21.4% not specified
– 355,000 males, 550,000 females

When Politics Attack Science

• Evolution
• IQ and performance
• Differential validity of IQ tests
• Others?

Method Variance

• Method variance: variance attributable to the method itself rather than the trait
• Campbell & Fiske 1959
• Assumed to be ubiquitous
• V_Total = V_Trait + V_Error + V_Method

Campbell & Fiske, 1959

• “…features characteristic of the method being employed, features which could also be present in efforts to measure other quite different traits.”

Common Method Variance

• CMV
– Mono-method bias
– Same-source bias
• When the method component is shared
• V_Total = V_Trait + V_Error + V_Method
• Assumes the same method has the same V_Method
• Assumed to inflate correlations
• Only raised with self-reports

Evidence vs. Truth
• Truth: CMV = everything measured with the same method is correlated
• Evidence: Boswell et al., JAP 2004, n = 1601

Correlations among same-source measures:
                           1     2     3     4
1. Leverage seeking
2. Separation seeking     -.04
3. Career satisfaction    -.10  -.14
4. Perceived alternatives  .07  -.09   .20
5. Reward importance       .16   .05  -.03   .02

Potential Universal Biases
• Truth: specific biases are widespread
• Evidence
– Social desirability meta-analysis
• Moorman & Podsakoff
• Mean r = .05
• Highest: -.17 role ambiguity; .17 job satisfaction
• Lowest: .01 performance
• Lower with employees (.03) than students (.09)
– Social desirability control study
• Role clarity-job satisfaction
– r = .46 to .45 (when SD controlled)

Single-Source vs. Multi-Source
• Truth: multi-source correlations are smaller
• Evidence: Crampton & Wagner meta-analysis
– Compared single-source vs. multi-source
– 26.6%: single-source higher (job sat and performance)
– 62.2%: no difference (job sat and turnover)
– 11.2%: multi-source higher (job sat and absence)

Conclusions About CMV

• CMV is a myth
– Variance does not reside in the method
– No constant inflation of correlations
• Oversimplification of a complex situation
• If it existed, it would be easy to solve
• Techniques to fix the problem ineffective and counterproductive
– Enhance publishability

Cynical Use of CMV Tests

• Lindell & Whitney test
• Marker variable expected to be unrelated
• Partial out the effect of the marker to yield a relationship free of CMV
• Strawman test of something that isn't there
• Great for demonstrating to a reviewer that you had no CMV problem

Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86, 114-121.
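Mechanically, the test partials the marker correlation out of each observed correlation. A minimal sketch of the adjustment described by Lindell & Whitney (2001), with hypothetical values:

```python
# Minimal sketch of the Lindell & Whitney (2001) marker-variable
# adjustment; the correlations are hypothetical.
r_xy = 0.30  # observed predictor-criterion correlation
r_m = 0.05   # correlation with the supposedly unrelated marker

# Treat the marker correlation as the CMV estimate and partial it out.
r_adjusted = (r_xy - r_m) / (1 - r_m)
print(round(r_adjusted, 3))  # 0.263
```

The slide's critique is visible in the arithmetic: because the marker usually correlates near zero, the "corrected" value barely moves, which is why the test so reliably "shows" no CMV problem.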

If Not CMV

• CMV is a myth, but bias is not
• Bias: unintended influences on a measure
• Bias is a function of construct plus method
• Unshared bias: correlation attenuation
• Shared bias: correlation inflation
• No automatic inference machine to eliminate all biases
• Some known, many likely unknown

Solutions
• Eliminate the term "method variance"
• Bias or 3rd variables
• Construct validity and potential 3rd variables
• Interpret results cautiously
• Choose methods to control feasible 3rd variables
• Alternate sources
– Not always accurate
• Converging operations
• Use a strategy of first establishing the correlation
– Rule out 3rd variables in a series of steps

Potential Sources of Bias

• Difficult to distinguish bias from substantive effect
• Personality
– Social desirability
– NA
• Culture
– Response styles
– Asians avoid strong positives

• Mood

Potential Sources of Bias II

• Cognitive processes
– Schema
– Attribution errors
• Fundamental attribution error
– Halo and leniency
• Priming
– Effects of instructions or prior questions
• Item overlap

Item Overlap

• Similar content in scales of different constructs
• Stress
– Confound of stressors and strains
– Stressor: How often do you feel stressed by …
– Strain: How often do you feel anxious

CWB-OCB Biases

• Dalal’s three artifacts• Antithetical items (overlap)• Agreement vs. frequency

– Agreement more subject to halo

• Source: Self versus supervisor– Supervisor more subject to halo

Dalal, R. S. (2005). A Meta-analysis of the relationship between organizational citizenship behavior and counterproductive work behavior. Journal of Applied Psychology, 90, 1241-1255.

CWB-OCB Overlap

OCB item / CWB item:
• Does not take extra breaks / Takes undeserved breaks; Taken a longer break than you were allowed to take
• Obeys company rules and regulations / Purposely failed to follow instructions
• Consumes a lot of time complaining about trivial matters / Complained about insignificant things at work
• Conducts personal business on company time / I used working time for private affairs

Dalal’s Meta-Analysis

• Mean correlations
– Antithetical items, yes vs. no: -.54 vs. -.16
– Agreement vs. frequency format: -.55 vs. -.23
– Supervisor vs. self report: -.60 vs. -.12
• Spector et al. isolated the effects:
– Order of impact: antithetical items, format, source

Spector, P. E., Bauer, J. A., & Fox, S. (2010). Measurement artifacts in the assessment of counterproductive work behavior and organizational citizenship behavior: Do we know what we think we know? Journal of Applied Psychology, 95, 781-790.

Conway & Lance 2010
• Distinguished method variance from effects (p. 325)
• Note: different methods can share bias (p. 327)
– Survey versus interview and SD
• Different sources measure different things (p. 328)
• Did Heisenberg really say we change things by measuring them? (p. 329)
• Judge et al. were clear that self-reports of job characteristics were perceptions and not objective (p. 329)

Conway & Lance 2010 cont.
• Construct validity evidence does not rule out possible biases (p. 329)
• Be concerned about item overlap (p. 330)

Week 5: Quasi-Experimental Design

• Design: structure of an investigation
– Number of groups
– Assignment of subjects
– Sequence of conditions
– Sequence of assessments

• Experimental vs. Nonexperimental

Experimental Design

• What is an experiment?
– Random assignment
– Creation of conditions?
– Naturally occurring experiment
• Simple to complex
– Number of independent variables
– Number of dependent variables
– Sequence of DV assessment

Two-Group Experiment

Treatment group: X_T O
Control group:   X_C O

• Comparison of two conditions
• Hold everything constant that is possible
• Many ways to design the control group
• Report effect size
• Power

Two-Group Design: Analysis

• Independent-groups t-test
• One-way ANOVA (2 groups)
• t²(n1 + n2 − 2) = F(1, n1 + n2 − 2)
• r (with IV dummy coded)
• r = f(t); specifically, r = √(t² / (t² + df))

Data Structure 2 group t-test

Treatment Control

1 2

2 3

2 4

3 5

3 5

Data Structure IV by DV Correlation

IV DV

1 1

1 2

1 2

1 3

1 3

2 2

2 3

2 4

2 5

2 5

t vs. r

• Entered the data set into SAS
• Independent-groups t-test
– t = 2.31, p = .0497
• Correlation
– r = .63246
– t-test for significance of r: 2.3094, p = .0497
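The same numbers can be reproduced outside SAS. A minimal Python sketch (assuming scipy is available) that runs both analyses on the data structures shown above:

```python
# Minimal sketch reproducing the two-group t-test / correlation equivalence.
import numpy as np
from scipy import stats

treatment = np.array([1, 2, 2, 3, 3])
control = np.array([2, 3, 4, 5, 5])

# Independent-groups t-test (sign is negative because treatment < control).
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # t = -2.31, p = .0497

# Same data with a dummy-coded IV correlated with the DV.
iv = np.array([1] * 5 + [2] * 5)
dv = np.concatenate([treatment, control])
r, p_r = stats.pearsonr(iv, dv)
print(f"r = {r:.5f}, p = {p_r:.4f}")  # r = .63246, same p

# The two are algebraically linked: r = sqrt(t^2 / (t^2 + df)).
df = len(dv) - 2
print(np.sqrt(t**2 / (t**2 + df)))  # .63246 again
```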

Variance Partitioning
• Total: all variance among all subjects
• Between: variance between groups
– Variation in means
• Within: variance within groups
– Nested across groups
• Total = Between + Within
• Significance: ratio of Between to Within
– Is there more Between variance than expected by chance? (see the sketch below)
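A minimal sketch of the partition on the same two-group data as above; the sums of squares are computed directly from the definitions:

```python
# Minimal sketch: Total = Between + Within on the two-group data.
import numpy as np

groups = [np.array([1, 2, 2, 3, 3]), np.array([2, 3, 4, 5, 5])]
allobs = np.concatenate(groups)
grand_mean = allobs.mean()

ss_total = ((allobs - grand_mean) ** 2).sum()                            # 16.0
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # 6.4
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)             # 9.6

print(ss_total == ss_between + ss_within)  # True
# Significance is the ratio of between to within variance:
f = (ss_between / 1) / (ss_within / (len(allobs) - 2))
print(f)  # 5.33, which equals t^2 from the two-group t-test
```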

One IV: Multiple Groups

• One IV
• More than two groups
• Categorical IV
– Different treatments
• Continuous IV
– Different levels of treatment
– Sessions of training

• ANOVA

Treatment Group Independence

• Between subject
• Within subject
– Repeated measures
• Nested or hierarchical
– Subjects in teams in conditions
• Affects analyses
– ANOVA
– Multilevel modeling for nested

Between Group

• Advantages
– Independence of conditions
– No carryover effects
– Conceptually simple
• Disadvantages
– Sample size requirement
– Limited to one level, usually people

Within Group

• Advantages
– Greater power by controlling error within people
– Efficiency: one group of subjects
– Direct comparison of conditions on the same sample
• Disadvantages
– Contamination/carryover effects of conditions

Nested

• People in higher social units
– Departments
– Classes
– Organizations
– Teams

Nested

• Advantages
– Allows study of multiple levels
– People and teams
– Allows study of cross-level effects
• Disadvantages
– Requires larger samples
• N at both levels: n people per group and number of groups
– Analysis/interpretation more complex

Factorial Designs
• Multiple IVs
• Two-way design
– Two IVs totally crossed
– Every combination of conditions
– Orthogonal: equal sample sizes
– Nonorthogonal: unequal sample sizes
• Confounding of IVs
– Level of A predicts likely level of B
– Interpretation of IV effects difficult
• ANOVA

Main Effects and Interactions

• Main effect
– Average effect of levels of one IV, ignoring the other
• Interaction
– Joint effect of both IVs
– Shows the effect of one IV is affected by the other

[Four plots of cell means (DV against A1/A2, with separate lines for B1 and B2): one main-effect pattern with parallel lines and three interaction patterns with nonparallel lines]

2x2 Design

        A1                A2                Mean
B1      A1B1              A2B1              (A1B1 + A2B1)/2
B2      A1B2              A2B2              (A1B2 + A2B2)/2
Mean    (A1B1 + A1B2)/2   (A2B1 + A2B2)/2

A main effect: (A1B1 + A1B2) vs. (A2B1 + A2B2)
B main effect: (A1B1 + A2B1) vs. (A1B2 + A2B2)
Interaction: (A1B1 + A2B2) vs. (A2B1 + A1B2)
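A small sketch of how the three contrasts are computed from cell means; the means themselves are hypothetical:

```python
# Minimal sketch: main effects and interaction from 2x2 cell means.
import numpy as np

cells = np.array([[10.0, 14.0],   # row B1: A1B1, A2B1
                  [12.0, 20.0]])  # row B2: A1B2, A2B2

a_main = cells[:, 1].mean() - cells[:, 0].mean()  # column (A) contrast
b_main = cells[1, :].mean() - cells[0, :].mean()  # row (B) contrast
# Interaction: does the A effect differ across levels of B?
interaction = (cells[1, 1] - cells[1, 0]) - (cells[0, 1] - cells[0, 0])

print(a_main, b_main, interaction)  # 6.0, 4.0, 4.0
```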

Larger Designs

• Any number of IVs
• Three-way interactions complex to interpret
• Four-or-more-way interactions difficult to interpret
• Takes a large number of subjects
• Not always feasible for experiments

Moderator Tests

• Another term for interaction
• Term used with continuous variables
• Used with multiple regression
• Same idea of one variable affecting the effects of another
• Nonparallel lines
• Slope = f(moderator) (see the sketch below)
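A minimal sketch of a moderator test in multiple regression, with simulated data and arbitrary variable names (assuming statsmodels):

```python
# Minimal sketch: moderated regression with continuous variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
d = pd.DataFrame({"x": rng.normal(size=n), "z": rng.normal(size=n)})
# Build in a true interaction: the slope of x depends on z.
d["y"] = 0.3 * d.x + 0.2 * d.z + 0.5 * d.x * d.z + rng.normal(size=n)

# "x * z" expands to x + z + x:z; the x:z term is the moderator test.
model = smf.ols("y ~ x * z", data=d).fit()
print(model.params["x:z"], model.pvalues["x:z"])
```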

Mixed Design

• Between, within, and nested
• Test of CWB-OCB artifacts
• Response format: between-subjects
– Randomly assigned
• Antithetical yes vs. no: within-subjects
– Everyone got both
• Source: nested
– Subject versus supervisor nested within dyad

Spector et al. 2010

Quasi-experiment

• Design without random assignment
• Comparison of conditions
– Trained vs. not trained
• Researcher-created or existing conditions
• Can characteristics of people be an IV?
– Gender
– Personality
• Is a survey a quasi-experiment?
– Question about condition

Settings

• Laboratory vs. field
• Laboratory
– Setting in which the phenomenon doesn't naturally occur
• Field
– Setting in which the phenomenon naturally occurs
• The classroom is the field for an educational psychologist
• The classroom is a lab for us

Lab vs. Field Strengths/Weaknesses

• Lab
– High level of control
– Easy to do experiments
– Limits to what can be studied
– Limited external validity of population/setting
• Field
– Limited control
– Difficult to do experiments
– Wide range of what can be studied
– High reliance on self-report
– High external validity

Design Versus Setting

• Design: structure of investigation
• Setting: place in which study occurs
• Lab ≠ experiment
• Field ≠ nonexperiment

Applied Versus Basic

• Refers to areas of psychology
• Basic: experimental areas
– Cognitive, neuro, social?, developmental?
• Applied: clinical, I/O
• Not a useful distinction
• What is applied?
• "Applied psychologists" use the same methods to address the same questions.

Lab in I/O Research

• What’s the role of lab in I/O research?• Stone suggests lab is as generalizeable as

field. Do you agree?• Stone says I/O field biased against lab. Is it?• When should we do lab vs. field studies?

Quasi-Experimental Compromise

• Quasi-experiments
– Compromise when a true experiment isn't possible
– Built-in confounds
• Requires more data than an experiment to rule out confounds
• Inference complex
• Logic puzzle, not cookbook
• Can't just assume the IV caused the DV

Quasi-Experiment and Control

• Use of design AND statistical controls
• "Statistical adjustment only after the best design controls have been used" (Shadish, p. 161)
• Control through comparison groups
• Control through retesting
– Pretest-posttest
– Multiple pretests/posttests
– Long-term follow-ups
– Trends
• Statistical control: 3rd variables & potential confounds

Single Group Designs

• Posttest only: X O
– When (if ever) is this useful?
• Pretest-posttest: O X O
– When is this useful?
– What are the limitations?
• Program evaluation

Nonequivalent Groups Design

• Preexisting groups assigned treatment vs. control

Treatment group: X O
Control group:     O

• Establishes difference between groups
• Limited inference

Nonequivalent Groups Design Limitations

• What are the main limitations?
– Groups could have been different initially
– Interaction of group characteristics and treatment
– Differential history causing differences

Coping With Preexisting Group Differences
1. Assess preexisting differences
• Pretest
2. Assess trends
• Multiple pretests and posttests
3. Assign multiple groups
• Random assignment of groups if possible
4. Replicate
5. Additional control groups
6. Matching
7. Statistical adjustment of potential confounds
8. Switching replications: give treatment to the control group

Switching Replications

Group 1: X O -- O

Group 2: -- O X O

• Group 2 is a control for Group 1
• Replicate the effect with Group 2
• Can have more groups
• Power and replicability

Matching

• Selecting similar participants from each group
– Choose one or more matching variables
– Assess the variables
– Choose pairs that are close matches
• Difficult to match on multiple variables
– Sample size reduction
• Might bias samples
– High in one sample is low in another
– Meaning of high/low can vary
– LOC: an internal score for a Chinese sample is external for a New Zealand sample

Randomly Assign Groups

• Identify multiple groups
• Randomly assign to conditions
• Groups need to be isolated
– Contamination of control by treated group
– Contamination of control by supervisors who know about the study
• Creates potential levels issue
– Subjects nested in groups

Case-Control Design
• Compare a sample meeting a criterion with a sample not meeting it
• Must match to the same population
– Employees who quit vs. all other employees
– Employees who were promoted vs. other employees
– CEOs vs. line employees
– Employees assaulted/bullied vs. others
• Assess other variables to compare

Case-Control
• Typically we have the case sample at hand
• Controls may not be easily accessible
• Often cases compared to a "normal" population
– Cancer patients vs. norms for the general public
• Could compare cases in an organization with employees in general
– E.g., absence in the case group vs. the absence rate in the company

Is Case-Control Useful To Us?

• What might we use this design to study in organizations?
• What is the case group?
• What is the control group?
• What variables do we compare?

Limits To Case-Control Design

• Defining groups from the same population
• Effect size uncertain
– All cases have X
– Small proportion of people with X are cases
– Asymmetrical prediction
• Groups may differ on more than the case variable
• Retrospective assessment of supposed cause
– Quitting caused report of lower satisfaction

Week 6: Design Issues

• Experimental design
– Random assignment
– Creation of conditions
• Randomized experiment
– Time sequence built into design
– Still must rule out plausible alternatives
• Construct validity of IV and DV
• External validity for lab studies

• Is “real science” so real?

Random Assignment

• Random sample (external validity)
• Random assignment (internal validity)
– Probability of assignment equal
– Expected value of characteristics equal
– Not all variables equal

• Type 1 errors

• Faith in random assignment
• Differential attrition

What Are We Really Assigning To?

• Encouragement design
– Ask subjects to do certain things
• What features of a complex condition are critical?
• Confounds in the IV

Control Groups

• No treatment
• Waiting list
• Placebo treatment
• Currently accepted treatment
• Comparisons to isolate variables

Bias In Experiments

• Construct validity of DV and IV
• Bias in assessment of DV
• Bias/confounding in IV
• Bias affects
– Subjects
– Experimenters
– Samples
– Conditions (contamination and distortion)
– Designs
– Instruments

Humans Used As Instruments

• Self vs. other reports
• Bias in judgments of others
– Schema & stereotypes
– Implicit theories
– Attractiveness
• "Pretty blondes are dumb"
– Physical ability
• "Athletes are dumb"
– Height
• "The tall are better leaders"

Demand Characteristics

• Implicit meaning of the experimental condition
• IV not accurately perceived
• Subject motivated to do well
• Subject tries to figure out the experiment
– Response not natural for the situation

Lie Detection Lab vs. Field
• Lab study
• Two trials of detection
• Detected on Trial 1: harder to detect on Trial 2
• Not detected on Trial 1: easier to detect on Trial 2
• Opposite to field experience
• Hypothesized that motive is important
– Want to fool: being detected makes it easier to detect them
– Want to get caught: being detected makes it harder to detect them

Lie Detection Study 2
• 2 trials x 2 conditions
– Told that the intelligent can fool the test
• Anxious when caught
– Told that sociopaths can fool the test
• Relaxed when caught

Percent Detected on Trial 2

Motive              Detected Trial 1   Not Detected Trial 1
Fool (intelligent)  94% (anxious)      19% (relaxed)
Catch (sociopath)   25% (relaxed)      88% (anxious)

Subject Expectancies

• Hawthorne effects
– Knowledge of being in an experiment
– Does this really happen?
• Placebo effects
• Blind procedures

Experimenter Effects
• Observer errors
– Late 1700s, Greenwich Observatory
– Maskelyne fires Kinnebrook for errors
– Astronomer Bessel: widespread errors
– About 1% of observations have errors
– 75% in the direction of the hypothesis
• Experimenter expectancy: self-fulfilling prophecy
• Clever Hans
• Dull/bright rat study
• Double-blind procedure

Experimenter Behavior
• Smiling at subjects
– 12% at males
– 70% at females
• Mixed-gender subject-experimenter pairs take longer to complete
• Videotape of subject-experimenter interactions (female experimenter)

Channel    Male subject   Female subject
Auditory   Friendly       Nonfriendly
Visual     Nonfriendly    Friendly

Cross-Sectional Design

• All data collected at once
• Variables assessed once
• Most common design in I/O & OB/HR
• Often done with questionnaires
• Can establish relationships
• Cannot rule out most threats
• Cheap and efficient
• Good first step

Data Collection Method

• Ways of collecting data
– Self-report questionnaire
• Formats
– Interview
• Degree of structure
– Observation
• Behavior checklist vs. rating
– Open-ended questionnaire

Data Source

• Incumbent
• Supervisor
• Coworker
• Significant other
• Observer
• Existing materials
– Job description

Single-Source

• All data from one source
• Usually also mono-method
• Usually a survey
• Many areas usually self-report
– Well-being
• Some areas other-report
– Performance

Multi-Source

• Same variables from different sources
– Convergent validity
– Confirmation of results
• Different variables from different sources
– Rules out some biases and 3rd variables
– Some biases can be shared
• Not a panacea

Multisource Discriminant Validity

Study                 Variables                          Self-Report   Other-Report
Dalal 2005 (meta)     CWB-OCB                            -.12          -.60
Spector et al. 2010   CWB-OCB                            -.00          -.42
Spector & Fox 2005    Autonomy-Job Characteristics       .54           .67
Glick et al. 1986     Job Characteristics-Satisfaction   .024          .598

Note: Dalal is a meta-analysis; Spector & Fox is the mean correlation across 4 job characteristics; Glick et al. is a multiple correlation.

Bias Can Affect All Raters

• Self-efficacy: outward signs of confidence
– Gives impression of effortless performance
– Coworker perception of employee's constraints
• Doesn't appear to have constraints
– Supervisor perceptions of performance
• Looks like a great performer

[Path diagrams: Self-Efficacy (self-report) related to Constraints and Job Performance under three measurement schemes: all three self-report; Constraints from a coworker; Constraints from a coworker and Job Performance from a supervisor]

Week 7: Longitudinal Designs

• Design introducing the element of time
• Same variable measured repeatedly
• Different variables separated in time
– Turnover
• How much time is needed to be longitudinal?

Advantages of Longitudinal Design

• Can establish relationships
• Can sometimes establish time sequence
• Can rule out some plausible alternatives
– Some biases
– Occasion factors
– Mood

Proper Time Sequence

• Before and after an event
– Turnover
• Precursors assessed prior
– Job satisfaction
• Difficult to know when satisfaction occurred
• Arbitrary points in time not helpful
– Steady-state results same as cross-sectional

Predicting Change

• Showing that X predicts change in Y
• Relation of X & Y controlling for prior levels
• Weak evidence for causality
• Regression-to-the-mean effects
• Basement/ceiling effects

Attrition Problem

• Attrition between time periods
– From the organization
– From the study
• Attrition not random
• Mean change due to attrition
• Interaction of attrition and variables
– Those most/least affected quit

Practical Issues

• Tracking subjects
• Matching responses
– Loss of anonymity
– Use of secret codes
• Subject might not remember
– Anonymous identifiers
• First street lived on
• Name of first-grade teacher
• Grandmother's first name
• Participation incentives
• Time to complete study

Pretest-Posttest Design

Single Group

O1 O2

Two Group

O1 X O2

O1 O2

Trends Over Time

Single group Time Series

O1 O2 O3 X O4 O5 O6

Multigroup Time Series

O1 O2 O3 X O4 O5 O6

O1 O2 O3 O4 O5 O6

Discontinuity

• Change in trend around X
• Single group
– Can't rule out other causes
• Multigroup
– Control group to rule out alternatives

Zapf et al.

• Stress area
• Relationships small over time
• INUS conditions
– Strains caused by 15 factors
– Each accounts for 7% of the variance
– .26 correlation even if measurement were perfect
• Attrition of the least healthy
• Relationships not always linear
• Choose an appropriate time frame

Longitudinal Multi-Group Design
• Identify classification variable
– Assaulted, smoking
• Assess at two times
• Group employees
– Yes/Yes
– Yes/No
– No/Yes
– No/No
• Compare groups

Manning, M. R., Osland, J. S., & Osland, A. (1989). Work-related consequences of smoking cessation. Academy of Management Journal, 32, 606-621.

Manning Design

                       Time 1: Smoke   Time 1: No Smoke
Time 2: Smoke          Smokers         Starters
Time 2: No Smoke       Quitters        Nonsmokers

Yang Design

                       Time 1: Assaulted             Time 1: Not Assaulted
Time 2: Assaulted      Constant strain               Strain increased
Time 2: Not Assaulted  Strain decreased (recovery)   Constant strain

Experience Sampling

• Diary study
• Multiple measures on the same person
– Daily
– Multiple times per day
– 1-2 weeks
• Look at within-person variation
– Changes in DV as a function of IV

Experience Sampling Analysis

• Hierarchical linear modeling (HLM)
– Level 1: within person
– Level 2: between person
• Multiple regression
– DV2 = IV1 + DV1
– Time 1 IV on Time 2 DV, controlling for Time 1 DV
– To see if change in the DV is associated with the IV

HLM

• Deals with the hierarchical structure of data
– Observations nested
– Individuals in groups, departments, organizations
• Experience sampling
– Observations nested in people
– Separates variance into between versus within
– Analysis of within-person change
• Relationship of fluctuations of IV vs. DV (see the sketch below)
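A minimal sketch of a random-intercept model for simulated experience-sampling data, using statsmodels' MixedLM; the variable names and effect sizes are arbitrary:

```python
# Minimal sketch: daily observations nested in people.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_people, n_days = 50, 10
person = np.repeat(np.arange(n_people), n_days)
intercepts = rng.normal(0, 1, n_people)   # between-person differences
iv = rng.normal(size=n_people * n_days)   # within-person predictor
dv = intercepts[person] + 0.4 * iv + rng.normal(size=n_people * n_days)
d = pd.DataFrame({"person": person, "iv": iv, "dv": dv})

# A random intercept per person separates between- from within-person variance.
model = smf.mixedlm("dv ~ iv", d, groups=d["person"]).fit()
print(model.summary())
```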

Curvilinear Stressor-Strain

• Two studies
– CISMS2: Anglo, n = 1470
– Spector & Jex, 1991, n = 232
• Stressors
– Conflict, constraints, role ambiguity, workload
• Strains
– Anxiety, anger, depression, frustration, turnover intent, job satisfaction, symptoms

Analyses

• Curvilinear regression
• Strain = Stressor + Stressor²
• Plot by substituting values of the stressor
– Similar to plotting moderated regression

Example
• Ŷ = 10 − 2X + .2X²
• X ranges from 0 to 20
• Substitute values 5 points apart (0, 5, 10, 15, 20)

Computations (b1 = −2, b2 = .2)

X     b1X    X²     b2X²   b1X + b2X²   10 + b1X + b2X²
0     0      0      0      0            10
5     −10    25     5      −5           5
10    −20    100    20     0            10
15    −30    225    45     15           25
20    −40    400    80     40           50
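A minimal sketch reproducing the table:

```python
# Minimal sketch: substitute values into Y = 10 - 2X + .2X^2.
b0, b1, b2 = 10, -2, 0.2
for x in (0, 5, 10, 15, 20):
    # Prints 10, 5, 10, 25, 50: a U shape with its minimum at X = 5.
    print(x, b0 + b1 * x + b2 * x**2)
```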

[Plot of Ŷ against X for the curvilinear regression: a U-shaped curve with a minimum of 5 at X = 5, rising to 50 at X = 20]

Results

• Significance for workload
• Limited significance for
– Conflict
– Constraints
– Role ambiguity

Strain            CISMS   Spector-Jex   Direction
Anxiety           ns      *             U
Frustration       --      *             U
Intent            *       *             U
Job satisfaction  *       *             Inverted U
Symptoms          *       ns            U

Week 8: Field Research and Evaluation

• Field research
– Done in naturalistic settings
• Experimental
• Quasi-experimental
• Observational
• Evaluation
– Organizational effectiveness
– Figuring out if things work
• Organizations
• Programs
• Interventions

Challenges To Field Research

• Access to organizations/subjects
• Lack of control
– Distal contact with subjects (surveys)
– Who participates
– Contaminating conditions
• Participants discussing the study
• Lack of full cooperation
• Organizational resistance to change

Creative and Varied Approaches

Accessing Subjects

• Define the population needed for your purpose
– People
– Jobs
– Organizations
• List likely locations to access populations
• Consider ways to access locations

Defining Populations: People
• Characteristics of people
– Demographics
• Age, education, gender, race
– KSAOs
– Occupations
• Do variables of interest vary across occupations?
• Single or multiple occupations
• Single controls a variety of factors
• Multiple
– More variance
– Tests of occupation differences
– Greater generalizability

Defining Populations: Organizations
• Characteristics of organizations needed
– Occupations represented
– Characteristics of people represented
– Characteristics of practices
• Single versus multiple organizations
– Single adds control
– Multiple adds
• Variance
• Tests of organization characteristics
• Generalizability

Accessing Participants: Students

• Psychology student subject pool
• Employed students in classes, e.g., night classes
• Advantages/limitations
– Easily accessed on many campuses
– Cheap
– Cooperative
– Younger and more educated than average
– Heterogeneous jobs/organizations
– Often part-time and temporary jobs
– Potential work-school conflict

Accessing Participants: Nonstudents

• Organizations: access can be a problem
• Association mailing lists: single occupations
• Web search: government employees
• Clubs, churches, nonwork organizations
• Unions
• General surveys
– Phone, mail, door-to-door, street corner

Approaching Organizations

• Sell to management
• Appeal to the value of science not ideal
• What's in it for them?
• Partnership
– Free service: employee survey, job analysis
– Address their problem
– Piggyback your interest

Modes of Approach
• Personal contact: networking
– Give talks to local managers, e.g., SHRM
– Students in class
– Approach based on a known need
• Hospitals and violence
• Consider the audience
– Psychologist vs. nonpsychologist
– HR vs. non-HR
– Level of sophistication about the problem
– Don't assume you know more than the organization about their problem

Project Prospectus

• One-page nontechnical prospectus
– Purpose: clear and succinct
– What you need from them
– What it will cost (e.g., staff time)
– What's in it for them
– What products you will provide to them
– Timeline

Example

• Determine factors leading to patient assaults on nurses in hospitals
• Need to survey 200 nurses with a questionnaire
• Questionnaire will take 10-15 minutes
– Can be taken on break or at home
• Will provide a report to the organization about
– How many nurses have been assaulted
– The impact of the assaults on them
– Factors that might be addressed to reduce the problem
• Would like to conduct the study next month and provide the report within 60 days of completion

Partnerships

• Academics and nonacademics
• Projects come from mutual interests
• Piggyback onto an organizational project
• Internship
– Johannes Rank's training evaluation
• Issues
– Proprietary results
– Organizational confidentiality

Program Evaluation/Organizational Effectiveness

• Program evaluation
– Education
– Human services
– Determining if a program is effective
• Organizational effectiveness
– More generic
– Determining effectiveness of an organization
– Determining effectiveness of an activity/unit

Formative Approach

• Focus on processes
• Often used in a developmental approach
• Can be qualitative
• Can be quantitative
• Action research
– Identify problem, try solution, evaluate, revise

Summative Approach

• Assess if things work
• Often quantitative
• Experimental or quasi-experimental design
• Compare to control group(s)
• Utility
– Return on investment (ROI)
• Private sector
• Profitability
– Cost/outcome (bang for the buck)
• Military: literally
• Nonmilitary: cost per unit of outcome

Steps In Determining Effectiveness

1. Define goals/objectives

2. Determine criteria for success

3. Choose design
• Single group vs. multigroup

4. Pick measures

5. Collect data

6. Analyze/draw conclusion

7. Report/Feedback

8. Program improvement

Week 9: Survey Methods & Constructs

• Survey methods
• Sampling
• Cross-cultural challenges
– Measurement equivalence/invariance
• Reflective vs. formative scales
• Artifactual constructs

Survey Settings
• Within employer organization
• Within other organization
– University
– Professional association
– Community group
– Club
• General population
– Phone book
– Door-to-door

Methods

• Questionnaire
– Paper-and-pencil
– E-mail
– Web
• Interview
– Face-to-face
– Phone
– Video-phone
– E-mail?
– Instant message?

Population

• Single organization
• Multiple organizations
– Within industry/sector
• Single occupation
• Multiple occupations
• General population
– Employed students

Sample Versus Population

• Survey everyone in the population vs. a sample
– Single organization or unit of an organization
• Often the survey goes to everyone
– Multiple organizations
• Kessler: all psychology faculty
– Other organization
• Professional association
• Often survey everyone
– General population

Sampling Definitions

• Population
– Aggregate of cases meeting a specification
– All humans
– All working people
– All accountants
– Not always directly measurable
• Sampling frame
– List of all members of a population to be sampled
– List of all USF support personnel

Sampling Definitions cont.

• Stratum
– Segment of a population
– Divided by a characteristic
• Demographics (male vs. female)
• Job level (manager vs. nonmanager)
• Job title
• Occupation
• Department/division of organization

Representativeness of Samples

• Representative
– Sample characteristics match the population
• Nonrepresentative
– Sample characteristics do not match the population
• Some procedures more likely to yield representative samples

Nonprobability Sampling

• Nonprobability sample
– Every member of the population doesn't have an equal chance
• Representativeness not assured
• Types
– Accidental or convenience
– Snowball
– Quota: accidental but stratified
• Choose half male/female
– Purposive: handpick cases that meet criteria
• Pick full-time employees in a class

Probability Sampling

• Random sample from a defined population
• Stratified random sample
– More efficient than simple random sampling
• Cluster or multistage
– Random selection of aggregates
• Select organizations stratified by industry (see the sketch below)
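A minimal sketch of stratified random sampling with pandas; the sampling frame and stratum variable are hypothetical:

```python
# Minimal sketch: draw 10% within each stratum so the sample
# matches the population mix. The frame is hypothetical.
import pandas as pd

frame = pd.DataFrame({
    "employee_id": range(1, 1001),
    "job_level": ["manager"] * 200 + ["nonmanager"] * 800,
})

sample = frame.groupby("job_level", group_keys=False).sample(
    frac=0.10, random_state=42)
print(sample["job_level"].value_counts())  # 20 managers, 80 nonmanagers
```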

International Research Methods
• Cross-cultural vs. cross-national (CC/CN)
• Purposes
– Research within a country/culture (emic)
– Generalize a finding/theory
– Compare countries/cultures (etic)
– Test culture hypotheses across groups defined by culture
• CC/CN differences
– Within country
– Across countries
– Across regions
• North America vs. Latin America

Challenges of CC/CN Research

• Equivalence of samples
• Measurement equivalence/invariance (MEI)

Sample Equivalence

• What is it about samples that causes differences?
• Confounding of country with sample characteristics
– Occupations
• Can vary across countries
– Industry sectors
• Private sector doesn't exist universally
– Organization characteristics
– People characteristics (e.g., demographics)
• Gender breakdown differs across countries

Instrument Issues
• Linguistic meaning
– Translation
– Back-translation
• Calibration
– Numerical equivalence
– Cultural response tendencies
• Asian modesty
• Latin expansiveness
• Measurement equivalence/invariance
– Construct validity
– Factor structure

Measurement Equivalence/Invariance (MEI)

• Construct validity
– Same interpretation across groups
• SEM and IRT approaches
– Based on item inter-relationship similarity
– Factor structure
– Item characteristic curves

SEM Approach

• Equality of item variances/covariances
• Equal corresponding loadings
• Form invariance
– Equal number of factors
– Same variables load per factor

IRT Approach

• Equivalent item behavior
• For unidimensional scales
• Better developed for ability tests
• Often conclusions similar to SEM

Eastern versus Western Control Beliefs at Work

Paul E. Spector, USF
Juan I. Sanchez, Florida International University
Oi Ling Siu, Lingnan University, Hong Kong
Jesus Salgado, University of Santiago, Spain
Jianhong Ma, Zhejiang University, PRC

Applied Psychology: An International Review, 2004

Background

• Cross-cultural study of control beliefs
• Americans vs. Chinese
• Locus of control beliefs vary
– Chinese very external vs. Americans
– Suggests a Chinese passive view of the world
– Look to others for direction

Primary vs. Secondary Control

• Primary: direct control of the environment
• Secondary: adapt the self to the environment
– Predictive: enhance ability to predict events
– Illusory: focusing on chance, e.g., gambling
– Vicarious: associate with powerful others
– Interpretive: looking for meaning
• Asians more secondary

Rothbaum, Weisz, & Snyder

Socioinstrumental Control

• Control through social networks
• Build social networks
• Cultivate relationships

Juan Sanchez

Purpose

• Develop new control scales
– Secondary control
– Socioinstrumental control
• Avoid ethnocentrism by using international item writers

Pilot Study Method

• Develop definitions of constructs
• International team wrote 87 items
– 1 American, 2 Chinese, 2 Spanish
• Administered to 126 Americans
• Item analysis

Sample Items

Secondary
• I take pride in the accomplishments of my superiors at work (vicarious control)
• In doing my work, I sometimes consider failure in my work as payment for future success (interpretive control)

Socioinstrumental
• It is important to cultivate relationships with superiors at work if you want to be effective
• You can get your own way at work if you learn how to get along with other people

Pilot Study Results

• Secondary control scale
– 11 items
– Alpha = .75
– r = -.44 with Work LOC
• Socioinstrumental control: 24 items
– Alpha = .87
– r = .26 with Work LOC
• Two scales: r = .12 (nonsignificant)

Main Study Method

• Subjects from HK, PRC, US
– Ns = 130, 146, 254
– Employed students & university support staff
– US from FIU and USF
• Work LOC & new scales
• Stressors
– Autonomy, conflict, role ambiguity & conflict
• Strains
– Job satisfaction, work anxiety, life satisfaction

Coefficient Alphas

Scale               HK    PRC   US
Secondary           .87   .70   .76
Socioinstrumental   .91   .88   .91

Mean Differences

Variable    HK      PRC     US      R²
Secondary   43.8A   46.0B   45.6B   .02
Socio       93.4A   97.1B   91.9A   .01
Work LOC    51.0B   57.0C   40.2A   .38

Correlations With Work LOC

Variable           HK    PRC    US
Secondary          .33   -.55   -.21
Socioinstrumental  .51   -.59   .23

Significant Correlations

Variable           HK                            PRC       US
Secondary          Job sat, Autonomy             Job sat   All
Socioinstrumental  Role conflict                 Job sat   Autonomy
Work LOC           Job sat, Autonomy, Conflict   None      All

Conclusions

• Procedure created internally consistent scales
• Little mean difference, China vs. US
• Work LOC: huge mean difference
• Relationships differ across samples

Nature of Indicators

• Reflective vs. formative
• Determines which statistics are meaningful
• Affects conclusions

Reflective or Effect Indicator

• Indicator caused by or reflects the underlying construct
• Change in construct → change in indicators
• Classical test theory
• Measures of attitudes and personality
• Needs internal consistency
• Factor analysis meaningful… sometimes

Formative or Causal Indicator
• Indicator defines the underlying construct
• Items don't reflect a single construct
• Items not interchangeable
• Change in indicator → change in construct
• Examples
– Socio-economic status
• Education, income, occupational status
– Behavior checklist (CWB or OCB)
– Symptom checklist
• Internal consistency not always high
• Factor analysis might not be meaningful

Formative Indicator Example: Personality and CWB

• Trait anger and trait anxiety: Spielberger STPI
• CWB: Counterproductive Work Behavior Checklist
• N = 78 miscellaneous employees, community sample
• Trait anger & CWB: r = .37
• Trait anxiety & CWB: r = .30
• Can we assume anger & anxiety relate to all behaviors?

Fox, Spector, Miles, 2001, Journal of Vocational Behavior

Item                                                            Trait Anger   Trait Anxiety
Purposely wasted your employer's materials/supplies             .09           .08
Daydreamed rather than did your work                            .45*          .42*
Complained about insignificant things at work                   .52*          .34*
Purposely did your work incorrectly                             .07           .11
Came to work late without permission                            .08           .10
Stayed home from work and said you were sick when you weren't   .20           .13
Purposely damaged a piece of equipment or property              .14           .04
Purposely dirtied or littered your place of work                -.02          .19
Stolen something belonging to your employer                     .16           -.08
Took supplies or tools home without permission                  .05           -.03
Tried to look busy while doing nothing                          .34*          .36*
Took money from your employer without permission                .08           -.05

How Do You Know?

• Theoretical interpretation
• Are items equivalent forms of the construct?
• Do items correlate?
• Time sequencing: which changed first?
– Does an increase in SES affect education and income equally?
• No statistical test exists
• No automatic inference machine

Artifactual Constructs: Overinterpretation of Factor Analysis

• Tendency to assume factors = constructs
• If items load on different factors, they reflect different constructs
• Sometimes item characteristics are confounded with factors
– Wording direction

General Assumptions About Item Relationships

• Related items reflect the same construct
• Unrelated items reflect different constructs
• Clusters of items reflect the same construct
• Factor analysis is magic

Dominance Model Assumptions About Measurement

• People agree with items in the direction of their position
  – If I have a favorable attitude, I will agree with all favorable items
• People disagree with items opposite to the direction of their position
  – If I have a favorable attitude, I will disagree with all unfavorable items
• Responses to oppositely worded items are a mirror image of one another
  – If I moderately agree with positive items, I will moderately disagree with negative items

Ideal Point Principle: Thurstone

• Items vary along a continuum
• People's positions vary along a continuum
• People agree only with items near their position
• Oppositely worded items are not always mirror images
• Items of the same value relate strongly
• Items of different value relate weakly

[Figure: agreement rises and falls as item value on the construct continuum approaches and passes the person's position]

Ideal Point Principle

[Figure: items A through E ordered along a pessimism-optimism continuum]

Difficulty Factors

• Ability tests
• Items vary in difficulty
• Items of the same difficulty relate well
  – Those who get one easy item right tend to get all easy items right
• Items of varying difficulty relate less well
  – Those who get hard items right tend to get easy items right
  – Those who get easy items right don't all get hard items right

Example

People Easy Items Hard Items

Low Ability 80% Correct 0% Correct

High Ability 100% Correct 80% Correct

Effects On Statistics

• Easy items strongly correlated
• Hard items strongly correlated
• Easy items relate modestly to hard items
• Factor analysis produces factors based on difficulty
• Difficulty factors reflect item characteristics, not people characteristics
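A minimal simulation of this point (assumptions: one normal latent ability, items passed when ability plus noise exceeds a threshold; all values made up):

import numpy as np

rng = np.random.default_rng(42)
n = 100_000
ability = rng.normal(size=n)
noise = lambda: rng.normal(scale=0.5, size=n)

easy_a = (ability + noise() > -1.0).astype(float)  # most examinees pass
easy_b = (ability + noise() > -1.0).astype(float)
hard_a = (ability + noise() > 1.5).astype(float)   # few examinees pass

r = lambda a, b: np.corrcoef(a, b)[0, 1]
print(r(easy_a, easy_b))   # higher: items share the same difficulty
print(r(easy_a, hard_a))   # lower: same trait, different difficulty

Even though a single ability drives every item, the easy-hard correlation is attenuated, which is exactly what lets factor analysis manufacture "difficulty factors."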

Summated Ratings

• Items of the same scale value relate strongly
• Items of different value relate modestly
• Oppositely worded items have different scale values
• Scatterplots triangular, not elliptical
  – High-Low, Low-High, and Low-Low common
  – Few High-High
• Often distributions are skewed
• Mixed-value items produce factors according to the scale value of the item
• Might not reflect underlying constructs

Plot of Moderate Positively vs. Negatively Worded Job Satisfaction Items

[Figure: scatterplot of positively worded item scores against negatively worded item scores]

Plot of Extreme Positively vs. Negatively Worded Job Satisfaction Items

[Figure: scatterplot of extreme positively worded item scores against extreme negatively worded item scores]

Conclusions

• Be wary of factors where content is confounded with item direction

• Be wary when assumption of homoscedasticity is violated

• Be wary when items are extremely worded
• Require more evidence than factor analysis alone

Week 10: Theory

What Is A Theory?

• Bernstein
  – Set of propositions that account for, predict, and control phenomena
• Muchinsky
  – Statement that explains relationships among phenomena
• Webster
  – General or abstract principles of science
  – Explanation of a phenomenon

Types of Theories

• Deductive
  – Starts with theory
  – Data used to support/refute the theory
• Inductive
  – Starts with data
  – Theory explains the observations

Deduction

• Reasoning from premise to case
• Example

– All doctoral students own laptops– Chris is a doctoral student– Therefore: Chris owns a laptop

Induction

• Reasoning from observed cases to all cases
• Example

– All doctoral students in my class have laptops– Therefore: All doctoral students have laptops

Inference To Best Explanation

• Conclusion based on likelihood
• Example
  – Apartment lock broken and laptop missing
  – Most feasible explanation
    • Must have been a thief
  – Alternative
    • Roommate stole the laptop and then faked a break-in

What Does Probability Mean?

• Frequency
  – .25 chance that a coal miner gets lung disease
  – 25% of coal miners will get lung disease
• Subjective interpretation
  – .01 chance of finding life on Mars
  – Does not mean that 1% of Mars-like planets have life
  – Expresses our confidence in finding life

Induction and Probability

• Inductive approach based on the chance that our inference is correct

• Inferential statistics– Conclusion based on sample data– Generalizing to broader population

Covering Law Model of Science
• Hempel
• Covering law
  – General laws
  – Particular facts
  – Phenomenon
• Begin with a research question
  – Why did my plant die?
  – Plants need water. (general law)
  – I didn't water my plant. (particular fact)
  – Therefore: My plant died. (phenomenon)

Advantages

• Integrates and summarizes large amounts of data

• Can help predict
• Guides research
• Helps frame good research questions

Disadvantages

• Biases researchers
• "Theory, like mist on eyeglasses, obscures facts" (Charlie Chan, in Muchinsky)
• "Facts are the enemy of truth" (Levine's boss)
• A distraction, as research does not require theory (Skinner)

Hypothesis

• Statement of expected relationships among variables

• Tentative
• More limited than a theory
• Doesn't deal with process or explanation

Model

• Representation of a phenomenon
• Description of a complex entity or process

– Webster

• Boxes and arrows showing causal flow

Observable Vs. Unobservable

• Some phenomena are directly observable
  – Easily detectable with our senses
  – E.g., some behaviors
• Some phenomena are unobservable
  – Inferred indirectly
  – E.g., internal cognitive states
• Theories about unobservables are underdetermined
  – Competing theories can explain the same observations

Theoretical Construct

• Abstract representation of a characteristic of people, situation, or thing

• Building blocks of theories

Paradigm

• Accepted scientific practice
• Rules and standards for scientific practice
• Law, theory, application, and instrumentation that provide models for research
  – Thomas Kuhn

What Are Our Paradigms?

• Behaviorism?
• Environment-perception-outcome approach
• Surveys

Structure of Scientific Revolutions
Thomas Kuhn

“An apparently arbitrary element, compounded of personal and historical accident, is always a formative ingredient of the beliefs espoused by a given scientific community at a given time.” (p. 4)

“research as a strenuous and devoted attempt to force nature into the conceptual boxes supplied by professional education.” (p. 5)

History of Theory in Psychology
• Behaviorism: rejection of theory
  – More consistent with natural science
  – Avoid the unobservable
  – "Dustbowl empiricism" criticism
• "Cognitive revolution": embracing models and theory
  – Unobservables commonly studied
• Organizational research
  – Theory as paramount
• The empiricists strike back?
  – Hambrick and Locke

Current State of Theory

• Almost required in introductions
  – Marginalizes the importance of data
  – Ideas treated as more important than facts
• Scholarship vs. science
  – Scholarly writing: making good arguments
  – Scientific writing: describing/explaining phenomena based on data

Misuse of Theory

• Post hoc: pretending theory drove the research
• Citing theories as evidence
• Claiming a hypothesis is based on a theory it is not based on
• Sprinkling citations to irrelevant theories
  – (Sutton & Staw)

Example from Stress Research

• Hobfoll's Conservation of Resources (COR) theory
  – People are motivated to acquire and conserve resources
  – Demands on resources and threats to resources are stressful
• People routinely cite COR theory in support of stressor-strain hypotheses
  – No measure of resources or threat
  – Using a theory to support a hypothesis that does not derive from the theory

Why Do People Do This?

• Pressure for theory
• Everyone else is doing it
  – Descriptive norms
• Belief that this is real science
• Playing the game

Backlash

• Increasing criticism of the obsession with theory
  – Hambrick & Locke
  – Harry Barrick: half-life of models in cognitive psychology
  – AOM sessions
  – One unnamed reviewer
  – Informal interactions

Proper Role of Theory in Science

• Goal of science is to understand the world
• Science is evidence-based, not intuition-based
  – Data are the heart of science
  – Theory is the current state of understanding of how/why things work
• Theory is the tail, not the dog
• There is a place for both empiricism and theory

Natural Science

• More focused on data
• Longer timeframe
  – Decades and centuries of data before theory
• "Social science theory is a smokescreen to hide weak data" (USF chemist)

Levels of Explanation

• Atomic or chemical
• Neural
• Individual cognitive
• Social
• Difficult to generalize across levels
  – Deducing properties from one level to another
• Lower level is not more "scientific" or valuable

• Hierarchy of fields: I/O, Social, Cognitive, Cog/Neuro, Neuroscience, Behavioral Genetics
  – "I/O is just applied Social."
  – "Social is just applied Cognitive."
  – "Cognitive is just applied Cog/Neuro."
  – "Cog/Neuro is just applied Neuroscience."
  – "It's nice to be on top."

Use Theory Properly

• Hypotheses: explicitly derive them from a theory
• Don't claim support from a theory
• Often better to mention theories in the discussion
• Multiple theories useful in a comparative test

Additional Reading

• Accessible overview of philosophy of science

• Okasha, S. (2002). Philosophy of Science. Oxford, UK: Oxford University Press.

Week 11: Levels of Analysis

Level

• Nature of the sampling unit– Person– Couple– Family– Group/Team– Department– Organization– Industry sector– Country

Individual Vs. Higher Level

• Most psychological constructs are person level
  – Attitude
  – Performance
• Some constructs are higher (aggregate) level
  – Organizational climate
  – Team performance

Types of Aggregates

• Sum of individuals
  – Sales team performance
• Consensus of individuals
  – Mean of individuals
  – Majority votes
• Aggregate-level data
  – Job analysis observer ratings for a job title
  – Organization profitability
  – Team characteristics (size, gender breakdown)
  – Turnover rates

Aggregate As Sum of Individuals

• Sum individual characteristics
  – Ask individuals about their own values
  – Sum the values
• Direct assessment of the aggregate
  – Ask individuals about people in their unit
  – “How do your team members feel about…?”
  – Sum the values

Aggregate As Consensus

• Shared perceptions
  – Climate
• People within a unit should agree
• Assess extent of agreement
  – Intraclass correlation, ICC(1)
  – Rwg

Partition Variance

• Total: variation among subjects
• Between: variation among group means
• Within: variation among subjects within groups
• Total = Between + Within

Intraclass Correlation ICC(1)

• Multiple raters of multiple targets
• Extent of agreement among raters of the same target
• Var(Total) = Var(Between targets) + Var(Within targets)
• ICC(1) = Var(Between) / Var(Total)

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.
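A minimal computational sketch of this variance partition (a descriptive decomposition, not the full Shrout-Fleiss ANOVA estimator; the ratings are made up):

import numpy as np

# Rows = raters, columns = targets; made-up ratings.
ratings = np.array([[4, 2, 5],
                    [5, 1, 4],
                    [4, 2, 4]])

var_between = ratings.mean(axis=0).var()        # variance of target means
var_within = ratings.var(axis=0).mean()         # average within-target variance
icc1 = var_between / (var_between + var_within) # share of variance due to targets
print(round(icc1, 2))                           # 0.88 for these ratings

With equal numbers of raters per target, between plus within variance recovers the total, so the ratio is the proportion of variance attributable to targets.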

Rwg

• Compares the observed variance of ratings to that expected by chance
• Rwg = (Var(Expected) – Var(Observed)) / Var(Expected)
• Var(Expected) = (A² – 1)/12
  – A = number of rating categories
  – Assumes a uniform distribution of responses

James, L. R., Demaree, R. G., & Wolf, G. (1984). Estimating within-group interrater reliability with and without response bias. Journal of Applied Psychology, 69, 85-98.

Example
• 10 raters: 5, 2, 3, 5, 2, 3, 1, 4, 3, 4
• A = 5 (5 rating choices)
• Var(Expected) = (5² – 1)/12 = 2
• Var(Observed) = 1.73
• Rwg = (2 – 1.73)/2 = .135
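A minimal sketch of the same computation in Python (numbers taken from the example above):

import numpy as np

ratings = np.array([5, 2, 3, 5, 2, 3, 1, 4, 3, 4])
A = 5                                   # number of rating categories
var_expected = (A**2 - 1) / 12          # uniform (random) response variance = 2.0
var_observed = ratings.var(ddof=1)      # sample variance of the 10 ratings ≈ 1.73
rwg = (var_expected - var_observed) / var_expected
print(round(rwg, 2))                    # ≈ .13, matching the slide's .135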

Independence of Observations

• Independence is a statistical assumption
• Subjects nested in groups
  – Subjects influence one another
  – Observations nonindependent
• Example
  – Effects of supervisory style on OCB
  – Subjects nested in workgroups
  – Style ratings within a supervisor are nonindependent
  – Subjects influence one another
  – Shared biases

Confounding of Levels

• Individual case per unit
  – One person per organization
• Nonindependence issue resolved
• Confounding of individual vs. organization
  – Is the relation due to the individual or the organization?
• Potential problem for inference
  – Self-report of satisfaction and organizational performance
  – Could be shared bias: a happy employee reports greater performance

Ecological Fallacy

• Drawing inferences from one level to another
• When measurement and question don't match
  – Job satisfaction vs. group morale
  – Individual behavior vs. group behavior
• Improper inference
• Can't draw conclusions across levels
• Empirically, data reflect only their own level

If it works for individuals, why won’t it work for groups?

Correlation and Subgroups

• Individual: no correlation
• Individual: no correlation; group: positive correlation
• Individual within group: no correlation; group: positive
• Individual within group: no correlation; group: negative
• Individual within group: positive correlation; group: none
• Individual within group: positive correlation; group: negative

Pay and Job Satisfaction

• Question 1: Job level
  – Do better-paying jobs have more satisfied people?
• Question 2: Individual level
  – Are better-paid people within jobs more satisfied?

[Figure: job satisfaction plotted against salary, with separate clusters for nurses and physicians]

• Pay-job satisfaction correlation
  – Mixed jobs: r = .17 (Spector, 1985)
  – Single job: r = .50 (Rice et al., 1990)

Pooled Within-Group Correlation
Remove Effects of Group Differences in Means

Variance/Covariance

• SS = Σ(X − GMX)² = Σ(X − GMX)(X − GMX)
• SP = Σ(X − GMX)(Y − GMY)

Group Versus Grand Means

[Figure: two-group scatterplots contrasting deviations taken from each group's own mean with deviations taken from the grand mean]

Pooled Within-Group Correlation
Remove Effects of Group Differences in Means

Correlation:

  r_xy = SP_xy / sqrt(SS_x · SS_y)

Pooled within-group correlation:

  r_xy = (SP_x1y1 + SP_x2y2) / sqrt((SS_x1 + SS_x2) · (SS_y1 + SS_y2))

where 1 and 2 refer to groups 1 and 2
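A minimal sketch of the pooled computation (data are made up; each group's deviations are taken from its own mean before pooling):

import numpy as np

def pooled_within_r(groups):
    # groups: list of (x, y) array pairs, one pair per group
    ssx = ssy = spxy = 0.0
    for x, y in groups:
        xd, yd = x - x.mean(), y - y.mean()   # deviations from group means
        ssx += xd @ xd
        ssy += yd @ yd
        spxy += xd @ yd
    return spxy / np.sqrt(ssx * ssy)

# Two groups with very different means but identical within-group slopes:
g1 = (np.array([1., 2., 3.]), np.array([2., 3., 4.]))
g2 = (np.array([11., 12., 13.]), np.array([1., 2., 3.]))
print(pooled_within_r([g1, g2]))   # 1.0 despite the large mean differences

Centering within groups removes the group mean differences, so the pooled r reflects only the individual-level relationship.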

Multi-level Studies

• Levels numbered from lower to higher
• Example: multiple teams across multiple organizations in multiple countries
  – Level 1 = person
  – Level 2 = team
  – Level 3 = organization
  – Level 4 = country

Two-Level

• Most studies are two-level
• Need multiple units at each level for adequate power
• Can analyze within a level or across levels
  – Main effects for people and for teams
  – Interaction of level 1 with level 2

Experience Sampling or Daily Diary

• Multiple observations on the same person
  – Daily
  – Multiple times/day
• Two-level
  – Level 1: within person
  – Level 2: between persons

• Can test if IV fluctuation leads to DV fluctuation within people

• Can look at DV before and after event

Hierarchical Linear Modeling

• Statistical technique
• Deals with nonindependence
• Analyzes data at two or more levels
  – Individuals in teams
  – Teams in organizations
• Cross-level questions (see the sketch below)
  – Does team-level diversity affect the relation between individual satisfaction and OCB?
  – Does individual-level satisfaction affect team-level performance?
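A minimal sketch of a two-level random-intercept model using the statsmodels mixed-model routine (one of several HLM tools; the data file and the ocb, satisfaction, and team column names are hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teams.csv")               # hypothetical person-level dataset
model = smf.mixedlm("ocb ~ satisfaction",   # level-1 (person) relation
                    data=df,
                    groups=df["team"])      # level-2 (team) grouping
print(model.fit().summary())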

Levels of Behavior Aggregation

• How should behaviors be combined?

Overall Index

• Sum of multiple behaviors
• Skarlicki-Folger Retaliation: 17 items
• Bennett-Robinson Deviance: 24 items
• Spector-Fox Counterproductive Work Behavior Checklist (CWB-C): 45 items

Distinguish Target
• Robinson-Bennett 1995
  – Organization versus person target
  – Bennett-Robinson deviance scale
  – CWB-C

Bennett, R. J., & Robinson, S. L. (2000). Development of a measure of workplace deviance. Journal of Applied Psychology, 85, 349-360.

Robinson, S. L., & Bennett, R. J. (1995). A typology of deviant workplace behaviors: A multidimensional scaling study. Academy of Management Journal, 38, 555-572.

Five Dimensions of CWB-C
• Abuse
• Production deviance
• Sabotage
• Theft
• Withdrawal

Spector, P. E., Fox, S., Penney, L. M., Bruursema, K., Goh, A., & Kessler, S. (2006). The dimensionality of counterproductivity: Are all counterproductive behaviors created equal? Journal of Vocational Behavior, 68, 446-460.

More Dimensions
• Abuse
  – Physical
  – Verbal
• Production deviance
  – Loafing
  – Damaging
• Sabotage
• Theft
  – From coworker
  – From organization
• Withdrawal

Even Finer Grained

• Abuse
  – Physical
    • Weapon
    • No weapon
      – Slap
      – Punch
• Injure vs. not
• Injury needing medical treatment vs. not
• Reactive vs. proactive

Does It Matter?

• CWB-C: 45 items
• Job satisfaction
• N = 312

Correlation With Job Satisfaction
• Total: -.32*
  – CWB-organization: -.35*
  – CWB-person: -.19*
• Abuse: -.31*
• Withdrawal: -.22*
• Production deviance: -.19*
• Sabotage: -.14*
• Theft: -.05
• Individual items: 40% significant

Item r

Ran down employer to others -.60

Called in sick when not -.24

Worked slowly on purpose -.22

Insulted someone about their work -.20

Stole from employer -.08

Purposely dirtied or littered workplace -.03

Purposely damaged equipment -.02

Spread rumor -.02

Conclusion

• Behavior checklists are formative, not reflective
  – Items are not interchangeable
  – High levels of the composite do not mean high levels of all components
  – The same score can come from different sets of behaviors
• Do not cross levels with inference
  – Don't assume components have the same relation as the composite

Week 12: Statistical Inference

Null Hypothesis Significance Testing

• Purposes
  – Reduce subjectivity
  – Agreed-upon rules for claims
  – Based on a consistent metric
  – Communicates in a common language

• Based on probability

What Does Significance Mean?

• Results unlikely to be due to chance
• If the null were true, the observed results would be expected less than 5% of the time
• The null hypothesis is a comparison standard
  – Null: there is a zero relationship between variables
  – Alternative: there is a nonzero relationship

Limitations

• Sensitive to sample size
  – Small samples: low power
  – Large samples: small effects can be significant
• The problem is interpretation, not the method itself
• Does not indicate the probability that point estimates are correct
  – Point estimates have confidence intervals
• Failure to find significance does not mean the null is true

Critics’ Claims

• Can't be certain the true effect is zero if the test is nonsignificant
  – So what?
• Promotes dichotomous thinking
  – This is a limitation of the researcher, not the statistics

• Significance tests are misused.

Suggested Alternative

• Report confidence intervals and effect sizes
  – Can do so whether or not you test significance
• Report the CI and show that it does or does not include zero
  – How is this different from reporting significance?

Benefits of the Controversy

• Emphasis on appropriate use of significance testing

• More complete reporting
  – Effect sizes
  – Confidence intervals

Rule of Thumb Cutoffs

• Used in situations where significance tests are unavailable
  – Cannot reduce the situation to probabilities
• Based on opinions of authorities
  – Jum Nunnally
  – Peter Bentler
  – Larry James
• Based on past experience
• Based on simulations

Become Widely Accepted

• Internal consistency (Nunnally)
  – Coefficient alpha (α ≥ .70)
  – Basement for new research
  – Modified from the 1st edition (α ≥ .60)
• SEM fit statistics (Bentler & Bonett)
  – Criterion ≥ .90
  – < .90: the model can be substantially improved
  – Many fit statistics with different properties

Lance, Butts, Michels 2006

• Close look at the basis for cutoffs
• Attributed sources said something more complex
  – Nunnally
  – Bentler & Bonett
• Attributed source didn't really say it
  – James
• No specific source
  – Eigenvalue > 1 criterion

Four Criteria
• Well established
  – Coefficient alpha ≥ .70
  – SEM fit statistics ≥ .90
    • Other criteria sometimes used (e.g., .95)
• Seen occasionally
  – Rwg ≥ .70
• No longer very common
  – Eigenvalue > 1
  – Sometimes used as a starting point
  – Scree test more common

Statistical and Methodological Myths and Urban Legends

• SMA 2003
  – "Three Authors Speak Out On the Peer Review Process"
    • Art Bedeian, Mark Martinko, Paul Spector, Bob Vandenberg
  – Discussion afterwards with Larry Williams
• AOM 2004
  – First SMMUL
  – Standing room only
• SIOP (5 times)

SMMUL Publication

• Organizational Research Methods
  – Special feature
    • 2006
    • 2011
• Lance & Vandenberg (2009) edited book on SMMULs, NY: Routledge

Decimal Dust

• Bedeian et al.
• Misplaced precision in reporting statistics
• Point estimates are not precise, given typical sample sizes
• Don't over-report significant digits
  – Correlations: 2 digits
  – Means and SDs: 1 digit right of the decimal

Statistical Power

• Probability of finding a true effect
• Type II error
  – Failure to detect a true effect
• Four factors
  – One- or two-tailed test
  – Significance level
  – Sample size
  – Effect size

Increase Power
1. One-tailed test
2. Less stringent p level
3. Greater n
4. Bigger effect size

[Figure annotations: 1 and 2 move the rejection line to the right; 3 makes the distributions tighter; 4 moves the distribution to the left]

Standard Errors

• Factors that increase standard errors
  – Small sample size
  – Multicollinearity
    • Larger correlations among predictors

Regression Coefficients
Y = b0 + b1·X1 + b2·X2

  b1 = (SS_X2 · SP_X1Y − SP_X1X2 · SP_X2Y) / (SS_X1 · SS_X2 − SP_X1X2²)

  b2 = (SS_X1 · SP_X2Y − SP_X1X2 · SP_X1Y) / (SS_X1 · SS_X2 − SP_X1X2²)

  b0 = M_Y − b1·M_X1 − b2·M_X2
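A minimal sketch computing these coefficients from the sums of squares and products above (the generated data are illustrative):

import numpy as np

def two_predictor_coefs(x1, x2, y):
    d1, d2, dy = x1 - x1.mean(), x2 - x2.mean(), y - y.mean()
    ss1, ss2, sp12 = d1 @ d1, d2 @ d2, d1 @ d2
    sp1y, sp2y = d1 @ dy, d2 @ dy
    denom = ss1 * ss2 - sp12**2      # shrinks as multicollinearity grows
    b1 = (ss2 * sp1y - sp12 * sp2y) / denom
    b2 = (ss1 * sp2y - sp12 * sp1y) / denom
    b0 = y.mean() - b1 * x1.mean() - b2 * x2.mean()
    return b0, b1, b2

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)            # correlated predictors
y = 1 + 2 * x1 - 1 * x2 + rng.normal(size=200, scale=0.1)
print(two_predictor_coefs(x1, x2, y))           # ≈ (1.0, 2.0, -1.0)

Note how the denominator ties directly to the standard-error discussion that follows: the more the predictors correlate, the smaller the denominator and the less stable the estimates.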

Regression Coefficient Standard Errors
• Imprecision of prediction
• Sum of squares of the predictor
• Multicollinearity

Appropriate Comparisons

• Conclusions require direct tests
• Cannot conclude groups differ based on different p levels
• Example: Do groups differ on correlations?
  – Group 1: r = .20, significant
  – Group 2: r = .15, nonsignificant
  – N = 100 per group
  – z-test = .3593: nonsignificant
  – Differences of this size are likely by chance
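A minimal sketch of that direct test (Fisher r-to-z transform, then a z for the difference), reproducing the .3593 above:

import numpy as np
from scipy.stats import norm

def compare_rs(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)   # Fisher r-to-z
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))             # two-tailed p

z, p = compare_rs(.20, 100, .15, 100)
print(round(z, 4), round(p, 2))               # 0.3593, p ≈ .72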

Comparing Correlation Magnitude

• Draw conclusions about size of relationship based on observed correlations

• Are correlations of that magnitude significant?

• Caution in multiple regression
• Will results replicate?

Do Data Tell a Consistent Story? An Example

• Cech, E., Rubineau, B., Silbey, S., & Seron, C. (2011). Professional role confidence and gendered persistence in engineering. American Sociological Review, 76, 641-666.

• Does role confidence lead to women’s failure to complete an engineering degree?

Prospective Survey Study

• Freshman year
  – Major
  – Confidence
    • Expertise: developing skills in college
    • Professional: satisfaction with career
• Senior year
  – Completed major
  – Transferred to a STEM vs. non-STEM major

Author’s Conclusion

• “Instead, we find that professional role confidence is significantly associated with engineering persistence, and that its differential distribution between men and women contributes to gender segregation in engineering.” (p. 656)

Their Basis for Conclusion

• Gender differences in confidence
  – Expertise: Female = 3.09, Male = 3.44
  – Career-fit: Female = 2.67, Male = 3.01
  – Both significant (t-test)
  – N: Female = 125, Male = 163
• Gender differences in completion
  – Female = 77.2%, Male = 82.5% (significant t-test)
• Expertise (not professional) confidence significant in regression
  – 11 control variables (including dummy-coded college)
  – 6 predictors

Closer Look

• Correlations in appendix
• Correlations with engineering completion
  – Expertise confidence: r = .006 (ns)
  – Career-fit confidence: r = -.049 (ns)
• Confidence does not predict completion
• Why significant in regression?

No Direct Gender Comparison

• Authors' logic
  – Gender difference in confidence
  – Confidence relates to completion
  – Therefore, confidence causes the gender difference
• What do we need to draw this conclusion?

Where Do Students Go?

• Females more likely to switch within STEM– Female = 16.7% vs Male = 6.3% (Significant)

• Females more likely to graduate in STEM– Female = 93.9% vs. Male = 88.8%

Reasonable Conclusion

• Males more likely to stick with engineering
• Women more likely to switch to other STEM disciplines
• Confidence unrelated to engineering persistence
• Not clear why men are more likely to stick with engineering

Approach To Data Analysis

• Clean the dataset
  – Check inputting for accuracy
  – Check for out-of-range observations
    • Frequencies
    • Ranges
  – Item analysis
    • Item-remainder correlations
    • Coefficient alpha
  – Check that means and correlations make sense
    • Compare to other studies

Analysis From Simple To Complex

• Descriptive statistics
  – Means, SDs, ranges, frequency distributions
  – Look for restriction of range
  – Does everything look reasonable?
• Simple relationships
  – Correlations
  – t-tests
  – One-way ANOVAs
• Study results at length

Complex Statistics

• Run complex analyses only after you have a feel for the simple results
• Run complex analyses sequentially, with increasing complexity
• Compare complex with simple
  – Consistency in conclusions
  – Inconsistencies need to be explained
• Avoid assuming the complex analysis is correct
• Complex and simple analyses test different things
• The goal is a coherent explanation

Cautions
• Be aware of assumptions
• Be aware of the relative power of the analyses being compared
• Don't overlook methodological artifacts
• Go beyond cookbook interpretations
• Don't let hypotheses/theories blind you
• Avoid premature conclusions
• Avoid pure data mining
  – Type I error hunt

• Don’t be afraid to test alternatives

Week 13: Literature Reviews

Narrative Review

• Summary of research findings
• Qualitative analysis
• "Expert" analysis
• Based on evidence
• Room for subjectivity
• Classical approach

Meta-analysis

• Quantitative cumulation of findings
• Based on a common metric
• Many approaches
• Many decision rules
• Room for subjectivity in decision rules

Meta-Analysis: Hunter-Schmidt Approach

There are MANY ways to conduct meta-analysis

Use of Narrative Review
• Used almost exclusively before the 1990s
• Psychological Bulletin
• In-depth literature summary
• Brief overview vs. comprehensive review
  – Brief overviews are part of empirical articles
• Can contrast very different studies
  – Constructs
  – Designs
  – Measures
• Works with a small number of studies

Limitations to Narratives

• One person's subjective impression
• Different reviews, different conclusions
• Lacks decision rules for drawing conclusions
  – What if half the studies are significant?
• Difficulty with conflicting results
• Narratives often hard to read
• Narratives difficult to write

Narrative Review Procedure
• Define domain
• Decide scope (how comprehensive)
• Set inclusion rules
• Identify/obtain studies
• Read studies/take notes
• Organize review
  – Outline of topics
  – Assign studies to topics
• Write sections
• Draw conclusions

Meta-Analysis

• NOT an automatic inference machine
• Does NOT provide absolute truth
• Does NOT provide population parameters
  – Provides parameter estimates, i.e., statistics
  – Samples not always random or representative
• Has not revolutionized research
• Is just another tool that you need

Just Another Tool

Use of Meta-Analysis
• Dominant procedure today for reviews
• Published in most journals
• Can help settle inconsistencies in conclusions
• Often descriptive and superficial
• Allows for hypothesis tests
  – Moderators
• Requires highly similar studies
  – Constructs
  – Designs
  – Measures

Settling an Inconsistency
• Does HONS moderate job characteristics-outcome relationships?
• Six reviews, inconsistent conclusions
  – 2: no effect
  – 1: weak effect
  – 2: positive effect
  – 1: no conclusion
• Settle with meta-analysis

Spector, P. E. (1985). Higher-order need strength as a moderator of the job scope-employee outcome relationship: A meta-analysis. Journal of Occupational Psychology, 58, 119-127.

The Meta-Analysis

• Literature search
  – 20 studies
  – Correlations of core characteristics with outcomes
  – Compared high- versus low-HONS groups
• Results
  – Moderator effect found in 8 of 11 cases
• HONS moderator supported
• Low power likely contributed to the inconsistency

Limitations To Meta-Analysis
• Small number of studies meeting criteria
• Convenience sample of convenience samples
• Subjectivity of decision rules
  – Inclusion/exclusion rules
  – Statistics used
  – Procedures to gather studies
    • Journals
    • Dissertations
    • Unpublished
• Different reviewers, different conclusions
• Sometimes data are made up
• Need lots of studies

Meta-Analysis Procedure

• Define domain and scope
• Set inclusion rules
• Decide on the meta-analysis method
  – Artifact adjustments?
• Identify/obtain studies
• Code data from studies
• Conduct analyses
• Prepare tables
• Write paper/interpret results

Define Domain
• Choose topic
• Specify domain
  – Personality: Big Five vs. individual traits
• Define populations
  – Employees vs. students
• Define settings
  – Workplace vs. home
• Define types of studies
  – Group comparisons vs. correlations
• Define variable operationalizations
  – Self-reports vs. other reports

Apples Vs. Oranges

• Quantitative estimate of a population parameter
  – What is the population?
• Mean effect size across samples
• Assumes sample statistics assess the same thing
• Cumulating results across different constructs is not meaningful

Inclusion Rules
• Operationalizations as parallel forms
  – Measures of NA, neuroticism, emotional stability, trait anxiety
  – All trait measures
• Samples from the same population
  – All full-time working adults
  – Full-time = >30 hours/week
  – All American samples
• Equivalent designs
  – All cross-sectional self-report
• Journal-published studies vs. others

Meta-Analysis Method
• Many to choose from
• Nature of the studies
  – Group comparisons
  – Correlations
• Rosenthal
  – Describe the distribution of rs
  – Moderators as specific variables to test
• Hunter-Schmidt
  – Adjust for artifacts
  – Moderators indicated by more variance than expected

Effect Size Estimates
• Combine effect sizes
• Correlation as amount of shared variance
• Magnitude of mean differences:

  d = (M_Treatment − M_Control) / SD

  where d is the difference in means in SD units
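A minimal sketch with a pooled SD in the denominator (one common choice; the input values are made up):

import numpy as np

def cohens_d(m_treat, sd_treat, n_treat, m_ctrl, sd_ctrl, n_ctrl):
    # Pool the two group SDs, weighting by degrees of freedom.
    sd_pooled = np.sqrt(((n_treat - 1) * sd_treat**2 +
                         (n_ctrl - 1) * sd_ctrl**2) /
                        (n_treat + n_ctrl - 2))
    return (m_treat - m_ctrl) / sd_pooled

print(round(cohens_d(5.2, 1.0, 60, 4.8, 1.1, 60), 2))   # 0.38 SD units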

Identify/Obtain Studies

• Electronic databases (PsycINFO)
• Other reviews
• Reference lists of papers
• Conference programs/proceedings
• Listservs
• Write to authors in the area

Coding

• Choose variables to code
• Judgments about inclusion rules
• How to handle multiple statistics
  – Independent samples: treat separately
  – Dependent samples: average
• Sometimes ratings are made, e.g., quality
  – Interrater agreement

Variables To Code
• Effect sizes
• N
• Reliability of measures
• Names of measures
• Sample description
  – Demographics
  – Job types
  – Organization types
  – Country

• Design

Analysis

• Meta-analysis software
• Statistical package
  – Excel, SAS, SPSS
• Organize results
  – Tables by IV or DV

• Analysis of moderators

Interpret

• Often descriptive
  – Little insight other than mean correlations
  – Nothing new if results have been consistent
• Often superficial
• Can test hypotheses
• Effects of moderators
• Can inconsistencies be resolved?
• Suggest new directions or research gaps?

Rosenthal Approach

• Convert statistics to r
  – Chi-square from a 2×2 table
  – Independent-group t-test
  – Two-level between-group ANOVA
• Convert r to z
• Compute descriptive statistics
• Describe results in tables
• Meta-analysis as a summary of studies
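A minimal sketch of the conversion step (standard formulas for converting test statistics to r and averaging in the z metric; the study values are made up):

import numpy as np

def r_from_t(t, df):            # independent-groups t-test
    return np.sqrt(t**2 / (t**2 + df))

def r_from_chi2(chi2, n):       # chi-square from a 2x2 table (df = 1)
    return np.sqrt(chi2 / n)

rs = np.array([r_from_t(2.1, 98), r_from_chi2(4.5, 150), .25])
ns = np.array([100, 150, 80])

zs = np.arctanh(rs)                        # r-to-z
mean_z = np.average(zs, weights=ns - 3)    # inverse-variance (n - 3) weights
print(round(np.tanh(mean_z), 3))           # back-transformed mean r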

Rosenthal Descriptives

• Mean effect size
• Weighted mean
• Median
• Mode
• Standard deviation
• Confidence interval

Rosenthal Moderators

• Identify a moderator and relate it to effect sizes
• Correlate a characteristic of the study with r
• Shows if r is a function of the moderator

Moderator Example
• Satisfaction-turnover relationship
• Unemployment as moderator
• Found studies
• Contacted authors about where/when each was conducted
• Built a database of unemployment rates
• Correlated the unemployment rate with each study's r
• Unemployment correlated negatively with the satisfaction-turnover r

Carsten & Spector (1987), Journal of Applied Psychology

Schmidt-Hunter
• Convert effect sizes to r
• Compute descriptive statistics on r
• Collect artifact data
  – Theoretical variability
  – Unreliability
  – Restriction of range
  – Quality of study
• Use artifact distributions to estimate missing data
• Adjust the observed mean r to estimate rho
• Compare the observed SD to the theoretical SD after adjustments
• Residual variance suggests moderators

Estimating Missing Artifacts

• Estimate = make up data
• "The magic of statistics cannot create information where none exists" (Wainer)
• Uses existing data to guess what the missing data might have been
• Hall & Brannick (JAP, 2000): it is inaccurate
• A science of what might be rather than what is

Value of Artifact Adjustments

• Shows whether the variability in r is or isn't expected
• Shows variance due to differential reliability, restriction of range, etc.
• Requires that you have artifact data

Rosenthal Vs. H-S

• Both identify/code studies
• Both compute descriptive statistics
• r-to-z transformation
  – Rosenthal yes, H-S no
• H-S makes artifact adjustments
• H-S estimates rho vs. Rosenthal's mean r
• H-S advocates estimating unobservables
• Rosenthal deals only with observables
• They begin the same; H-S goes farther
• Rosenthal is similar to H-S "bare bones"

Why I Prefer Rosenthal

• Rho is a parameter for an undefined population
  – Convenience sample of convenience samples
  – Population = the studies that were done/found
• Unavailable artifact data
  – Uncomfortable estimating missing data
• Prefer to deal with observables
• Don't believe in an automatic inference machine
• Lots of competing methods

Week 14: Ethics and Research Integrity

Appropriate Research Practices

• Conducting research
  – Treatment of human subjects
  – Treatment of organizational subjects
• Data analysis/interpretation
• Disseminating results
  – Publication

• Peer reviewing

Ethical Codes

• Appropriate moral behavior/practice
• Accepted practices
• Basic principle: do no harm
• Protect dignity, health, rights, well-being
• Codes
  – APA
  – AOM

American Psychological Association Code

• Largely practice oriented
• Five principles
  – Beneficence and Nonmaleficence [do no harm]
  – Fidelity and Responsibility
  – Integrity
  – Justice
  – Respect for People's Rights and Dignity
• Standards and practices
• Applies to APA members
• http://www.apa.org/ethics/

Preamble

Psychologists are committed to increasing scientific and professional knowledge of behavior and people's understanding of themselves and others and to the use of such knowledge to improve the condition of individuals, organizations, and society. Psychologists respect and protect civil and human rights and the central importance of freedom of inquiry and expression in research, teaching, and publication. They strive to help the public in developing informed judgments and choices concerning human behavior. In doing so, they perform many roles, such as researcher, educator, diagnostician, therapist, supervisor, consultant, administrator, social interventionist, and expert witness.

APA Conflict Between Profession and Ethical Principles

• Restriction of advertising
  – Violation of the law
• Maximization of income for members
• Tolerance of torture
  – Convoluted statements
• Other associations manage to avoid such conflicts

Academy of Management Code

• Largely academically oriented
• Three principles
  – Responsibility
  – Integrity
  – Respect for people's rights and dignity
• Responsibility to
  – Students
  – Advancement of managerial knowledge
  – AOM and the larger profession
  – Managers and the practice of management
  – All people in the world

• http://www.aomonline.org/aom.asp?ID=&page_ID=239

Professional Principles

Our professional goals are to enhance the learning of students and colleagues and the effectiveness of organizations through our teaching, research, and practice of management.

AOM Vs. APA

• AOM
  – Consistent principles
  – Simpler
  – More directly relevant to organizational practice and research
  – No attempt to compromise ethics for profit

Principles Vs. Practice

• Principles clear in theory
• Ethical line not always clear
• Ethical dilemmas
  – Harm can be done no matter what is done
  – Conflicting interests between parties
    • Employee versus organization
    • Whose rights take priority?

Example: Exploitive Relationships

• Principle
  – Psychologists do not exploit persons over whom they have supervisory, evaluative, or other authority
• What does it mean to exploit?
• Professor A hires Student B as an RA
  – How much pay/compensation is exploitive?
  – How many hours/week demanded?
• What if the student gets a publication?

Example: Assessing Performance

• In academic and supervisory relationships, psychologists establish a timely and specific process for providing feedback to students and supervisees
• Is not giving an evaluation unethical?
• How often?
• How detailed?
• What if honest feedback harms the person's job situation?

Conducting Research

• Privacy• Informed consent• Safety• Debriefing• Inducements

Privacy

• Anonymity: best protection
  – Procedures to match data without identities
• Confidentiality
  – Security of identified data
    • Locked computer/cabinet/lab
    • Encoding data
    • Code numbers cross-referenced to names
  – Removing names and identifying information

Informed Consent

• Subjects must know what is involved
  – Purpose
  – Disclosure of risk
  – Benefits of research
    • To researcher/society
    • To subject
  – Privacy/confidentiality
    • Who has access to the data
    • Who has access to identities
  – Right to withdraw
  – Consequences of withdrawal

Safety

• Minimize exposure to risk
  – Workplace safety study: control group

• Physical and psychological risk

Debriefing

• Subjects' right to know
• Educational experience for students
• Written document
• Presentation
• Surveys: provide contact for follow-up
• Provide results in the future upon request

Inducements

• Pure volunteer: no inducement
• Course requirement
  – Is this coercion?
• Extra credit
• Financial payment
  – Is payment coercion?

Institutional Review Board: IRB

• Original purpose: protection of human subjects
• Current purpose: protection of the institution
  – Federal government requirement
    • We pay for government atrocities of the past
  – Government sanctions
  – Bureaucratic
  – Often absurd
  – Designed for invasive medical research

IRB Jurisdiction

• Institutions receiving federal research funds
• All funded research under its jurisdiction
• Cross-country differences
  – Canada is like the US
  – In China, IRBs don't exist

Types of Review
• Full
  – Approval lasts one year
• Expedited: research with limited risk
  – Data from audio/video recordings
  – Research on individual or group characteristics or behavior (including, but not limited to, research on perception, cognition, motivation, identity, language, communication, cultural beliefs or practices, and social behavior) or research employing survey, interview, oral history, focus group, program evaluation, human factors evaluation, or quality assurance methodologies
  – Approval lasts one year
• Exempt
  – Approval lasts five years

Exempt
• Project doesn't get board review
• Determined by a staff member
• You can't determine your own exemption
• Five-year approval
• Surveys, interviews, tests, observations
• Unless subjects are identified AND there is potential for harm to
  – Legal liability
  – Financial standing
  – Employability
  – Reputation

IRB Impact

• Best case: minor bureaucratic inconvenience
  – Protects institution
  – Protects investigator
• Worst case: chilling effect on research
  – Prevents certain projects
  – Ties up investigators for months

IRB: What Goes Wrong?

• Inadequate expertise
  – Lack of understanding of research
  – Applying the medical model to social science
• Going beyond authority
  – Copyright issues
• Abuse of power
• Refuge of the petty, small-minded tyrant

Research Vs. Practice

• Research = purpose, not activity
• Dissemination intent = research
  – Presentation
  – Publication
• Class demo: not research
• Management project: not research
  – Consulting projects as research projects
• Don't ethics apply to class demos?
  – Not IRB purview

Dealing with Organizations

• Who needs protection?
  – Employee
  – Organization
• Who owns and can see the data?
  – Researcher
  – Organization
• What if the organization won't play by IRB rules?
  – IRB has no jurisdiction off campus

Anticipate Ethical Conflicts

• Avoid issues
  – Don't know, can't tell
• Negotiate issues
  – Confidentiality
  – Nature of the report
  – Ownership of data
  – Procedures

Integrity Issues: Analysis

• Honesty in research
• Report what was done
  – Why Hunter-Schmidt procedures aren't unethical
    • Estimation procedures are transparent
• Bad data practices
  – Fabrication
  – Deleting disconfirming cases: trimming
  – Data mining: Type I error hunt
  – Selective reporting: only the significant findings

Dissemination

• Authorship credit
• Referencing
• Sharing data
• Editorial issues
  – Editor
  – Reviewers

Author Credit
• Authors: substantive contributions
  – What is substantive?
  – People vary in generosity
• Order of authorship
  – Order of contribution
  – Not by academic rank
  – Dissertation/thesis a special case
  – Last position for the senior person
• Authorship agreed to up front
• Potential for exploitation of students/junior colleagues

Slacker Coauthors
• When do you drop a coauthor whose contribution is
  – Late?
  – Not delivered at all?
  – Poor quality?
  – Less than you expected?

Submission
• One journal at a time
• One conference at a time
• Can submit to a conference and a journal
  – Prior to the paper being in press
• Almost all submission is electronic
  – Can be difficult and tedious
    • Break the paper into multiple documents
    • Enter each coauthor
• Most reviewing is blind
  – Only the editor knows authors/reviewers

Journal Review
• Nearly all first submissions are rejected
  – Reject: don't want to see it again
  – Revise and resubmit (R&R)
    • Will consider a revision if you insist (high risk)
    • Encourages resubmission
• Desk rejections: no review
• Feedback from 1 to 4 reviewers (mode 2)
• Feedback from the editor
• Multiple cycles of R&R can be required
  – Can be rejected at any step
• Tentative accept: needs minor tweaks
• Full accept: congratulations!

Steps To Publication
• Submit
• R&R
• Revise
  – Include a response to feedback
• Provisional acceptance
  – Minor revision
• Acceptance
  – Copyright release
  – Proofs
• In print
  – Entire process: 1 year or more

R&R
• More likely accepted than rejected
  – Depends on the editor
  – A good editor has few R&R rejects
• Work hard to incorporate feedback
• Argue points of disagreement
  – Additional analyses
  – Prior literature
  – Logical argument
• Don't be argumentative
  – Choose your battles
• Give it high priority

Author Role

• Make a good faith effort to revise
• Incorporate feedback
• Be honest about what was done

– Don’t claim you tried things you didn’t

• Treat editor/reviewers with respect

Editor Role

• Be an impartial judge
• Weigh input from authors and reviewers
• Be decisive
• Keep commitments
  – R&R is a promise to publish if things are fixed

• Treat everyone with respect

Reviewer Role
• Objective review
• No room for politics
• Reveal biases to the editor
• Disclose ghost-reviewers to the editor
  – E.g., a doctoral student
  – Pre-approval
• Private recommendation to the editor
• Feedback to the author(s)
• Keep commitments
• Treat authors with respect

Reviewer As Ghostwriter

• Art Bedeian
• Notes that reviewers go too far
  – Dictating the question asked, hypotheses, analyses, interpretation
• Review inflation over the years
  – Sometimes feedback is longer than the paper
• Reviewers are subjective
• Poor inter-rater agreement
• Abuse of power?

Reviewer Problems
• Reviewers late
• Reviewers nasty
• Overly picky
• Factually inaccurate
• Overly dogmatic
  – Favorite stats (CFA/SEM)
  – Edit out ideas they disagree with
  – Insist on their own theoretical position
  – Assume there's only one right way
• Not knowledgeable
• Miss the obvious
• Careless

Scientific Progress Through Dispute

• Work is based on prior work
  – Testing theories
  – Integrating findings/theories
• Build a case for an argument or conclusion
• Disseminate
• Colleagues build a case for an alternative
  – Scientific dispute
• Two camps battle, producing progress
  – Dispute motivates work
  – Literature enriched

Crediting Sources
• Must reference anything borrowed
  – Cite findings/ideas
  – Quote direct passages
  – Little quoting done in psychology
• Stealing work
  – Plagiarism: not quoting quotes
  – Borrowing ideas from
    • Papers
    • People
    • Reviewed papers
  – Easy to forget you didn't have the idea

Strategy For Successful Publication
• Choose a topic the field likes
  – Existing hot topic
  – Tomorrow's hot topic (hard to predict)
• Conduct a high-quality study
• Craft a good story
  – Make a strong case for conclusions
  – Theoretical arguments in the introduction
  – Strong data for the test
• Write clearly and concisely
• Pay attention to current practice
• Lead, don't follow

Dealing With Journals

• Be patient and persistent
• Match the paper to the journal
  – Journal interests
  – Quality of paper
• Count on extensive revision
• Learn from rejection
  – Consider feedback
  – Only fix things you agree with
  – Look for trends across reviewers

Fragmented Publication

• Multiple submissions from the same project
• Discouraged in theory
• Required in practice
  – Single purpose
  – Tight focus
• Different purposes
  – Minimize overlap
  – Cross-cite
  – Disclose to the editor

Example: CISMS

• Four major papers
• Unreliability of the Hofstede measure
  – Applied Psychology: An International Review
• Universality of Work LOC and well-being
  – Academy of Management Journal
• Country-level values and well-being
  – Journal of Organizational Behavior
• Work-family pressure and well-being
  – Personnel Psychology

How Much Overlap Is Too Much?

• A: Aquino, K., Grover, S. L., Bradfield, M., & Allen, D. G. (1999). The effects of negative affectivity, hierarchical status, and self-determination on workplace victimization, Academy of Management Journal, 42, 260-272.

• B: Aquino, K. (2000). Structural and individual determinants of workplace victimization: The effects of hierarchical status and conflict management style. Journal of Management, 26, 171-193.

Purpose from Abstract

• A: “Conditions under which employees are likely to become targets of coworkers’ aggressive actions”

• B: “…when employees are more likely to perceive themselves as targets of co-workers’ aggressive actions”

Procedure
• A: “Two surveys were administered to employees of a public utility as part of an organizational assessment. Although the surveys differed in content, both versions contained an identical set of items measuring workplace victimization”
• B: “Two different surveys were administered to employees of a public utility as part of an organizational assessment. Although the surveys differed in content, both versions contained an identical set of items measuring workplace victimization.”

Sample/Measures
• A: n = 371, 76% response, mean age 40.7, tenure 11.5, 65% male, 72% African American
• B: n = 369, 76% response, mean age 40.7, tenure 11.5, 65% male, 72% African American

• A: PANAS, Hierarchical status (Haleblian), self-determination, Victimization (14 items)

• B: Rahim Organizational Conflict Inventory-II, Hierarchical status (Haleblian), Victimization (14 items)

Factor Analysis/Table 1
• A: Exploratory FA of victimization, CFA of 8 items on a holdout sample
• B: Exploratory FA of victimization, CFA of 8 items on a holdout sample

• A: “Factor loadings and lamdas for victimization itemsa” [Note misspelling of lambda]

• B: “Factor loadings and lamdas for victimization items1” [Note misspelling of lambda]

Table 3/Hypothesis Tests

• A: “Results of hierarchical regression analysis”• B: “Results of hierarchical regression analysis”

• A: “Two regression equations were fitted: one predicting direct victimization and the other predicting indirect victimization.”

• B: “Two regression equations were fitted: one predicting direct victimization and the other predicting indirect victimization.”

Limitations
• A: “This study has several limitations that deserve comment. Perhaps the most serious is its cross-sectional research design. The victim precipitation model is based on the assumption that victims either intentionally or unintentionally instigate some negative acts.”
• B: “This study has several limitations that deserve comment. Perhaps the most serious is its cross-sectional research design. The victim precipitation model is based on the assumption that victims either intentionally or unintentionally instigate some negative acts…”

Research Support: Grants

• Funding needed for many studies
  – Expands what can be done
• Some research is very cheap
  – Shoestring because of a lack of funding?
  – Lack of funding because the research is cheap?
• Universities encourage grants
  – Diminishing state support

Grant Pros
• Tangible
  – Covers direct costs of research
    • Equipment/supplies
    • Subject fees/inducements
    • Human resources (research assistants)
  – Rewards to investigator
    • Summer salary
    • Course buyout
    • Conference travel
  – Support for students
• Intangible
  – Forces you to plan the study in detail
  – Prestige
  – Administrative admiration (rewards)

Grant Cons
• Tough to get: competitive
  – Time consuming
• Requires resubmission with a long cycle time
• Administrative burden
• Takes time from teaching/research
• Can redirect research focus
  – Not always a bad thing
• Confusing the path with the goal
  – A grant is not a research contribution

Sources
• Federal (highest status; indirect costs to the university)
• Foundations
• Internal university grants
  – Small
  – Not as competitive
• New faculty
  – New-investigator and small grants
• Doctoral students
  – ERC pilot grants
  – SIOP grants
  – Dissertation grants

Federal Grants Challenging
• High rejection rate
• Takes multiple submissions (R&R)
• Must link to priorities: not everything is fundable

Grant Strategy

• Develop grant-writing skill
• Tie to fundable topics
  – Workplace health and safety
    • Musculoskeletal Disorders (MSD) and …
    • Workplace violence
• Intervention research in demand
• Interdisciplinary work
• Use of consultants
• Pilot studies
• Programmatic and strategic

Week 15: Wrap-Up

Successful Research Career
• Conducting good research
  – Lead, don't follow
• Visibility
  – Good journals
  – Conferences
  – Other outlets
  – Quantity
• First-authored publications
  – More important early in career
• Impact
• Grants

Programmatic

• Program of research
  – More conclusive
  – Multiple tests
  – Boundary conditions
  – More impact through visibility
  – Helps in getting jobs
  – Helps with tenure/promotion
  – Can have more than one focus

Conducting Successful Research

• Develop an interesting question
  – Based on theory
  – Based on literature
  – Based on observation
  – Based on organization need
• Link the question to the literature
  – Theoretical perspective
  – Place in the context of what's been done
  – Multiple types of evidence
  – Consider other disciplines

Conducting Successful Research 2

• Design one or more research strategies
  – Lab vs. field
  – Data collection technique
    • Survey, interview, observation, etc.
  – Design
    • Experimental, quasi-experimental, or observational
    • Cross-sectional or longitudinal
    • Single-source or multisource
  – Instrumentation
    • Existing or ad hoc

Conducting Successful Research 3

• Analysis
• Hierarchy of methods, simple to complex
  – Descriptives
  – Bivariate relationships
  – Tests for controls
  – Complex relationships
    • Multiple regression
    • Factor analysis
    • HLM
    • SEM

Conducting Successful Research 4

• Conclusions
  – What's reasonable based on the data
  – Alternative explanations
  – Speculation
  – Theoretical development
  – Suggestions for the future

KSAOs Needed

• Content knowledge
• Methods expertise
• Writing skill
• Presentation skill
• Creativity
• Thick skin

Pipeline
• Body of work at various stages
  – In press
  – Under review
  – Writing/revising
  – In progress
  – Planning
• Set priorities
  – Don't let revisions sit
  – Get work under review
  – Always work on the next project
• Collaboration to multiply productivity
• Time management

Authorship Order

• First author takes the lead on the paper
  – Most of the writing
  – Most input into the project
• Important early in career to be first
• Balance quantity with order
• Sometimes the most senior person goes last
  – PI on the project
  – Senior member of the lab

Impact

• Effect of work on the field/world
• Citations
  – Sources
    • ISI Thomson
    • Harzing's Publish or Perish
    • Others
  – Self-citation
  – Citation studies
    • Individuals (e.g., Podsakoff et al., Journal of Management, 2008)
    • Programs (e.g., Oliver et al., TIP, 2005)
• Being attacked

Partnering

• State/local government
• Corporations
• Tying research to consulting
• Partnership with practitioners
  – In-kind support

Grants

• Expands what you can do
• Good for career
  – Current employer
  – Potential future employers

Grantsmanship
• Develop grant-writing skill
  – Start as a student
    • Small grant at first
• Proposal somewhat different from an article
  – Background that establishes the need for the study
  – Demonstrates ability to conduct it
    • Expertise of the team
    • Letters of agreement/support
  – High likelihood of success
    • Pilot data important
    • Low risk
• Address funding agency priorities

Final Advice
• Be a leader, not a follower
  – Address a problem that is not being addressed
  – Find creative ways of doing things
• Be evolutionary, not revolutionary
  – Too different is unlikely to be accepted
  – The most creative work often appears in lesser journals
    • Follow-up studies in better journals
• Critical mass
  – Need multiple publications on a topic to be noticed
  – Programmatic
• Build on the past, don't tear it down
  – Positive rather than negative citation

Final Advice cont.

• Be flexible in thinking
  – Don't get prematurely locked into a conclusion, idea, method, or theory
• Use theory inductively
  – A good theory explains findings
• Don't take yourself too seriously
• Have a thick skin
• Enjoy your work
