Evidence Battles in Evaluation: How can we do better?
Mel Mark, Penn State University



TRANSCRIPT

Page 1

Evidence Battles in Evaluation: How can we do better?

Mel Mark
Penn State University

Page 2

DES Conference 2013
Three tracks:

1. Evaluation as a force for change
2. New & old roads in impact evaluation
3. Evaluation as a forward-looking perspective

Page 3

This talk in relation to the three conference tracks:

1. Evaluation as a force for change

2. New & old roads in impact evaluation

3. Evaluation as a forward-looking perspective

Page 4

First, apologies

• Language
• Examples
• “Humor”
• And more

Page 5

Partial and selective history of evidence (RCT) battles

• Earlier, the paradigm wars
• RCTs: quick overview
• Use of RCTs varied by field
• In US, Department of Education
• International development evaluation
• Pushback, debate, association statements
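
To ground the quick overview: in an RCT, units are randomly assigned to the program or to a control condition, and impact is estimated by comparing average outcomes across the two groups. A minimal Python sketch (not from the talk; the data and outcome measure are invented):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented outcomes for 200 participants randomly assigned to the
# program (treatment) and 200 assigned to business-as-usual (control).
treatment = rng.normal(loc=12.0, scale=4.0, size=200)
control = rng.normal(loc=10.0, scale=4.0, size=200)

# Random assignment makes the groups comparable in expectation, so the
# difference in mean outcomes estimates the program's average impact.
impact = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated impact: {impact:.2f} (p = {p_value:.3f})")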

Page 6

The debate

• Generally unproductive, e.g.
  – Talking past each other
  – Critiques, not even-handed

• On the surface, about methods
• Instead, probably about other issues, e.g.
  – Role of impact evaluation
  – Relative merits of RCTs for impact evaluation
  – More on these to come

• Aside: “Strange bedfellows”

Page 7

Instead of method debate, consider ‘deeper’ issues

Page 8

Should an impact evaluation be done?

• For early figures, e.g. Campbell
• Assumes “fork in the road”
• But other purposes of evaluation exist:

Page 9

Page 10

Many evaluation theories, emphasizing different evaluation purposes, e.g.

• Impact evaluations for selection from among options
• Info needs of program managers; program improvement
• Social justice
• Empowerment of individuals
• Creating forum for democratic deliberation
• Development of learning organizations
• Ongoing construction of an initiative
• And on and on

• Aside: Knowledge of associated theories as part of content knowledge of an evaluator

Page 11

Beyond the many evaluation theories, multiple questions for evaluation, e.g.

• Feasibility of implementing a new program type
• Quality of implementation
• Compliance with regulations, e.g. about client eligibility
• Cost
• Client compliance, retention, perceptions
• Ability to scale up

• Question: Is impact the right question, for a given evaluation?

Page 12

RCT advocates vs critics: Each side’s view of the role of impact evaluation

• Guess.

• Aside: Advocacy of RCTs, and ‘gold standard’ language, may be an effort to make impact evaluation more salient among policy makers and evaluation funders?

Page 13

IF impact is the right question, is an RCT useful relative to other methods?

• Needed?
• Practical?
• Ethical?
• Overkill?

• Compared to alternative methods,
• And with what method ancillaries for other questions?

Page 14

Alternative methods

• Long list (including regression-discontinuity, time series, various quasi-experiments, comparative case studies, participant statements, …)

• Circumstances may favor or prohibit alternative methods
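
For contrast with the RCT sketch above, a minimal illustration of one alternative from the list, regression-discontinuity, where assignment follows a cutoff score rather than random assignment; the scores, cutoff, and effect size here are invented:

import numpy as np

rng = np.random.default_rng(0)

# Invented setup: applicants scoring below a cutoff of 50 receive the
# program; the impact is the jump in outcomes at the cutoff.
score = rng.uniform(0.0, 100.0, size=500)
treated = score < 50.0
outcome = 0.1 * score + 3.0 * treated + rng.normal(0.0, 2.0, size=500)

# Fit a line on each side of the cutoff within a bandwidth, then
# compare the two fitted values at the cutoff itself.
bandwidth = 15.0
left = treated & (score >= 50.0 - bandwidth)
right = ~treated & (score < 50.0 + bandwidth)
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)
effect = np.polyval(fit_left, 50.0) - np.polyval(fit_right, 50.0)

print(f"Estimated effect at the cutoff: {effect:.2f}")  # close to 3.0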

Page 15

RCT advocates vs critics: Each side’s view of RCT’s comparative advantage

• Guess

Page 16

Issues of trade-offs: Estimating effects vs generalizing

Page 17

Where to now? 1

• Regarding debates (this and future)
  – Try to find deeper sources of disagreement
    • E.g., role of impact evaluation; whether RCTs are generally preferable for impact evaluation

  – Try to understand others’ assumptions; try not to talk past each other
  – Even-handed assessments of one’s preferred and non-preferred options
  – Less heat, more light

Page 18

Where to now? 2

• Evidence hierarchy, not ideal
• Evidence typology, or contingency tree, an option, but may:
  – Ignore specifics
  – Be cumbersome, or incomplete, or both
  – Stifle innovation
  – Ignore quality of information needed

• May still suggest better vs worse options
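
To make that concern concrete, here is a toy contingency tree for method choice, written as a hypothetical Python function (the rules and names are invented for illustration); even this small sketch hard-codes its answers and cannot see context, ethics, cost, or the quality of information needed:

def suggest_method(impact_question: bool,
                   randomization_feasible: bool,
                   cutoff_assignment: bool) -> str:
    # Toy decision rules; a real choice would weigh many more specifics.
    if not impact_question:
        return "non-impact designs (implementation, cost, compliance, ...)"
    if randomization_feasible:
        return "randomized controlled trial"
    if cutoff_assignment:
        return "regression-discontinuity"
    return "quasi-experiment or comparative case study"

print(suggest_method(True, False, True))  # -> regression-discontinuity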

Page 19

An alternative:

• Informed process for selecting evaluation method (given evaluation question, context, etc.)

• Leads to questions, e.g.,
  – Evaluation policy that describes:
    • The location, organization, independence of the evaluation unit
    • Advisory and/or review processes

• “Frameworks as an aid…”

Page 20

And keep in mind

• The ‘guiding star’ is not method choice per se

• It’s the potential for evaluation to make a difference, to have positive consequences, to contribute to social betterment
  – Think of evaluation as an intervention
  – Consider the equivalent of “program theory”

Page 21

Q&A.

Closing