CRJS 4466 PROGRAM & POLICY EVALUATION
LECTURE #3
• Evaluation projects
• Resume preparation
• Job hunting
• Questions?
• In-class test #1 – next week!
16. Targets:
• be clear as to the appropriate ‘units of analysis’ - beware the ‘ecological fallacy’
• targets are the objects of a program intervention
• targets can be individuals, groups, organizations, political areas, physical units…..almost anything!
• operationalization of the target definition is an important step in the program design
• targets can be direct (e.g. students) or indirect (e.g. ‘broken windows’ theory)
• the problem of specifying the location and boundaries of program targets (e.g. ‘ADD’ children)
• the need for clear operational rules specifying who/what is or is not a target (e.g. ‘sexual offenders’ or ‘contingent worker’) – see the sketch after this list
• target measures must be both inclusive and mutually exclusive
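To make the idea of an explicit operational rule concrete, here is a minimal Python sketch; the names and thresholds are invented (an adult literacy program is assumed purely for illustration):

```python
# Hypothetical sketch of an explicit operational rule for who counts as
# a program target. Names and thresholds are invented assumptions
# (an adult literacy program is assumed purely for illustration).

def is_target(person: dict) -> bool:
    return (
        person["age"] >= 18                  # adults only
        and person["reading_level"] < 6      # below grade-6 reading
        and person["region"] == "North Bay"  # inside the program boundary
    )

clients = [
    {"age": 34, "reading_level": 4, "region": "North Bay"},
    {"age": 16, "reading_level": 3, "region": "North Bay"},
]
print([is_target(c) for c in clients])  # [True, False]
```

An explicit rule like this makes target measures auditable: anyone applying the same rule to the same person reaches the same classification.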
17. Key Concepts
• incidence - the number of new cases identified during a specified period in a specified area (e.g. annual incidence of prostate cancer in Canada)
• prevalence - the number of existing cases in a specified area during a specified time (e.g. the number of illiterate people residing in North Bay during 2000)
• population at risk - the segment of the population that is subject to developing a given condition
- can be defined probabilistically (e.g. health screening programs)
• sensitivity - the likelihood of including the correct targets in the program (‘true positives’)
• specificity - the likelihood of excluding those who do not have the condition (avoiding ‘false positives’)
• need - a population of targets who currently manifest the condition that requires attention
• demand - the population of targets who are able or willing to participate in the program
• rates - the proportion of the population manifesting a condition; note the use of age/sex-specific rates (a worked example follows below)
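A worked example with invented numbers, showing how sensitivity, specificity, prevalence, and incidence would be computed for a hypothetical screening program:

```python
# Illustrative sketch with invented numbers: screening outcomes for a
# hypothetical program in a population of 1,000.

true_positives  = 80    # targets correctly included
false_negatives = 20    # targets the screen missed
true_negatives  = 850   # non-targets correctly excluded
false_positives = 50    # non-targets wrongly included

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity = {sensitivity:.2f}")   # 0.80
print(f"specificity = {specificity:.2f}")   # 0.94

population = 1_000
existing_cases = true_positives + false_negatives    # all current cases
new_cases_this_year = 25                             # assumed figure
print(f"prevalence = {existing_cases / population:.3f}")             # 0.100
print(f"annual incidence = {new_cases_this_year / population:.3f}")  # 0.025
```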
18. Program Logic Models as a diagnostic tool:
• what are the general goals of the program?
• what are the graduated steps (objectives) that must be accomplished to reach the goal – how are these specified?
• what activities are performed as part of the program?
• what is the process through which resources and activities are converted to outcomes?
[Diagram: a hierarchical program logic model – Program Goal at the top, supported by Objectives 1–3; each objective is served by Activities 1–6, which in turn draw on Inputs I1–I12]
• program logic models outline the ideal model of the program’s operation – they are causal models of how certain factors ‘X’ (inputs, activities) are presumed to lead to certain effects ‘Y’ (outputs)
• the program logic modeling exercise can serve to identify blockages and inefficiencies in program functioning (see the sketch below)
• the program logic model is a descriptive, diagnostic tool
• “if you don’t know how the program operates, how can you tell if you are doing things the right way?”
• can form the basis for development of a performance measurement system
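As a rough illustration of the hierarchy in the diagram above, a logic model can be represented as a nested data structure and walked top-down to spot blockages; all names here are placeholders, not from an actual program:

```python
# A minimal sketch of a program logic model as a nested data structure.
# All names are placeholders mirroring the diagram above.

logic_model = {
    "goal": "Program Goal",
    "objectives": [
        {"name": "Objective 1", "activities": [
            {"name": "Activity 1", "inputs": ["I1", "I2"]},
            {"name": "Activity 2", "inputs": ["I3", "I4"]},
        ]},
        {"name": "Objective 2", "activities": [
            {"name": "Activity 3", "inputs": []},  # missing inputs
        ]},
    ],
}

# Walking the model top-down makes gaps visible: an activity with no
# inputs (or an objective with no activities) flags a possible blockage.
for obj in logic_model["objectives"]:
    for act in obj["activities"]:
        status = "ok" if act["inputs"] else "NO INPUTS - possible blockage"
        print(f'{obj["name"]} <- {act["name"]}: {status}')
```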
Key concepts in program logic modeling
• program inputs
• program components (activities)
• implementation objectives
• program outputs
• linking constructs
Tables and figures that follow are from McDavid, J. C., & Hawthorn, L. R. L. (2006). Program Evaluation and Performance Measurement: An Introduction to Practice. Sage Publications.
Table 2-2: A Framework for Modeling Program Logics
Figure 2-1: Income Self-Sufficiency: Logic Model
Figure 2-2: Logic Model for the Alcohol and Drug Services Program
Figure 2-3: Flow Chart for Fire Codes Inspection Program
Program Technologies
• combination of knowledge, technique and experience an organization has available to accomplish objectives and goals
• what are the practices, the ‘best practices’ in use (technologies) that effect desired changes?
• note: in some areas (e.g. engineering) perfect technologies work perfectly every time; but in other areas (e.g. social problems, crime) even perfect technologies will not work every time
Table 2-3: Program Technologies and the Probability that Outcomes Will be Achieved
Table 2-1: Program Logic Model of Laurel House
Table 2-4: Examples of Factors in the Environments of Programs that Can Offer Opportunities and Constraints to the Success of Programs
Logic Model for Nova Scotia COMPASS Program
Research Designs in Evaluation
• use of both quantitative and qualitative research designs
• sometimes, though rarely now, it is possible to use a true experimental design to assess whether a program had a true effect or ‘impact’ in changing behaviour
• more typically, use of quasi-experimental research designs (comparison groups) and correlational (no comparison) designs coupled with qualitative methods
• strongest methodological approach to assessing impact of a program is the use of the randomized experimental model
Experimental group:  R  O  X  O
Control group:       R  O     O

(R = random assignment; O = observation/measurement; X = program intervention)
• note variations – pre/post and post-test only designs, also multifactorial designs (a simulation sketch follows below)
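A minimal simulation of the R O X O / R O O design above; all parameters are invented (the program is assumed to add five points to the outcome), so this is a sketch, not a full statistical analysis:

```python
# A minimal simulation of the R O X O / R O O design above.
# All parameters are invented: the program is assumed to add 5 points.
import random

random.seed(1)
subjects = list(range(200))
random.shuffle(subjects)                        # R: random assignment
treatment = set(subjects[:100])
control = set(subjects[100:])

pre = {s: random.gauss(50, 10) for s in subjects}        # first O: pretest
post = {}
for s in subjects:
    program_effect = 5 if s in treatment else 0          # X: intervention
    post[s] = pre[s] + random.gauss(2, 5) + program_effect  # second O

def mean_gain(group):
    return sum(post[s] - pre[s] for s in group) / len(group)

# Randomization spreads extraneous change (the +2 drift) across both
# groups, so the difference in gains estimates the program effect.
print(f"treatment gain: {mean_gain(treatment):.1f}")
print(f"control gain:   {mean_gain(control):.1f}")
print(f"estimated effect: {mean_gain(treatment) - mean_gain(control):.1f}")
```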
Table 3-1: Two Experimental Designs
Table 3-2: Research Design for the Elmira Nurse Home Visitation Program
Figure 3-5: The Four Kinds of Validity in Research Designs
Figure 3-9: Implementation and Withdrawal of Neighbourhood Watch and Team Policing
• the three criteria of causality – and the experimental method:
1. correlation
2. temporal asymmetry
3. non-spuriousness
• note the difficulty in demonstrating that a program intervention is the “cause” of a specific outcome
- the issue of causation versus correlation
- bias in selection of targets
- “history”
- intervention (Hawthorne) effects
- poor measurement
• Campbell versus Cronbach: perfect versus good enough evaluation assessments – and the issue of the validity of the research design in use
• gross versus net outcomes (a worked example follows below):
Gross outcome = Effects of intervention (net effect) + Effects of other processes (extraneous factors) + Design effects
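A worked example with invented numbers for the identity above:

```python
# Worked example (invented numbers) for the identity above:
# gross outcome = net effect + extraneous effects + design effects,
# so the net effect is recovered by subtraction.

gross_outcome   = 12.0   # observed change in the treated group
other_processes =  4.0   # maturation, history, etc.
design_effects  =  1.5   # e.g. selection bias, testing effects

net_effect = gross_outcome - other_processes - design_effects
print(f"net program effect = {net_effect}")   # 6.5
```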
Establishing validity of a research design:
• statistical conclusion validity
• internal validity of the design
- history
- maturation
- testing
- instrumentation
- statistical regression
- selection
- mortality
- ambiguous temporal sequence
- selection-based interactions
Quasi-experimental research designs:
• Comparison group design
Experimental group:  O  X  O
Comparison group:    O     O

• note: no randomization of subjects takes place – comparison groups are constructed by matching (a matching sketch follows below)
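A rough sketch of how a matched comparison group might be constructed without randomization; the matching variables (age, prior score) and all data are assumptions for illustration:

```python
# Matching sketch (no randomization): each participant is paired with
# the closest non-participant on age and prior score. Variables and
# data are invented for illustration.

participants = [{"age": 25, "prior": 40}, {"age": 40, "prior": 55}]
pool = [{"age": 24, "prior": 42}, {"age": 39, "prior": 50},
        {"age": 60, "prior": 70}]

def distance(a, b):
    # simple absolute-difference distance over the matching variables
    return abs(a["age"] - b["age"]) + abs(a["prior"] - b["prior"])

comparison_group = []
for p in participants:
    match = min(pool, key=lambda c: distance(p, c))
    pool.remove(match)            # match without replacement
    comparison_group.append(match)

print(comparison_group)
# [{'age': 24, 'prior': 42}, {'age': 39, 'prior': 50}]
```

Because assignment is not random, any unmatched differences between groups (selection bias) remain a rival explanation for the outcome.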
Quasi-experimental research designs:
• before–after design: O X O
• single time series design: O O O O O O X O O O O O O (a sketch follows below)
• comparative time series design:
  O O O O O O X O O O O O O
  O O O O O O    O O O O O O
• case study design: X O
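A minimal sketch of a single interrupted time series comparison, with invented monthly counts; a real analysis would also model trend and seasonality before crediting the program with the shift:

```python
# A minimal single interrupted time series sketch (O...O X O...O).
# Monthly counts are invented; history remains a rival explanation
# unless trend and seasonality are also modeled.

pre  = [120, 118, 121, 119, 122, 120]   # six observations before X
post = [112, 110, 109, 111, 108, 110]   # six observations after X

pre_mean = sum(pre) / len(pre)          # 120.0
post_mean = sum(post) / len(post)       # 110.0
print(f"shift at intervention: {post_mean - pre_mean:.1f}")  # -10.0
```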
Construct validity:
• the fit between ‘measurement’ and ‘reality’
• the operationalization process as the key link to construct validity; threats include:
• diffusion of treatments
• compensatory equalization of treatments
• compensatory rivalry
• resentful demoralization
• Hawthorne effect