

Types of Evaluation


Different types of evaluation

• Formative evaluations examine the delivery of the program or technology, the quality of its implementation, and the organizational context, personnel, procedures, inputs, etc.

a. Needs assessment
b. Process evaluation

• Summative evaluations describe what happens after delivery of the program or technology: assessing whether the program can be said to have caused the outcome, determining its overall impact beyond the immediate target outcomes, and estimating the relative costs associated with it.

c. Impact evaluation
d. Cost-benefit analysis


Needs Assessment Should Provide

• Clear sense of the target population
– Students who are not responding to other inputs
– Students who are falling behind

• Clear sense of the need the program will fill
– What are teachers lacking?
– How to deliver? How much? What are potential barriers?

• Clear articulation of program benefits
– Is a wrong being righted? Is a right being expanded?

• Clear sense of alternatives
– Is this the most effective, efficient, cost-effective method of meeting teacher/student needs?

Tools – focus group discussions, structured/unstructured surveys


Process evaluation

• Are the services being delivered?
– Money is being spent
– Textbooks are reaching the classroom and being used

• Can the same service be delivered at lower cost?
– Substituting expensive inputs with cheaper ones

• Are the services reaching the right population?
– Are the books reaching students? Which students?

• Are the clients satisfied with the service?
– Teachers' and students' response to the teaching method

Tools/resources – administrative data, surveys, group discussions
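Where administrative data are available, part of a process evaluation can be a simple tabulation of delivery and usage rates. A minimal sketch in Python; the school names, field names, and figures below are entirely invented for illustration:

```python
# Hypothetical process-evaluation check: what share of schools received
# the textbooks, and at what share are they actually in use?
# All records below are invented.
records = [
    {"school": "A", "books_delivered": True,  "books_in_use": True},
    {"school": "B", "books_delivered": True,  "books_in_use": False},
    {"school": "C", "books_delivered": False, "books_in_use": False},
    {"school": "D", "books_delivered": True,  "books_in_use": True},
]

delivered = sum(r["books_delivered"] for r in records) / len(records)
in_use = sum(r["books_in_use"] for r in records) / len(records)
print(f"Delivered to {delivered:.0%} of schools; in use at {in_use:.0%}")
```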


Impact evaluation

• The program happened; how did it change lives?

• What does the theory of change (ToC) say we might expect to change?
– Intermediate indicators
– Final outcomes

• Primary: did textbooks cause children to learn more?

• Secondary:
– Distributional questions: who learned more?
– If several treatments: what was the best program design?


How does impact differ from process?

• When we answer a process question, we need to describe what happened.
– This can be done by reading documents, interviewing people, consulting administrative records, etc.

• When we answer an impact question, we need to compare what happened to what would have happened without the program.
– There are various ways to get at this, but all of them have in common that they need to re-create what did not happen.

Impact Evaluation Techniques

• Experimental evaluation
– Assignment of treatment is random (see the sketch after this list)

• Quasi-experimental
– There are multiple waves of data or multiple groups available, but the treatment assignment is not random

• Non-experimental
– Only a single snapshot measurement is available
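To make the experimental case concrete, here is a minimal sketch of random assignment followed by a difference-in-means estimate, in Python. The sample size, the score distribution, and the +5-point treatment effect are all invented for illustration:

```python
import random
import statistics

random.seed(42)

# Hypothetical example: estimate the impact of a textbook program on
# test scores under random assignment. All data are simulated.
students = list(range(200))
treated = set(random.sample(students, 100))  # random assignment

# Simulated outcomes with an invented true effect of +5 points.
scores = {
    s: random.gauss(60, 10) + (5 if s in treated else 0)
    for s in students
}

treated_scores = [scores[s] for s in students if s in treated]
control_scores = [scores[s] for s in students if s not in treated]

# With random assignment, the control mean approximates the
# counterfactual, so the difference in means estimates the impact.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated impact: {effect:+.1f} points")
```

Because assignment is random, the control group stands in for the "what did not happen" that the previous slide says must be re-created, which is what licenses the simple difference in means.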


Evaluation and cost-benefit analysis

• Needs assessment gives you the metric for defining the cost/benefit ratio

• Process evaluation gives you the costs of all the inputs

• Impact evaluation gives you the quantified benefits

• Identifying alternatives allows for a comparative cost-benefit analysis
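As a rough sketch of how these pieces combine, the following hypothetical Python calculation divides total program cost by the quantified benefit; every figure and unit below is invented, not drawn from any real program:

```python
# Hypothetical cost-benefit calculation combining the three evaluations.
# All figures below are invented for illustration.
total_cost = 120_000        # inputs costed by the process evaluation ($)
impact_per_student = 0.2    # learning gain per student, in standard
                            # deviations, from the impact evaluation
students_reached = 10_000   # reach, from administrative data

total_benefit = impact_per_student * students_reached  # total SD of learning
cost_effectiveness = total_cost / total_benefit        # $ per SD of learning
print(f"Cost per standard deviation of learning: ${cost_effectiveness:,.0f}")
```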


Example: Comparative cost-benefit

In comparison to other programs, deworming has been found to be the most cost-effective.
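A comparative version of the calculation might look like the sketch below; the program names, costs, and outcome figures are all invented, chosen only so that deworming ranks first, as the slide states:

```python
# Hypothetical cost-effectiveness comparison across programs.
# All figures are invented; deworming ranks cheapest per unit of
# outcome only because the numbers were chosen that way.
programs = {
    "Textbooks":      {"cost": 120_000, "extra_school_years": 200},
    "Deworming":      {"cost": 30_000,  "extra_school_years": 600},
    "Cash transfers": {"cost": 500_000, "extra_school_years": 400},
}

ranked = sorted(programs.items(),
                key=lambda kv: kv[1]["cost"] / kv[1]["extra_school_years"])
for name, p in ranked:
    ratio = p["cost"] / p["extra_school_years"]
    print(f"{name}: ${ratio:,.0f} per additional school year")
```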


Linking back to objectives for evaluation

• Accountability
– Did we do what we said we were going to do? Process evaluation determines whether books were delivered and used.
– Did we have a positive impact on people's lives? Impact evaluation of the link between books and test scores.

• Lesson learning
– Do particular programs work or not? Impact evaluations of similar programs in different situations.
– What is the most effective route to achieve a certain outcome? Cost-benefit analysis comparing several programs.
– Are there similarities in strategies that are successful, for example in changing behavior, even across fields? Linking results back to theory.

• Reduced poverty through more effective programs
– Future decisions based on lessons learned
– Solid, reliable impact evaluations are the building blocks for more general lesson learning

Things to be very clear about

a. Validity – internal and external
b. Study Ethics


a. Internal Validity

• How well the study was run (research design, operational definitions used, how variables were measured, what was/wasn't measured, etc.), and

• In the case of an impact evaluation, how confidently one can conclude that the change in the dependent variable was produced solely by the independent variable and not by extraneous ones


a. External Validity

• The extent to which a study's results (regardless of whether the study is descriptive or experimental) can be generalized/applied to other people or settings reflects its external validity.

• Typically, group research employing randomization or randomized selection will initially possess higher external validity than studies (e.g., case studies and single-subject experimental research) that do not use random selection/assignment.


b. Ethical Principles

• Voluntary participation
• Informed consent
• Risk of harm
• Confidentiality
• Institutional Review Boards – enforce established procedures ensuring that researchers consider all relevant ethical issues in formulating research plans
