M&E Workshop Handout Manual


CARE International DM&E Workshop Series – 1997

VOLUME 1: HANDOUT MANUAL

By Richard Caldwell and Sofia Sprechmann, With additions from Jim Rugh


TABLE OF CONTENTS

Handout 1.1: WORKSHOP INTRODUCTION
Handout 1.2: WORKSHOP OVERVIEW AND SCHEDULE
Handout 2.1: MONITORING
Handout 2.2: EVALUATION
Handout 2.3: INDICATORS 1
Handout 2.4: INDICATORS 2
Handout 2.5: PROJECT HIERARCHY
Handout 2.6: GLOSSARY
Handout 2.7: WHO NEEDS INFORMATION ABOUT PROJECTS?
Handout 2.8: EXERCISE
Handout 3.1: PROBLEM ANALYSIS HIERARCHY
Handout 3.2: CHILD MALNUTRITION PROBLEM ANALYSIS
Handout 3.3: USES FOR PROBLEM ANALYSIS
Handout 3.4: CAUSE AND EFFECT LOGIC IN A TYPICAL PROJECT
Handout 3.5: LOGFRAME OF A TYPICAL CARE PROJECT
Handout 3.6: WHAT ARE INDICATORS AND HOW ARE THEY USED
Handout 3.7: INDICATORS
Handout 3.8: SMALL GROUP EXERCISE
Handout 3.9: THREE CATEGORIES OF DATA
Handout 3.10: HOW TO DETERMINE WHAT DATA TO COLLECT
Handout 3.11: INDICATORS AND DATA WORKSHEET
Handout 3.12: DEVELOPING M&E MATRICES
Handout 4.1: RESEARCH DESIGN FOR EVALUATION
Handout 4.2: EVALUATION RESEARCH DESIGNS
Handout 4.3: EVALUATION RESEARCH DESIGN ISSUES
Handout 4.4: EXERCISE - EVALUATION DESIGN
Handout 5.1: QUANTITATIVE DATA AND METHODS
Handout 5.2: QUALITATIVE AND QUANTITATIVE METHODS
Handout 5.3: CONDUCTING QUANTITATIVE BASELINE AND EVALUATION SURVEYS
Handout 5.4: SURVEY PLANNING & DESIGN
Handout 5.5: SAMPLING
Handout 5.6: TARGET POPULATION
Handout 5.7: SAMPLING FRAME
Handout 5.8: SAMPLING METHODS
Handout 5.9: SAMPLE SIZE
Handout 5.10: EXERCISE - TARGET POPULATION AND SAMPLING FRAME
Handout 5.11: QUESTIONNAIRE DESIGN
Handout 5.12: TYPE OF QUESTIONS
Handout 5.13: QUESTIONNAIRE LAYOUT, LENGTH AND CODING
Handout 5.14: EXAMPLE - QUESTIONNAIRE WITH SEVERAL TYPES OF QUESTIONS
Handout 5.15: TWELVE GUIDELINES FOR DEVELOPING
Handout 5.16: DATA COLLECTION - TRAINING AND FIELDWORK
Handout 5.17: QUANTITATIVE DATA ANALYSIS
Handout 5.18: TOOLS FOR QUANTITATIVE DATA ANALYSIS
Handout 5.19: GRAPHING DATA
Handout 6.1: QUALITATIVE DATA
Handout 6.2: WHEN TO USE QUALITATIVE METHODS
Handout 6.3: THE CASE STUDY
Handout 6.4: QUALITATIVE EVALUATION CHECKLIST
Handout 6.5: NATURE OF QUALITATIVE DATA
Handout 6.6: QUALITATIVE DATA COLLECTION
Handout 6.7: OBSERVATION
Handout 6.8: CHOOSING A SAMPLE
Handout 6.9: QUALITATIVE DATA ANALYSIS AND INTERPRETATION
Handout 6.10: FIELD RESEARCH FOR QUALITATIVE STUDIES
Handout 6.11: CODING AND ANALYSIS
Handout 6.12: QUALITATIVE DATA ANALYSIS EXERCISE


Handout 6.13: HEALTH PROJECT CASE STUDY
Handout 6.14: DATA ANALYSIS
Handout 7.1: PARTICIPATORY MONITORING AND EVALUATION
Handout 7.2: PARTICIPATORY M&E 2
Handout 8.1: DISSEMINATION AND UTILIZATION
Handout 8.2: POSSIBLE DISSEMINATION STRATEGIES
Supplemental Handout 1: PARTICIPATORY METHODS
Supplemental Handout 2: RATE EXERCISE
Supplemental Handout 3: DEVELOPING EFFECTIVE EVALUATION PLANS


Handout 1.1: WORKSHOP INTRODUCTION

INTRODUCTION

Welcome to the CARE Monitoring and Evaluation Workshop. This workshop has been designed to enhance your knowledge and skills related to the monitoring and evaluation of CARE's development assistance projects. Every CARE office is challenged with assessing the extent to which it effectively implements its projects. This includes day-to-day accountability for the goods and services the project proposes to deliver as well as overall accountability for achieving expressed intermediate and final goals. Monitoring and evaluation are the two processes that we use to assess whether or not intended goods and services, as well as project benefits, are being realized.

This workshop is supported by CARE USA and is intended to develop the capacity of Country Office staff in monitoring and evaluation. The challenge, however, is in designing and conducting a workshop that meets both the needs of individuals and the needs of Country Offices. Since not all individuals and Country Offices come to this training with the same knowledge and work experience in monitoring and evaluation, it is expected that there will be differences in what each person receives from this training.

We sincerely hope that this 5-day training and the accompanying handouts increase your understanding of the monitoring and evaluation process, enable you to improve the M&E of CARE projects, and motivate you to want to learn more about monitoring and evaluation.

Richard Caldwell
Sofia Sprechmann


WORKSHOP OBJECTIVES

• To gain a working knowledge of the fundamental concepts, definitions, and steps involved in CARE monitoring and evaluation.
• To acquire new skills in quantitative and qualitative methods related to monitoring and evaluation.
• To develop monitoring and evaluation frameworks based on project design logic.
• To explore specific issues and discuss possible alternatives for a country office strategy for improving monitoring and evaluation.


Handout 1.2: WORKSHOP OVERVIEW AND SCHEDULE

Goal: By the end of the workshop, participants will better understand the CARE monitoring and evaluation systems and have acquired new basic skills for effective monitoring and evaluation.

Day 1 - Sessions: 1. Workshop Objectives and Schedule; 2. M&E Conceptual Framework; 3. From Logframes to M&E Systems
Purpose:
• To introduce participants to the workshop format and schedule and to develop a set of group objectives.
• To discuss the purposes of and CARE's approach to monitoring and evaluation.
• To introduce key concepts and terms used in M&E.
• To explore and establish the linkages between project design and monitoring and evaluation, paying particular attention to the outputs of the design process that are incorporated into the M&E system.
• To develop an M&E framework that can be used as a base for incorporating M&E in project implementation.
Time: 1 hour; 2 hours; 5 hours (1 hour will be on Tuesday morning)

Day 2 - Sessions: 4. Research Design for Evaluation; 5. Quantitative Methods
Purpose:
• To overview evaluation research designs and discuss advantages and disadvantages of using quasi-experimental evaluation designs.
• To explore the characteristics and use of quantitative data.
• To review the steps involved in conducting baseline and evaluation surveys, review sampling design, size and procedures, explore the different types of questions used for structured/semi-structured interviews, understand the issues involved in designing good questionnaires, review the tasks involved in data collection, and understand the most commonly used techniques for quantitative data analysis.
Time: 2 hours; 4 hours

Day 3 - Session: 5. Quantitative Methods (continued)
Time: 7 hours


Day 4 - Sessions: 6. Qualitative Methods; 7. Participatory Techniques in M&E
Purpose:
• To explore the characteristics and use of qualitative data.
• To review the major methods for collecting qualitative data, understand the issues involved in designing good discussion and interview guides, review field tasks, and understand the general approaches to qualitative data analysis and some common features of analytic methods.
• To review participatory techniques used for M&E and discuss the benefits of involving stakeholders in the design of M&E frameworks, data collection and analysis.
Time: 6 hours; 1 hour

Day 5 - Sessions: 8. Information Dissemination and Utilization; 9. Country Office Strategies for Monitoring and Evaluation; 10. Workshop Evaluation and Testing
Purpose:
• To explore ways of presenting information to decision-makers, understand the importance of disseminating and using information, and learn the different components of an action plan.
• An open forum discussing current difficulties in M&E and exploring strategies for improving M&E.
• Participants will be provided an opportunity to express opinions about the workshop and will take a short test to assess knowledge and skills development.
Time: 1 hour; 2 hours; 1 hour


M&E Workshop Handouts
1. Workshop Objectives and Schedule
2. M&E Conceptual Framework
3. From Logframes to M&E Systems
4. Research Design for Evaluation
5. Quantitative Data and Methods
6. Qualitative Data and Methods
7. Participatory Techniques in M&E
8. Developing Data Collection Instruments
9. Field Tasks and Data Collection
10. Information Dissemination and Utilization


Handout 2.1: MONITORING

What?
Monitoring is the process of routinely gathering information on all aspects of the project. Monitoring provides managers with information needed to:
• Analyze the current situation
• Identify problems and find solutions
• Discover trends and patterns
• Keep project activities on schedule
• Measure progress towards intermediate goals and formulate/revise actions necessary to achieve these goals
• Make decisions about human, financial and material resources

Monitoring is continuous. A monitoring system should be in place before project start-up, and monitoring activities should be scheduled on the project's workplan. Monitoring can be carried out through field visits, review of service delivery and commodities records, and management information systems.

Monitoring reports should be timely, simple, concise and useful. They can alert managers using yellow flags (an activity in danger of not being carried out as planned) or red flags (an action that has not been completed as anticipated). They should include results, if achieved, actions necessary to correct any deficiencies, and an activity schedule and reporting timeline for those corrective actions.

Who?
The first level of monitoring is done by project staff. Supervisors are responsible for monitoring the staff and tasks under them, and the project manager is responsible for monitoring all aspects of the project. The second level of monitoring is done by the donor(s). Through field visits and routine reports from the project manager, the donor monitors progress and measures performance.

Why?
Monitoring provides managers with information needed to analyze the current project situation, identify problems and find solutions, discover trends and patterns, keep projects on schedule, and measure progress towards expected outcomes. It allows the project team to formulate or revise future goals, make decisions regarding human, financial and material resources, and minimize needless project costs.

How?
Monitoring takes the form of an MIS, including quarterly and other reports; using monitoring information means reviewing the MIS outputs and the reports. Monitoring is routinely carried out through field visits, review of service delivery and commodities records, input from management information systems, and review of project quarterly reports and other reporting documents.
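The yellow/red flag convention described above can be made concrete with a minimal illustrative sketch, which is not part of the original handout; the idea of comparing progress against time elapsed and the 20-point tolerance are assumptions chosen only to show how such a rule might be encoded.

```python
# Illustrative sketch of a monitoring flag rule, under assumed thresholds.
# "yellow" = activity in danger of not being carried out as planned;
# "red"    = activity not completed as anticipated.

def flag_activity(pct_complete: float, pct_time_elapsed: float) -> str:
    """Return a monitoring flag by comparing progress with the workplan schedule."""
    if pct_time_elapsed >= 100 and pct_complete < 100:
        return "red"      # past its deadline and still not complete
    if pct_complete + 20 < pct_time_elapsed:  # assumed 20-point tolerance
        return "yellow"   # falling well behind the workplan
    return "on track"

# Example: an activity 40% complete with 75% of its scheduled time elapsed.
print(flag_activity(40, 75))    # -> yellow
print(flag_activity(80, 100))   # -> red
```

An actual monitoring system would, of course, base these flags on the project's own workplan and reporting thresholds rather than the illustrative values used here.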


Handout 2.2: EVALUATION

What is evaluation?
Evaluation is the process of gathering and analyzing information to determine: 1. whether the project is generating its planned outcomes; and 2. the extent to which the project is achieving its stated goals through these activities.

How does evaluation differ from monitoring?
• Timing
• Focus
• Level of detail

What is the purpose of evaluation?
• To find out how effective the project is
• To determine whether goals have been achieved
• To learn how well things are being done
• To learn from experience so future activities can be improved ("lessons learned")

When do we evaluate?
• Periodically
• Mid-term
• At the end of the project (final evaluation)
• After the end of a project (post-evaluation)

Who evaluates?
• Internal evaluations can be carried out by the project manager and/or project staff.
• External evaluations are carried out by donors or individual consultants.

What should we evaluate?
• Progress in workplan
• Establishment of systems
• Implementation of planned activities
• Achievement of goals
• Effectiveness of project
• Impact of project
• Efficiency/cost-effectiveness of project


Handout 2.3: INDICATORS 1

What?
Indicators are measures used to ascertain or verify that a planned change has occurred.

DEFINING GOOD INDICATORS
Indicators have to be...
VALID: actually measure what they are intended to measure;
RELIABLE: produce the same results when used more than once to measure precisely the same phenomenon;
RELEVANT: apply to the goal, intermediate goals and outputs;
SENSITIVE: be sensitive to the situation being observed and reflect changes in the phenomenon under study;
SPECIFIC: measure only what they are intended to measure;
OPERATIONAL: be measurable or quantifiable, with developed and tested definitions and reference standards;
COST-EFFECTIVE: results should be worth the time and money it costs to apply/collect them;
TIMELY: it should be possible to collect the data "reasonably quickly".


Handout 2.4: INDICATORS 2

CRITERIA FOR DECIDING WHAT TO MEASURE
• Progress towards the final goal and intermediate goals
• Needed information - useful information
• Data that has the most potential to re-direct activities
• Balance: need to know ~ ability to find out

CLASSIFYING INDICATORS IN TERMS OF IMPORTANCE AND EASE OF DATA COLLECTION

Importance of indicator: HIGH
• Data collection EASY: high priority
• Data collection FEASIBLE: worth collecting if possible
• Data collection DIFFICULT: worth collecting if possible

Importance of indicator: LOW
• Data collection EASY: worth collecting as part of an instrument for an "important" indicator
• Data collection FEASIBLE: worth collecting as part of an instrument for an "important" indicator
• Data collection DIFFICULT: low priority
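To make the classification above concrete, here is a minimal illustrative sketch, not part of the original handout; the priority labels and importance/feasibility categories are taken from the matrix above, while the function name and data structure are hypothetical.

```python
# Illustrative sketch: encode the indicator-classification matrix above as a lookup.
# Categories and priority labels come from the handout; the function itself is hypothetical.

PRIORITY_MATRIX = {
    ("HIGH", "EASY"): "High priority",
    ("HIGH", "FEASIBLE"): "Worth collecting if possible",
    ("HIGH", "DIFFICULT"): "Worth collecting if possible",
    ("LOW", "EASY"): 'Worth collecting as part of an instrument for an "important" indicator',
    ("LOW", "FEASIBLE"): 'Worth collecting as part of an instrument for an "important" indicator',
    ("LOW", "DIFFICULT"): "Low priority",
}

def classify_indicator(importance: str, feasibility: str) -> str:
    """Return the collection priority for an indicator given its importance
    (HIGH/LOW) and data-collection feasibility (EASY/FEASIBLE/DIFFICULT)."""
    return PRIORITY_MATRIX[(importance.upper(), feasibility.upper())]

# Example: an important indicator whose data are difficult to collect.
print(classify_indicator("high", "difficult"))  # -> Worth collecting if possible
```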


Handout 2.5: PROJECT HIERARCHY

CARE's TERMINOLOGY FOR PROJECT HIERARCHY

Inputs: Resources needed by the project (i.e., funds, staff, commodities, in-kind)
Process: Interventions or activities done by the project utilising the inputs
Output: The direct result of process; products of project activities
Effect: Improvements in access to or quality of resources, and change in practices
Impact: Sustainable improvements in human conditions or well-being


TABLE I. HIERARCHY OF OBJECTIVES

Input (previous CARE LogFrame term: Input)
• What: resources needed by the project
• Caused by whom: project staff use them (and are accountable)
• Claimed by whom: 100% attributable to the project
• Time-frame: within the life of the project (continuously)

Process (previous CARE LogFrame term: Activity)
• What: interventions or activities done by the project
• Caused by whom: project staff do it (and are accountable)
• Claimed by whom: 100% attributable to the project
• Time-frame: within the life of the project (continuously)

Output (previous CARE LogFrame term: Output)
• What: products directly produced by the project
• Caused by whom: project staff produce it (and are accountable)
• Claimed by whom: 100% attributable to the project
• Time-frame: within the life of the project (when process bears fruit)

Effect (previous CARE LogFrame term: Intermediate Goal)
• What: reactions and actions of target populations as a consequence of exposure to project interventions
• Caused by whom: beneficiaries do it, systems reflect it
• Claimed by whom: should be largely attributable to the project, with other influences relatively minor
• Time-frame: within the life of the project (may require a special study to measure)

Impact (previous CARE LogFrame term: Final Goal)
• What: sustainable changes in human conditions or well-being of the target population at household level
• Caused by whom: beneficiaries experience it
• Claimed by whom: attribution is difficult, with other influences substantial and inevitable
• Time-frame: sometimes measurable within the life of the project, but more likely requires post-project evaluation


TABLE II. SECTORIAL EXAMPLES OF INDICATORS AT DIFFERENT LEVELS

INPUT (resources used) - all sectors: grants, contracts, donations, commodities, supplies, other in-kind, staff hired and trained, vehicles purchased, etc.

PROCESS (activities done utilizing the input)
• Reproductive Health: # talks given; # home visits; # counselling sessions; # IEC materials distributed
• ANR: # staff visits to farming communities; # training sessions organized
• SEAD: # staff visits to organize communities; # village bank training sessions held
• Water: # communities needing water identified; # communities organized to undertake water system installation

OUTPUT (the direct result of the process)
• Reproductive Health: # of methods distributed; # of people trained during sessions and visits
• ANR: # groups of farmers formed and trained
• SEAD: # clients receiving credit; # clients participating in savings program
• Water: # of new or renovated water systems installed and functioning

EFFECT (change in practices)
• Reproductive Health: # new FP acceptors; # Couple Years of Protection
• ANR: # families adopting bio-intensive crop technology; area covered with bio-intensive technology
• SEAD: # household IGAs with increased working capital / improved production technologies
• Water: # of target population using sufficient and safe water supply; increase in per capita consumption of water

IMPACT (conditional changes at household level)
• Reproductive Health: increased Contraceptive Prevalence Rate
• ANR: % of families who produce enough food to cover lean periods; decreased number of children who are malnourished
• SEAD: increase in net household income; positive changes in HH consumption patterns
• Water: reduced morbidity and mortality from water- and excreta-related diseases


RELATIONSHIP BETWEEN CARE's PROGRAMS AND HLS FRAMEWORK

[Diagram: CARE's goal of Household Livelihood Security linked, on the function side, to program goals and, on the structure side, to program clusters and sub-sectors.]

Program goals (function): health security (reduced respiratory problems, diarrhea and measles; increased birth spacing and CYP); educational security (increased literacy rates by gender); economic security and food security (adequate access to food; positive income flow).

Clusters (structure): Health & Population; Girls' & Women's Education; Income Cluster (SEAD & ANR); Food Assisted Projects; Water, Sanitation & Environmental Health.

Sub-sectors: children's health; reproductive health; formal & informal education; food for work, cash for work & safety nets; financial & non-financial services; cropping & livestock systems, natural resources & biodiversity.

Cross-cutting themes: partnership; community participation; sustainability; relief, rehabilitation, development.


TABLE III. COMPARISONS OF DIFFERENT CARE REPORTING FORMATS

PIR (Project Implementation Report)
• Type of indicator (framework): inputs, process, outputs
• Main purpose: day-to-day management at the country level
• Main users: PM, CO
• Other users: PAD Sector, Marketing (ER), donors
• Periodicity: quarterly
• Format: narrative; tables of achievements vs. targets
• Data source: field staff, monitoring data
• Method of collection: continuous monitoring
• Person responsible for implementing: Project Manager / project MIS staff
• Person responsible for supervising: Country Director / ACD

API (Annual Project Information)
• Type of indicator (framework): outputs, effects
• Main purpose: 1) quantified picture of "what CARE is doing"; 2) report of key indicators of projects' and COs' achievements; 3) provide aggregated data on the sectoral portfolio
• Main users: RMU, PAD Sector, Marketing (ER)
• Other users: Program SVP, donors, CO, PM
• Periodicity: annual
• Format: questionnaire, quantitative data
• Data source: PIRs, project documents
• Method of collection: summary of quarterly PIRs
• Person responsible for implementing: Project Manager / project MIS staff
• Person responsible for supervising: Country Director / ACD

Project evaluation
• Type of indicator (framework): effects, potential impact
• Main purpose: mid-term - assess changes needed in implementation; final - assess systems and behavioral changes accomplished
• Main users: PM, CO, donors
• Other users: RMU, PAD Sector
• Periodicity: baseline, mid-term, final; optional post-project (3-5 years after the project ends)
• Format: narrative report
• Data source: external assessment; may include a survey
• Method of collection: research / survey / interviews
• Person responsible for implementing: Project Manager / CO Sector Coordinator / CO M&E Coordinator
• Person responsible for supervising: Country Director / ACD

HLS Program Evaluation
• Type of indicator (framework): impact
• Main purpose: assess improvement in a) household livelihood security; b) institutional capacity
• Main users: CD, PAD, RM
• Other users: HQ SMT, President, Board
• Periodicity: every 3-5 years
• Format: research report
• Data source: HLS assessment, post-project evaluations
• Method of collection: research / survey
• Person responsible for implementing: ACD / CO M&E Coordinator + PHLS
• Person responsible for supervising: Country Director; globally: PHLS, RMU + PAD


DATA FLOW

[Diagram: activities aiming for each purpose (Purpose #1: activities W, X, Y, Z; Purpose #2: activities A, B, C, D; Purpose #3: activities M, N, O, P) feed into Outputs #1-#3, which combine into Effects #1 and #2, which in turn contribute to impact at the household level. An effect can be assessed by the analysis of one or several outputs.]

Who reviews data at each level:
• Processes: Project Manager
• Outputs: Project Manager, Country Director, TAG, RM
• Effect: Project Manager, CD, TAG, RM
• Impact: CD, RM, TAG, SMT, SVPs, President, Board


Handout 2.6: GLOSSARY

Proposed Standard CARE Definitions of Key Terms

The following are proposed definitions of some of the more common terms used with regard to LogFrames and the API.

Impact: sustainable, significant improvements in human conditions or well-being, reflecting the satisfaction of basic needs. This is the final goal level for CARE projects. Needs are basic if they must be satisfied in order to secure the physical development of the individual according to their genetic potential. Basic needs include food, health services, favorable environmental conditions (potable water, shelter, sanitation), primary education, and community participation. To obtain the essential resources necessary to meet basic needs, households must have adequate access to finances, skills, time and social positions. These conditional or well-being changes represent program-level impacts. (It is recognized that changes at the impact level are often influenced by other factors as well as those directly addressed by a project.)

Effect: changes in behaviors and practices that result from the use of training and services provided by a project, or improvements in the access to or quality of resources. This is the intermediate goal level for CARE projects. Effects can be seen in changed behavior (i.e., when individuals put into practice what they have learned through a project's outputs). There can also be effects seen through systemic changes (i.e., of institutions). Note: outcomes (or results) often refer to all that happens as a consequence of a project's interventions; these are more specifically divided into effects and impacts.

Outputs: the direct results of project activities. Project outputs may refer to: 1) the results of training, such as the number of women trained in improved nutrition, farmers in improved agricultural techniques, etc. (note that this should include measures of changes in knowledge and attitude; in other words, it is not sufficient to count how many people attended a course, but some measure of how many actually learned what was being taught); 2) capacity building, such as the number of extension staff trained, water systems built, committees established, etc.; 3) service outputs, such as an increase in the number of program locations; 4) service utilization, such as the number of people fed or number of patients treated. Outputs are the products the project produces. Indicators of outputs are typically derived from the routine monitoring of project-based data. When compared to the resources used as inputs, a measure of outputs can be used to make at least a rough assessment of the efficiency of project performance.

Processes: the intervention or set of activities through which project inputs are used to obtain the expected results. This is the activity level for CARE. These activities include management and supervision, counterpart training, service delivery, technical assistance, and information and evaluation systems.

Inputs: the set of resources that are needed by a project. These include the human and financial resources, physical facilities, equipment, materials, logistics, in-kind contributions and operational policies that enable program services to be delivered. The monitoring of inputs is a typical function undertaken by most monitoring (or tracking) systems concerned with basic management and accountability.


Indicators: primary or summary measures used to demonstrate change as a result of a project intervention. Indicators may express quantitative elements (i.e. be written as numbers -- which is useful for reports such as the API) or qualitative aspects (i.e. words -- which may better describe the condition and quality involved). An indicator is a "marker" which, when used over time, shows what progress has been made. It is a variable which can be measured at various times, such as at baseline, during project implementation, and at final evaluation. Since this is such a key term, an extended description is warranted.

Note that there can be indicators for measuring progress at every level: input, process, output, effect and impact. Note also that an indicator refers to what is being measured. When a target for achievement by a certain time is attached to an indicator it becomes, by definition, a goal or an objective. The term "indicator" has been used by some to specifically refer to a sub-Intermediate Goal; however, this alternative definition can cause confusion. Note that in the CARE Program Measurement Framework and API descriptions there are frequent references to indicators for all levels from input to impact. At any level an indicator can be turned into a goal or objective by specifying a target and a date for achievement. The preferred term for a sub-Intermediate Goal would be a "supporting objective."

At different levels indicators can be composites of lower-level indicators (or indexes or variables). General indicators usually require more specific indicators or variables to clearly define how they are to be measured. (State "as measured by ..." in the Means of Verification [MOV].) Characteristics of good indicators include the following:

• relevant (to the respective level of a logframe hierarchy)
• measurable (assessable) (for the API indicators: need to be quantifiable, but there can be qualitative indicators)
• realistic (based on data which can be obtained or measured with reasonable means)
• objectively verifiable (someone else could agree with the same findings)
• reliable (they consistently measure what they're supposed to)
• valid (they measure what they purport to measure)
• meaningful (significant to what the project seeks to accomplish)
• useable (by the managers and other people involved in a project)
• comprehensible (simple to understand)
• sensitive (capable of demonstrating changes in the situation being observed)
• timely (it should be possible to collect and analyze the data reasonably quickly)
• cost effective (the value of having the data is worth the time and effort required to collect it)

Index: the product from a composite of variables or indicators that, through their combination, or calculated by an industry-accepted formula, produce data that permit comparisons between projects or programs.

Variable: a quantity of measurement which in any given situation may change by increase or decrease. A variable can be thought of as a primary indicator: the thing one measures directly. It is usually obtained by direct observation, or through replies to a specific question by persons being interviewed. Every variable represents a separate data point.
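Since an index combines several variables into one comparable number, a small illustrative sketch may help; it is not from the handout, and the variables, weights and formula are hypothetical examples of a simple composite, not a CARE-endorsed index.

```python
# Illustrative sketch of an index as a weighted composite of variables.
# The variable names, weights and normalization are hypothetical examples only.

def composite_index(variables: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized variables (each scaled 0-1) into a single 0-100 index
    using a weighted average, so projects or programs can be compared."""
    total_weight = sum(weights.values())
    score = sum(variables[name] * weight for name, weight in weights.items())
    return 100 * score / total_weight

# Example: three hypothetical household food-security variables, already scaled 0-1.
variables = {"access_to_food": 0.70, "income_flow": 0.55, "dietary_diversity": 0.60}
weights = {"access_to_food": 0.5, "income_flow": 0.3, "dietary_diversity": 0.2}
print(round(composite_index(variables, weights), 1))  # -> 63.5
```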


API: Annual Project Information (referring to the primary source of the data); also Annual Portfolio Inventory, referring to its use in aggregate form. A CARE reporting instrument designed to measure and communicate project outputs/results. (It replaces the Basic Data Questionnaire of previous years.) The API was initiated in response to needs at HQ, though it is intended to be useful to others as well, including COs and PMs.

PIR: Project Implementation Report. A reporting instrument that updates progress in relation to the project's annual implementation plan. It includes both a descriptive section and a matrix in which the project's performance by indicators and intermediate goals is quantified. Main users are those who need to do more detailed project analysis, which may include Project Managers, CO Sectoral Coordinators, Country Offices, and (in some cases) PAD Sectoral Coordinators. These reports are often useful for partners and donors. Ideally, the data needed for the API are included within the PIR.

Direct Beneficiaries: individuals who receive services or resources directly from CARE or through a joint implementation partner. This category would typically include what projects often refer to as participants, recipients or clients. (Note: it needs to be made clear which beneficiaries are reached directly by CARE and which are reached directly by a partner institution. This information will be recorded in the partnership matrix on the API form.)

Net total Direct Beneficiaries: "net" beneficiaries refers to those people who were reached by one or more of the project's interventions. If, for example, in a multi-sectoral project a woman gets prenatal care, a water system is installed near her home, she learns how to grow vegetables, she takes literacy classes and she participates in a credit program, she is only counted as one direct beneficiary, even though she benefited in more than one way. In other words, the "net direct beneficiary" figure is not a total of the beneficiaries counted for each indicator. Each of those indicators has its purpose, but to add them all up would be "double counting."

This will need to be the PM’s best estimate based on the records kept by the project. If the project’s information system keeps track of beneficiaries by name, it should be possible to count the net total of persons reached, even though some of them were reached in more than one way. Otherwise the project staff will need to make an educated guess as to how many people benefited, discounting “double counting” which would occur if simply adding up totals counted by indicators of different interventions.
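As a concrete illustration of avoiding such double counting, here is a minimal sketch, not part of the original handout, which assumes the project's information system tracks beneficiaries by a unique name or ID per intervention; the intervention names and IDs are hypothetical.

```python
# Illustrative sketch: counting net direct beneficiaries across interventions.
# Assumes beneficiaries are tracked by unique name/ID; intervention names are hypothetical.

beneficiaries_by_intervention = {
    "prenatal_care":    {"HH-001", "HH-014", "HH-027"},
    "water_system":     {"HH-001", "HH-002", "HH-014"},
    "literacy_classes": {"HH-001", "HH-033"},
}

# Adding up the per-indicator totals double counts people reached more than once.
gross_total = sum(len(people) for people in beneficiaries_by_intervention.values())

# The net total counts each person once, no matter how many interventions reached them.
net_total = len(set().union(*beneficiaries_by_intervention.values()))

print(gross_total)  # -> 8 (sum of per-indicator counts, with double counting)
print(net_total)    # -> 5 (net total direct beneficiaries)
```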

Note: the following will NOT be included on the API, due to the inconsistency with which these numbers are generated.

Indirect Beneficiaries: individuals who benefit indirectly from the project. This is commonly determined by simply multiplying the number of direct beneficiaries by an assumed average number of persons per family, or by other factors. The ratios used by individual projects have varied greatly. Though there may be legitimate and valid reasons for these, where estimates of indirect beneficiaries are needed at the aggregate level (e.g. by ER), past ratios of direct to indirect beneficiaries could be used by HQ.

Secondary Beneficiaries: individuals who adopt an innovation which they learned about from a direct beneficiary (second hand), such as neighboring farmers who learned about a new technique from farmers who were trained by CARE. This is also known as the "diffusion effect" or the "multiplier effect," and is thus a desirable indicator of a project's wider replication, effectiveness and sustainability. It is important for CARE projects to account for the spread of a project's effect through secondary beneficiaries, but it is not always feasible to obtain this information through routine monitoring.


This is why this data is not called for on the API. However, such measures of the "diffusion" or "multiplier" effect should be included in periodic surveys such as those conducted as part of project evaluations. (Another definition of secondary beneficiaries is those persons reached directly by partner institutions, even if not directly by CARE. As explained in the section on partnerships in the API document, secondary beneficiaries defined in this way are to be included in the API data. The fact that they were not reached directly by CARE will be noted on the Partnership Matrix.)

Some additional definitions:

Disaster Response: the sum of decisions and actions taken during and after a disaster, including immediate relief, rehabilitation and reconstruction. (Definition used by UNHCR.)

Relief: as defined by UNHCR, relief is assistance and/or intervention during or after a disaster to meet life-preservation and basic subsistence needs. It can be of emergency or protracted duration. Relief has two aspects: 1) Emergency response: quick-fix activities such as way stations, latrine trenches, high-protein biscuits, tents and other quick-fix camp shelter, emergency health care, bladders for water supply, and relief items (e.g. blankets, clothing, soap). 2) Sustained relief: longer-term relief efforts, e.g. camps and camp activities for IDPs or refugees. These can include health activities with displaced populations and free food distributions (e.g. in Haiti or Ethiopia) where the population cannot sustain itself. There might be other safety-net activities, such as water or sanitation (e.g. providing water in camps), as well as some relief items, e.g. blankets, clothing, soap, and CIKs like soccer balls for camp populations. Much of the above is a "we do for," but some is also done using development approaches to working with people.

Rehabilitation: the operations and decisions taken after a disaster with a view to restoring a stricken community to its former living conditions, while encouraging and facilitating the necessary adjustments to the changes caused by the disaster (UNHCR). Rehabilitation means moving along the continuum from relief towards development. This includes getting people home and responding to their critical needs. It may involve help with more permanent shelters/schools, rehabilitating or rebuilding community water systems, credit for building new homes, provision of seeds and tools, and de-mining. All of this should be done using development methodologies, or at least keeping in mind the potential for longer-term sustainability.

Quality: provision of superior services to all project beneficiaries/clients. Quality can be ascertained by the range of choices that clients have, the completeness of the information given to clients, the technical competence of the provider, the quality of interpersonal relations, and the appropriateness of services provided.

Quantitative data deals with numbers. It is easier to aggregate, analyze statistically and display in tables than qualitative data, but can be subject to misinterpretation (may "miss the point"). Quantitative methods commonly involve surveys and questionnaires which ask for information that can be collected in numerical form.


Qualitative data is descriptive, complementing quantitative data with contextual information to give a more holistic picture of a situation. Qualitative techniques commonly include in-depth, open-ended interviews, direct observation, and written documents. (Even though qualitative data is not as easy to analyze statistically, there are software programs which enable rigorous analysis.)

(A combination of qualitative and quantitative information is needed for a fair evaluation. One should aim for increased quality in the gathering and analysis of both quantitative and qualitative data. Following the characteristics of good indicators listed above, and using appropriate monitoring and evaluation methodologies, helps assure such quality.)

Sustainability: 1) potential for a project's impact to continue after CARE's intervention terminates; 2) capacity of beneficiaries to continue to practice an innovation or technique without continued project intervention; 3) capacity of local institutions to continue project activities after the project ends. This may include self-financing the activities via contributions of users of the goods and services provided, complementary funding from local funding sources, and decreasing dependency on continued funding from external sources. 4) Sustainability also has an environmental protection aspect. In this case, sustainability refers to the maintenance or enhancement of resource productivity on a long-term basis. Sustainability, therefore, implies a respect and care for all forms of life, an improvement of the quality of human life, a conservation of life-support systems and biodiversity, minimizing the depletion of non-renewable natural resources, and enabling communities to care for their own environment.

Scale: the extent to which the project serves large numbers of beneficiaries, or indirectly has a positive effect on large numbers of people. Scale is essential to achieving both sustainability of service delivery and significant impact among the target populations.

Equity: the extent to which the resources and opportunities generated by the project are equally distributed within and among households. It pertains to the allocation of resources according to gender, ethnic affiliation, social status and class.

Efficiency: the extent to which a project uses resources appropriately and completes activities in a timely fashion.

Effectiveness: the extent to which a project makes desired changes or meets its objectives through the delivery of services.

Cost-effectiveness: a measure of how much it costs to achieve results (usually in terms of project outputs). Though in simple terms a ratio of benefits divided by costs, a benefit:cost ratio is quite difficult to measure without making many assumptions, including attaching a monetary value to benefits. Nevertheless, it is important to develop systems (including cost-accounting and monitoring of benefits) and analytical frameworks which enable us to determine the relative cost-effectiveness of various interventions, even if doing so to any degree of precision may be challenging.
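To illustrate the simpler "cost per unit of output" reading of cost-effectiveness described above, here is a small sketch that is not from the handout; the intervention names, costs and output counts are hypothetical figures used only to show the arithmetic.

```python
# Illustrative sketch: comparing relative cost-effectiveness as cost per unit of output.
# The intervention names, costs and output counts below are hypothetical examples.

interventions = [
    # (name, total cost in USD, units of output delivered)
    ("water_systems", 120_000, 40),    # e.g., systems installed and functioning
    ("farmer_training", 45_000, 900),  # e.g., farmers trained
]

for name, cost, outputs in interventions:
    cost_per_output = cost / outputs
    print(f"{name}: {cost_per_output:.2f} USD per unit of output")

# -> water_systems: 3000.00 USD per unit of output
# -> farmer_training: 50.00 USD per unit of output
```

A fuller benefit:cost comparison would, as the definition above notes, also require valuing the benefits themselves rather than only counting output units.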


Handout 2.7: WHO NEEDS INFORMATION ABOUT PROJECTS?

Consider the range of participants and stakeholders who need information about CARE projects, and who are relied upon to provide and pass along portions of the data set.

Let's begin with the primary focus: the individual household. Assuming that in that household's community there is a Community-Based Organization (CBO) involved, that CBO asks the household to share certain information about what is happening in the lives of its members -- basically indicators pertaining in some way to household livelihood security (HLS). The CBO will need to have the kind of relationship with the household which engenders a willingness to share this information. That includes giving the household a sense of ownership in what the CBO is doing, and providing it with appropriate feedback on why the information being collected is needed and how it is being used. The CBO thus collects information about all of the households in its community.

Let us further assume that there is a local Non-Governmental Organization (NGO) in the area which relates to a number of CBOs. The NGO will ask the member CBOs to pass on information about what's going on in the lives of their households. This, again, requires a relationship in which the CBOs share a feeling of partnership with the NGO and understand the mutual need for this information.

Further developing our hypothetical model, a CARE project in that part of the country has a partnership relationship with one or more such NGOs. It will need to develop a relationship with them which includes an appreciation for the need to gather and pass on certain kinds of information, and which shares in the analysis and use of the data generated. Appropriate forms of feedback from the CARE project to the NGOs can engender this relationship.

The Sector Coordinator in the CARE Country Office asks each of the projects in that sector to pass on information about what is being accomplished by the project, which includes data that originated at the household level. The Sector Coordinator performs a fairly detailed level of analysis and provides feedback to the Project Manager. Similarly, the Country Director (CD) and Assistant Country Director (ACD) for Program (and the CO Senior Management Team) need country-wide information to keep in touch with how projects and sectors are performing. This information is needed for making decisions at their level, including decisions about possible changes needed in the support of present projects and plans for the design of future projects.

A number of stakeholders in the CARE Headquarters in Atlanta need key information about projects. These include the Regional Management Units (RMUs), the Program Analysis and Development (PAD) Sector Coordinators, and those in the External Relations (ER) Division who need to prepare fundraising proposals and reports to donors. The Senior Vice President (SVP) for Program needs information in a form useful for reporting to the President and Board.

But the need for information on CARE projects does not stop there. There are additional audiences who are interested in various forms of reports. These include other CARE International (CI) headquarters, other international NGOs, and others who study and want to learn lessons from CARE's relief, rehabilitation and development assistance process. Of course, the donors who support one or more projects are keenly interested in information about the achievement of results.


A depiction of these various levels of stakeholders and a simplified information flow is shown in the diagram below. It needs to be remembered that considerations similar to those described for the quality of relationships at the household-CBO-NGO-project levels pertain to the other levels as well. That is, each of us needs to have an understanding of the need for data, how they are to be analyzed and used, and how that information benefits our work. This includes appropriate forms of feedback to "lower" levels to let them know how the information was used, which could include how they compare with others in the "bigger picture."

The purpose of going through this listing of the various sources and users of CARE project data is to remind us of who else is involved. At each level there tends to be an assumption that data need to be collected and aggregated for our own use. We therefore need to recognize that the pyramid of information needs within CARE addressed by the API is only a piece of a larger picture.


[Diagram: simplified information flow among stakeholders. The main flow of information runs from households (and other households) to the CBO, from CBOs (and other CBOs) to the NGO, from NGOs (and other NGOs) to the CARE project, from projects (and other projects) to the CO (Sector Coordinator, ACD, CD), from COs (and other COs) to Atlanta (RMU / PAD / SVP / ER / Board), and on to CARE International and other lead countries, bilateral and individual donors, other PVOs and students of development. Feedback flows in the reverse direction at each level.]

From: Program Measurement Framework, CARE USA, January 1997


Handout 2.8: EXERCISE

Match the following terms with their definitions.

Terms:
__ Impact
__ Monitoring
__ Indicators
__ Problem analysis
__ Evaluation
__ Output
__ Effect indicators
__ Process
__ Inputs
__ Effect

Definitions:
a. The process of gathering and analyzing information to determine whether the project is generating its planned activities and the extent to which the project is achieving its stated goals through these activities.
b. Direct result of process; products of project activities.
c. Sustainable improvements in human conditions or well-being.
d. Describes a set of cause and effect relationships among system variables.
e. Resources needed by project.
f. Improvements in access to or quality of resources, and changes in practices.
g. The process of routinely gathering information on all aspects of the project.
h. Measures used to ascertain or verify that a planned change has occurred.
i. Measurements which describe the change in condition or behavior as a result of achieving an intermediate goal.
j. Activities carried out by a project which convert inputs into planned outputs.


Handout 3.1: PROBLEM ANALYSIS HIERARCHY

Problem analysis describes a set of complex relationships among system variables.

[Diagram: a central PROBLEM with multiple contributing conditions branching beneath it.]

SPECIFIC CONDITIONAL CAUSES CONTRIBUTING TO THE PROBLEM
• Multiple levels: primary (direct, immediate), secondary (indirect), tertiary, etc.
• Usually identified
• CARE projects are usually not focused at this level


[Diagram: the PROBLEM, its contributing conditions, and the behaviors underlying those conditions.]

SPECIFIC BEHAVIORAL CAUSES CONTRIBUTING TO CONDITIONS
• Can also be multiple levels
• Often requires a needs assessment or other information to identify/verify
• A common level for CARE projects to target


[Diagram: the PROBLEM, conditions, behaviors, and the attitudes, knowledge and beliefs underlying the behaviors.]

SPECIFIC ATTITUDES/KNOWLEDGE/BELIEFS CONTRIBUTING TO BEHAVIOR
• Usually only one level identified
• Often requires a needs assessment to identify
• CARE projects rarely target this level, but interventions can be designed around attitudes, knowledge and beliefs


[Diagram: the identified PROBLEM, conditions, behaviors, and beliefs/knowledge/attitudes, all set within general social, cultural and political factors.]

GENERAL FACTORS WHICH SHAPE THE CONTEXT
• Usually beyond the scope of CARE projects to address
• Project design, however, needs to take these factors into account


Handout 3.2: CHILD MALNUTRITION PROBLEM ANALYSIS

[Problem-tree diagram. Central problem: HIGH MALNUTRITION RATES AMONG CHILDREN UNDER 5 IN MAWAWASI PROVINCE. Direct causes: inadequate quantity of food provided to children; poor nutritional quality of food prepared for and consumed by children; predominance of infectious diseases. Underlying conditions: low agricultural production, low income, improper weaning, lack of nutrition knowledge, inadequate health care, and unsanitary household conditions, each with further behavioral and contextual causes (e.g., farmers unwilling to change practices, no access to credit/capital, mothers not perceiving malnutrition as a problem, foods classified on the basis of folklore, high illiteracy, few government health care programs, clinics far from villages, houses open to flies and rodents). Effects of the problem: abnormally low physical and mental child growth and development, high mortality in children under 5, higher household health care costs, high rates of malnutrition-induced diseases, and malnourished children not becoming productive members of the household.]


Handout 3.3: USES FOR PROBLEM ANALYSIS

USES FOR PROBLEM ANALYSIS IN THE DESIGN OF CARE PROJECTS

• A hierarchical analysis of cause and effect

• Development of guidelines for a needs assessment

• Process for utilizing information obtained from needs assessment

• Selection of causal level to address

• Selection of appropriate output, effect and impact indicators

• Exploration of sectoral interactions

• Buy-in to project design for staff, partners, beneficiaries, donors, etc.

• Assessment of relative contribution of various cause-effect streams to the problem

• Assessment of impact potential


Handout 3.4: CAUSE AND EFFECT LOGIC IN A TYPICAL PROJECT

The levels of cause-and-effect logic map the outputs of the problem analysis and needs assessment, via strategy selection, onto the hierarchy of final and intermediate goals:

• Broad condition (poverty) - problem identified (low income); strategy selection: final goal (increase income); hierarchy level: impact (change in the condition).
• Specific condition (low production) - direct (immediate) causes (high soil erosion / low fertility).
• Indirect (secondary) causes (farmers plant steep slopes) - causes selected to address (farmers plant steep slopes); strategy selection: intermediate goals (farmers will adopt new cropping practices); hierarchy level: effects (change in systems or behavior).
• Behaviors (poor planting practices) - behavioral causes (farmers plant in rows; do not practice contour planting).
• Attitudes and beliefs (tradition must be followed).
• Social, political, economic, cultural and environmental factors (no access to improved technologies).
• Interventions (extension education) - strategy selection: establish an extension system; hierarchy level: outputs (goods and services produced by the project).
• Activities (hire and recruit extension workers; organize farmer groups; conduct demonstration tasks) - hierarchy level: processes (activities to turn inputs into outputs).
• Resources (labor, cash, M&E necessary to establish extension education) - hierarchy level: inputs (resources necessary to achieve outputs).


Handout 3.5: LOGFRAME OF A TYPICAL CARE PROJECT NARRATIVE SUMMARY O.V. INDICATORS MEANS OF

VERIFICATION ASSUMPTIONS

Final Goal: Improved reproductive health of women and men in Kabale, Kisoro and Rukungiri Districts by 2001

* Statistically significant decrease in maternal mortality and total fertility rates

* DHS 1995, 1999 * Surveillance reports * National surveys

* Govt continues its commitment to a national Reproductive Health Program

* Country continues to enjoy political and civil stability

* Decentralization is working effectively * Improving quality and access will increase consumer use of Reproductive Health Services

Behavioral Intermediate Goals:

1. Increased use of modern family planning methods in the project area

Increase at End of project in: * New acceptors from 20,000 to 70,000 * Continuing users from 21,000 to 80,000 * CYP from 19,000 to 60,000 * CPR (modern methods) from 10.5% to 15%

* Service delivery statistics from HMIS, SDPs and CRHWs reports

* survey * Project mid-term survey 1999

* Communities will support family planning

* Difference between client visits and new acceptors on HMIS of the MOH form equals continuing users

* Clients go for services

2. Increased number of women seeking maternal health services at a health facility

Increase by End of Project in: * Women delivering at service delivery points from 24% to 45% of all pregnant women in the project area * Women attending antenatal services at least 2 times per pregnancy to be 46,000 * Women attending postnatal (PNC) services to be 50% of antenatal cases (ANC) * Obstetric emergencies presenting at a qualified obstetric emergency unit to be 40% of identified cases

* HMIS records * CRHW records * CREHP clinic and CRHW data on antenatal, postnatal, and EOC services

* Final evaluation survey

* The new HMIS is operational in the 3 project districts

* Obstetric services are functioning and acceptable and affordable

* ANC accessible, affordable and efficient

* PNC incorporated into SDP services * Improved quality of services and knowledge will increase use

3. Increased demand for prevention, diagnosis and treatment services for sexually transmitted infections (excluding HIV/AIDS)

50% increase by the End of Project in: * Annual number of men, women and adolescents presenting at SDPs for STI diagnosis and treatment due to increased awareness of the problem and improved services * Annual # of condoms distributed or sold to end users * 30% reduction in women, men and adolescents reporting risk behavior

* HMIS records * Clinic treatment records * CREHP referral forms * CRHW records

* World Bank STI program operates effectively in all three districts

* Syndromic rather than laboratory diagnosis is acceptable for clients

* Adequate STI drugs are available at parish-level SDPs

* Men will use condoms to prevent infection


4. Increased knowledge in reproductive health among women, men and adolescents

* % of women and men who have knowledge of at least 2 benefits of family planning increased from 23% to 50%

* % of women and men who can mention 2 FP methods increased from 35% to 60%

* % of women and men who know a source for contraceptives increased from 50% to 80%

* 50% increase over baseline of women and men who know the location of the closest EOC

* 50% increase over baseline of women and men who can mention at least 2 benefits of ante natal care

* 50% increase over baseline of women and men who can name 1 benefit of post natal services

* 50% increase over baseline of women and men and adolescents who can name a source of STI treatment

* 1995 CREHP final survey

* Operations research on IEC

* Baseline/final survey

* There will be community interest and participation

Systemic Intermediate Goal:

Improved quality services for family planning, maternal health and sexually transmitted infections at service delivery points and community levels

By the end of the project: * 50% of trained clinic health workers provide quality selected Reproductive Health Services * 60% of CRHWs are active (as defined by the project) * 70% of selected service delivery points equipped with reproductive health kits by the project have functioning equipment in place * SDPs and CRHWs have appropriate method mix * 50% of clients express satisfaction with services offered by service delivery points and CRHWs

* Training reports * Supervision reports * QOC scores * CRHW records * Contraceptive logistics records * Client exit interviews * Operations research * SDP surveys

* DHT and communities support all CREHP RH services offered

* DMOs will assume responsibility for equipment maintenance

* GOU (MOH) will clear and exempt VAT on imported CARE equipment

* MOH/DMO logistics system is in place * District administration will provide funds to transport STI drugs and contraceptives

* GOU and district administration will provide DMO health workers with a living wage

* Better educated consumers will press for improved quality services

Outputs:

375 clinic-based health workers trained to provide high quality, integrated services

* 25 DMO trainers trained in RH * 375 health workers trained to provide integrated RH care annually * 34 supervisors trained to provide RH quality care supervision * 60% of trained health workers attain satisfactory QOC scores at 6-month intervals post training

* Training reports * CREHP and DMO planning reports * Supervision reports * QOC forms

* DMOs will release trainers for training workshops

* Health workers are motivated and enabled to improve skills and quality of care


420 CRHWs trained, equipped and encouraged to provide quality basic RH services

* 420 CRHW Trainers trained in basic RH, IEC, and sales techniques

* 420 trained CRHWs regularly supplied with condoms and OCs and equipped with standard project CRHW kits

* 10 CRHW associations formed * 245 community leaders trained in elementary RH care and communications techniques * 60% of CRHWs attain satisfactory QOC scores at 6-month intervals post training

* Training reports * Field Officers' reports * Supervisors' reports * CREHP logistics records * CRHW records * CBO surveys

* Communities will identify appropriate CRHWs

* CRHW associations will actively support members in their community RH work

* Appropriate CBOs will be interested in working with the project

Selected SDPs supplied with RH kits

* 80 dispensaries, posts and private clinics equipped with basic FP kits

* 25 Health Centers and maternity units equipped with FP and basic obstetric kits

* 6 hospitals equipped with FP and obstetric emergency kits

* 4 hospitals equipped with surgical contraception kits

* CARE records

* Funding approved * VAT exemption by GOU on equipment and supplies

50% of SDPs regularly offer integrated basic RH services at EOP

* 50% SDPs with CREHP trained staff and equipment offer the three basic RH interventions

* Supervision reports * SDP client records * Client exit interviews * Clinic surveys

* DMOs and in-charges are committed to integrated services


How do monitoring and evaluation relate to the project logframe and the cause-effect hierarchy? The structure of an M&E system is characterised by the same hierarchy as the logframe. The following table shows how each level of the cause-effect hierarchy links with specific monitoring and evaluation assessments.

RELATIONSHIP OF M&E TO PROJECT CAUSE-EFFECT LOGIC

Hierarchy of Cause-Effect Logic | Types of Information | Monitoring Activities | Evaluation Activities
Final Goal | Impacts (fundamental changes in the lives of the target population) | Few, if any | Baseline, then summative or ex-post evaluation
Intermediate Goals | Effects (behavioral and systemic changes) | May use periodic assessment to measure change, but more evaluative | Formative and summative evaluation, annual reviews
Outputs | Outputs (goods and services produced by the project) | Regular measurement and reporting | Formative and summative evaluation, annual reviews
Processes (Activities) | Activity targets | Regular measurement and reporting | Usually assessed through key questions; analysis of monitoring data
Inputs | Planned inputs | Financial accounting | Analysis of financial and other monitoring data during evaluation


Handout 3.6: WHAT ARE INDICATORS AND HOW ARE THEY USED

Definitions of Indicators

To better understand what indicators are, it is worth reviewing some of the more common definitions used for indicators. According to USAID:

An indicator is a unit of measurement which facilitates concise, comprehensive, and balanced judgments about a situation. It is subject to the interpretation that if its level changes in the "right" direction, things have gotten better and if the level changes in the "wrong" direction, things are getting worse, or people are "worse off."

Indicators may be identical to the specific objective (direct), substitute for the objective (indirect or proxy), or supplement the objective by describing certain qualities.

A single indicator cannot give a comprehensive picture of change, so multiple indicators are often needed.

Practical Concepts Incorporated (PCI), in discussing "The Logical Framework", has this to say about indicators:

Indicators are defined as those conditions that are so strictly associated with certain other conditions (of the situation in question) that presence of or variation in the former indicates the presence of or variation of the latter. Therefore, indicators are plausible.

Indicators demonstrate results and are not conditions necessary to achieve those results. They are independent. Indicators are not used to demonstrate achievement by measuring the means to achieve the result.

In CARE, we will use the following definition:

A variable, measure or criterion used to assist in verifying whether a proposed event or change has occurred.

In CARE projects, we develop indicators for two reasons:
1. To measure attainment of inputs, processes, outputs, effects and impacts related to our project design hierarchy.
2. To evaluate key questions in the evaluation of projects and programs.

Procedure for Determining Indicators

A convenient procedure for developing indicators involves the following steps:
1. Define the situation to be measured.
2. Determine the classification of analysis to be applied to investigate the situation. This is done by converting the interrogative into the classification of analysis. Below are some common interrogatives and their classifications.

Interrogative | Classification of Analysis
how much | amount, quantity, number (frequency)
how many | amount, quantity, number (frequency)


how long | period of time
why | reasons
where | location
when | point or period in time
who | person, people
how | ways, methods, techniques
which | options
able to (can, could) | ability, potential, possibility
will | predictions
what | specifies; can mean which or why
would, if | possibilities

3. Develop the conditions that will be used to better understand the situation. Conditions reflect the state of being of a particular being or thing (situation). They are the core of an indicator and are used to give a concise, comprehensive, and balanced understanding of the situation.

4. Determine the numerical form to be used to measure the conditions. Options for numerical forms include frequency counts of the number of times events take place and ratios, usually percentages and proportions.

5. Write the indicator:

INDICATOR = NUMERICAL FORM + CONDITION

In practice, the procedure might work the following way:

Question: How do farmers currently apply liquid pesticides?

1. Situation: After examining the question carefully, the situation in question could be stated as application of pesticides.

2. Classification of Analysis: How implies ways, methods, or techniques.

3. Conditions: To get a concise, comprehensive, and balanced understanding of the methods farmers use to apply pesticide, the following conditions could be used:
a) use of hand pump sprayers
b) use of the coke bottle method
c) use of other methods (specify methods)

4. Numerical Form: Percentage of farmers can be used to give a fairly clear picture of how many.

5. Timeframe (optional): The question asks how farmers are currently applying pesticides, which implies at the present time. "Current" needs to be stated as a specific timeframe, which will dictate when data must be collected. We might say during the last 3 months, or May, June, and July.

6. Indicators: So the indicators would be written in the following way:

a) % of farmers who properly used hand pump sprayers to apply pesticides.

b) % of farmers who properly used the coke bottle method to apply pesticides two growing seasons after training.

c) % of farmers who used x,y,z (other methods) to apply pesticides at the end of the project.


Questions, Situations, and Indicators

Below are some examples of questions, situations, and indicators. Examine the indicators for each question and practice identifying the classification of analysis, conditions, and timeframe (these steps have been omitted).

1. Question: How many women are currently participating in the small economic activities program?
Situation: Women's participation in the small economic activities program.
Indicators: # of women who report they knew about the program between June and July, 1998; # of active members in the program between June and July, 1998; # of women who report during the survey (August) that their husbands prefer they not participate.

2. Question: How do farmers feel about using fertilizer?
Situation: Farmers' attitudes about the use of fertilizer.
Indicators: % of farmers who believe during the survey (August) that fertilizer increases production; % of farmers who felt during the survey (August) that fertilizer is too expensive; % of farmers who report they used fertilizer between June and July, 1998.

3. Question: How effective were the health promoters in communities last quarter?
Situation: Effectiveness of health promoters in communities.
Indicators: % of mothers who can demonstrate how to mix ORT home solution during the survey (April); average # of days per month health promoters spend in communities between January and March, 1997.

4. Question: Why didn't mothers bring their children to be vaccinated during the last campaign?
Situation: Mothers' non-participation in the last vaccination campaign.
Indicators: % of families who knew about the vaccination program between June and July, 1998; % of mothers who had to travel more than 2 kilometers to the vaccination post between June and July, 1998; % of mothers giving reasons x, y, z (other reasons).


Handout 3.7: INDICATORS

A variable, measure or criterion used to assist in verifying: whether a proposed change has occurred, and thus whether a final or intermediate goal has been achieved.

Indicators are quantitative or qualitative criteria for success that enable one to measure or assess the achievement of project goals. There are four general types of indicators:
• input indicators - describe what goes into the project, such as the number of hours of training, the amount of money spent, the number of contraceptives distributed;
• output indicators - describe project activities such as the number of community workers trained, the number of family planning acceptors, the number of women enrolled in mothers' clubs;
• effect indicators - describe the change in condition or behavior as a result of achieving an intermediate goal;
• impact indicators - measure actual change in the conditions of the basic problem identified, including changes in livelihood status, health, wealth, etc.

Input and output indicators are easier to measure than effect and impact indicators, but they provide only an indirect measure of the success of the project. They assume that the achievement of certain activities will result in change, but they do not demonstrate it. They also provide a standard against which to measure, assess, or show the progress of an activity against stated targets or benchmarks. Common indicators: input, output, process, effect and impact, leading, trailing, etc. They can be direct or indirect (proxy). Indirect indicators are often impact indicators and are used when a direct measure is not feasible or cost effective.

Criteria Checklist for Good Indicators: Ideally, indicators should be:
Valid - they should actually measure what they are supposed to measure;
Reliable (i.e., verifiable or objective) - conclusions based on them should be the same if measured by different people at different times and under different circumstances;
Relevant - they should apply to final and intermediate goals;
Sensitive - they should be sensitive to changes in the situation being observed;
Specific - they should be based on available data;
Cost Effective - the results should be worth the time and money it costs to apply them; and
Timely - it should be possible to collect the data reasonably quickly.


CHECKLIST FOR INDICATORS

• Is a percentage or an absolute number more appropriate?
Use a percentage when you can accurately quantify the target group or your sample size is adequate for inferring to the general population. Use absolute numbers if your target group changes in size over the life of the project and when it is feasible to sample.

• Does the indicator reflect what is written in the goal statement? Does it really measure what it is supposed to measure (i.e., is it valid)?
For example, suppose your goal is to enable community health workers to properly administer oral rehydration salts to children to combat dehydration. If your indicator was the # of children/month who suffered bouts of dehydration, would this be a valid indicator? No; although you are interested in seeing dehydration, you would not be directly measuring the behavioral change. You would need another direct indicator to measure the proper administration of salts.

• Be sure the indicator is not in fact an activity or output.

One common error is to use the number of people trained as an indicator for behavioral change. This, of course, is an output and not a measure of change.

• Is it realistic?
For example, suppose the goal is to increase agricultural production 20% in the project area and the indicator is going to be simply changes in agricultural production. Is this a realistic indicator, given the large variation in yearly production (especially in rainfed areas) and the difficulties in getting a large enough sample size?

• Does the indicator contain terms that are unclear, or that can be interpreted in different ways?
A CARE SEAD project focused on improving production activities for women by providing them with skills training and credit. One indicator was % of women who are able to improve the quality of their products. You can see that the term "quality" could be interpreted in many different ways and would vary from product to product.

• Can you realistically obtain reliable data regarding the indicator?
This is a very common problem with indicators; examples include measuring income, crop production averages, and child malnutrition rates. Each of these creates problems in getting reliable and cost-effective data.


COMMON TYPES OF INDICATORS (modified from Feuerstein, 1986)

Type of Indicator | What it Shows | Example
Availability | Whether something exists and is available | Whether there is one trained local worker for every ten households
Relevance | How relevant or appropriate something is | Whether new stoves burn less fuel than old stoves
Accessibility | Whether what exists is actually within reach of those who need it | A health post in one village may be out of reach of other villages due to mountains, seasonal flooding, inadequate transport, high costs, etc.
Utilization | To what extent something that has been made available is being used for its intended purpose | How many non-literate villagers attend literacy classes regularly; # of households using newly constructed latrines
Coverage | The proportion of those in need who are actually receiving services | Of the number of people estimated to have tuberculosis in a given area, what percentage are actually receiving regular treatment
Quality | The quality or standard of something | Whether water is free from harmful, disease-inducing substances; whether water quality meets national standards
Effort | How much and what is being invested to achieve an output or effect | The number of trained health workers it takes to effectively vaccinate children in a given geographical area in a given time
Efficiency | Whether resources and activities are being put to the best possible use to achieve desired results | The number, frequency and quality of supervisory visits after introducing bicycles to replace heavy vehicles
Effect | Whether behavioral or systemic changes are taking place | The number of farmers adopting a particular cropping practice
Impact | Whether there is fundamental change | After a campaign against malaria, does the incidence of malaria decline


Handout 3.8: SMALL GROUP EXERCISE - DEVELOPING INDICATORS

This exercise is meant to provide you with practice in developing indicators. Key questions typical of those used for evaluation are listed below; the task is to develop 2 or 3 indicators for each question. Also listed below are several intermediate goals from actual CARE projects; for each intermediate goal, develop at least two indicators which could be used to measure effect changes. To do so, follow the procedure below for developing indicators.
1. Examine the question carefully.
2. On a different piece of paper, state the situation in question and determine the classification of analysis.
3. For each situation, ask what conditions would give a concise, comprehensive, and balanced judgment about the situation to be able to answer the question.
4. List all possible conditions beside each situation.
5. Select the final combination of conditions which will give the most concise, comprehensive, and balanced understanding of the situation.
6. Write the indicators next to the corresponding questions. Check the indicators to make sure they satisfy the characteristics of indicators.

Key questions
1. Why don't men participate in community health education activities?
2. Did the income of the micro enterprise operators increase last quarter?
3. What was the agricultural production of farmers this quarter?
4. Has the nutrition status of children in the school nutrition program improved?

Intermediate Goal #1: 80 farmers participating in the project organize and efficiently manage a communal seed fund by the end of the first year of the project.
Intermediate Goal #2: By September 1998, 700 women borrowers in enterprise management will be effectively managing their small enterprises.
Intermediate Goal #3: By the end of the project, 40% or more of village households will properly use improved personal hygiene practices.


SMALL GROUP EXERCISE: M&E PLAN - STEP 1
DRY ZONE AGRICULTURAL DEVELOPMENT PROJECT

This exercise is meant to provide you with practice in developing the first part of an M&E Plan based on the CARE/Sri Lanka Dry Zone Agricultural Development Project (DZADP) proposal. You are required to read the information provided on the DZADP - background, logical framework, and the monitoring and evaluation section. We will be developing an M&E plan for this project following the M&E Matrix shown in Handout 3.12. We will base our work on the existing Logical Framework and make modifications to this framework as the week progresses. For this exercise, please complete the following.

Step 1. Your group will be assigned to work on the Wider Objective and one Immediate Objective. Remember! In CARE the Wider Objective is also called the Final Goal and Immediate Objectives are also called Intermediate Goals. They are the same thing. Carefully review the objectives your group has been assigned along with their indicators of achievement (column 2) and how they will be measured (column 3).

Step 2. Examine each objective and determine if it is properly written according to CARE standards. Is it clear in its meaning? Is it SMART? Your group should discuss how it wants to write each objective statement and, once decided, print each on a flipchart.

Step 3. For each objective carefully examine the indicators. Determine first if these indicators are sufficient for measuring the objectives. Follow the guidelines on developing good indicators. HINT: It is often useful to define an overall indicator for measuring a goal, and next decide what criteria (operational definitions) are included in the overall indicator. Based on the criteria, then develop sub-indicators that will actually be measured.

Step 4. Decide which indicators are needed for each objective and list these on flipchart paper in a column next to the objective. Be sure that each indicator is clearly written. Pretend that you are going on vacation and before you leave you are giving this M&E plan to your staff so that they can conduct a baseline survey. EACH INDICATOR MUST BE WRITTEN SO THAT IT IS OBVIOUS WHAT IS BEING MEASURED.

Step 5. Be prepared to discuss and defend your work with the large group.


Handout 3.9: THREE CATEGORIES OF DATA

Data can be organized into three general categories: responsive, descriptive, and documentary. The particular category determines the data source, the collection process, and the data's content and form. Below are brief descriptions of each category.

1. Responsive
Data which are characterized by responses from members of the target population. The source for responsive data is people associated with the project, referred to as respondents. The process for collecting these data is interrogative, or questioning. Data gathering tools most commonly used to collect responsive data are survey questionnaires, group meetings or discussions, individual structured interviews, focus groups, conversational interviews, ranking, rating, self-reporting instruments, and tests. The form of the data is open, closed, or modified-closed. Open form refers to data which are in the respondents' own words (quotes) and are almost always qualitative. Closed form data are limited to a set number of prearranged responses such as multiple choice questions, rating, or ranking. Similar to closed form data are modified-closed form data; the major difference is that the latter give respondents the option to add to the pre-selected responses.

2. Descriptive (Observation)
Data which are characterized by observations. The source for descriptive data is people, animals, or phenomena. Phenomena are occurrences or facts which can be detected or observed. The primary process for collecting these data is observation. However, when a physical phenomenon is the data source, such as water or soil, a physical sample must first be collected, then observed. Data gathering methods frequently used to collect descriptive data are diaries, observation and participant observation using schedules and guides, narrative reports, and physical samples analyzed using microscopes or various physical/chemical property tests. Descriptive data forms are the same as those of responsive data: open, closed, and modified-closed. Open form data are descriptions by the observer without any sort of pre-classification system; examples are feelings, interpretations, hunches, and ideas reported in diaries or personal journals. By contrast, closed form data refer to descriptions made with a pre-classification system such as observation schedules. Modified-closed data adhere to a pre-classification system but with built-in flexibility for the observer to make additions.

3. Documentary
Data which can be found in written form. The source is documents such as project monitoring reports, evaluations, proposals, administrative records, and reference books. The process for collecting documentary data is examination of the documents. Methods of gathering these data include enumeration tallies and direct or narrative recording. Forms of these data are the same as those for the responsive and descriptive categories. Open form data are usually narrative recordings without predetermined information needs, while closed form data are gathered with predetermined information needs such as enumeration tallies or direct recording. Modified-closed data are those with information needs pre-established but which allow the examiner to make additions.


Handout 3.10: HOW TO DETERMINE WHAT DATA TO COLLECT

1. Examine the indicator and identify its numerical form, condition, and timeframe.
The numerical form is a criterion that determines the quantity, capacity or dimension of the condition. The condition is a particular state of being of a person or thing which is so closely associated with the situation in question that its presence or any variation implies the presence of or variation in the situation. The timeframe refers to the period of time for which the condition pertains. For the indicator "percentage of households that stored drinking water in a protected container between June 1 and July 30," the numerical form is percentage (of households), the condition is storage of drinking water in protected containers, and the timeframe is between June 1 and July 30.

2. Determine the data needed to construct the indicator.
Most often these will be data necessary to calculate the value of the numerical form. For example, the indicator "percentage of households that stored drinking water in protected containers between June 1 and July 30" would require data to construct the percentage: the number of households that stored water in protected containers and the total number of households in the priority population.

3. For the data needed, determine the desired category and form.
The category can be one or a combination of the following: responses, descriptions, and documentation. Form can be open, closed, or modified-closed (see Handout 3.9 for a more detailed discussion of data categories and form). For the two pieces of data from the water storage indicator, "number of households that stored water..." and "number of households participating in the project," the category for the first could be responses and, for the second, documentation. Responses could come from a questionnaire survey, while documentation could be obtained from published census reports.

4. Specify the condition for the data.
Conditions for data will either be identical or closely related to those of the indicator. Following the water storage example, the data have 2 different but related conditions: "households that store drinking water in protected containers" and "households participating in the project."

5. Express the data in written form:

CATEGORY (FORM) + CONDITION

Data needed for the water storage indicator would be written like this: Responses (closed form) of the number of households storing drinking water in protected containers AND documentation (closed form) of the number of households participating in the project.
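To make the arithmetic behind such an indicator concrete, here is a minimal Python sketch; the records, field names and values are hypothetical and simply stand in for whatever responses and documentation a project actually collects.

    # Minimal sketch (hypothetical data): combining the two pieces of data
    # behind "% of households that stored drinking water in a protected
    # container between June 1 and July 30".

    # Responses (closed form): one record per surveyed household.
    survey_responses = [
        {"household_id": 1, "stored_water_in_protected_container": True},
        {"household_id": 2, "stored_water_in_protected_container": False},
        {"household_id": 3, "stored_water_in_protected_container": True},
        {"household_id": 4, "stored_water_in_protected_container": True},
    ]

    # Documentation (closed form): total number of participating households,
    # taken here from project records (the value is illustrative only).
    total_households_documented = 4

    numerator = sum(1 for r in survey_responses
                    if r["stored_water_in_protected_container"])
    indicator = 100.0 * numerator / total_households_documented

    print(f"% of households storing drinking water in protected containers: {indicator:.1f}%")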


Handout 3.11: INDICATORS AND DATA WORKSHEET

Listed in random order below are a question, its indicators, and its data. The task consists of two parts. First, identify the question and write it in the question column; identify the indicators and write them beside the question in the indicator column; then identify the data for each indicator and write them next to the indicator in the data column. You will notice the data are incomplete, so for the second part of the task, determine and write the missing data. Use Handout 3.10 as a guide.

• Description (closed form) of the number of health workers who take a complete history of diarrhea.
• Percentage of health workers who took a complete history of diarrhea between April and June, 1999.
• Responses (closed form) of the number of health workers who counsel mothers.
• How effective was the ORT health worker last quarter?
• Proportion of health workers who managed diarrhea cases between April and June, 1999.
• Description (closed form) of the number of health workers who counsel mothers.
• Documentation (closed form) of the number of health workers in the ORT program last quarter.
• Documentation (closed form) of the number of health workers who counseled mothers last quarter.

QUESTION | INDICATORS | DATA NEEDED


Handout 3.12: DEVELOPING M&E MATRICES

Effective and efficient monitoring and evaluation require careful planning. The optimal time for developing an M&E plan is during the latter stages of project design. This plan can then be easily modified at key points in implementation, such as after a baseline survey or during preparation of specific evaluation plans. All too often, however, projects are implemented without M&E plans. Usually when it is too late, project managers suddenly realize that information is not being collected and used, or that the information being collected is not relevant to decision-making. Logframes are useful tools to use as a starting point for developing M&E frameworks or matrices. Logframes in and of themselves, however, do not contain all the information required for a good M&E plan. Additional information needed includes:

• How will the information needed be gathered? Who will collect it? When will it be collected?
• How will the gathered information be analyzed? Who will analyze it? When will the analysis be completed?
• Who will receive the results? In what format will they be distributed? What decisions can be made?

Recognizing that the logframe does not contain all the useful information it is helpful (in fact, essential) to develop a framework which includes all the information needed for planning M&E. A useful way to do this is to develop a Monitoring and Evaluation Planning Matrix that expands on the logframe and specifically addresses M&E needs.


BASIC FRAMEWORK FOR A MONITORING AND EVALUATION SYSTEM PLAN

Columns: Hierarchy | Indicators | Data Needed | Data Source and Method | Frequency of Collection | Person(s) Responsible | Data Analysis | Dissemination & Utilization

Rows (Hierarchy levels): Final Goal; Intermediate Goals; Outputs; Activities; Inputs
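As an illustration only, one row of such a planning matrix could be captured in a simple data structure; the keys below mirror the framework columns, while the cell contents are hypothetical.

    # Minimal sketch: one M&E planning matrix row as a Python dictionary.
    # Keys follow the framework columns above; values are hypothetical.
    matrix_row = {
        "hierarchy": "Intermediate Goal",
        "indicator": "% of households storing drinking water in protected containers",
        "data_needed": "Households storing water in protected containers; total households",
        "data_source_and_method": "Household survey (closed-form questionnaire)",
        "frequency_of_collection": "Baseline, mid-term, final",
        "persons_responsible": "Field officers; M&E coordinator",
        "data_analysis": "Percentages by village, compared against baseline",
        "dissemination_and_utilization": "Annual review meeting with partners and communities",
    }

    for column, entry in matrix_row.items():
        print(f"{column}: {entry}")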


Handout 4.1: RESEARCH DESIGN FOR EVALUATION

Choices of groups to be measured:
• Experimental group only
• Experimental group and a true (randomly assigned) control group
• Experimental group and a non-equivalent (not randomly assigned) control group

Timing of measurements:
• Pretest and posttest (baseline and final evaluation)
• Posttest only (final evaluation only)
• Time series (several measurements)

Vocabulary and symbols for research design

Observation = O
Intervention or program being tested = X
Alternative program (or no intervention) = C
Experimental (project) group = E>
Control (comparison) group = C>
Randomization = R


Handout 4.2: EVALUATION RESEARCH DESIGNS

1. No control group
No baseline:                      X   O
Baseline and final evaluation:    O   X   O

2. With control group
No baseline:
E>        X   O
C>            O
Baseline and final evaluation:
E>   O    X   O
C>   O        O

3. Time series
Single group:
O O O   X   O O O
Control group (in addition to the project group above):
O O O       O O O


Handout 4.3: EVALUATION RESEARCH DESIGN ISSUES

PROBLEMS IN EVALUATION RESEARCH DESIGN WITH CONTROL GROUPS
• External factors which change one group (E> or C>) but not the other, and which could influence the outcomes.
• The control group uses the methods, materials, or programs being tested on the E-group.
• People drop out of either the E-group or the C-group for any reason.
• Differences between the E-group and the C-group in time spent on the program.

CRITERIA FOR SELECTING AN APPROPRIATE DESIGN FOR PROJECT EVALUATION
• Costs
• Skills of staff to do surveys and analyze results
• Availability of staff
• Project participants' time and willingness to cooperate
• Non-participants' (control group's) time and willingness to cooperate
• Ethics of using control groups
• Accessibility to do a survey (security and other constraints)
• Length of project
• Need to prove attribution


Handout 4.4: EXERCISE - EVALUATION DESIGN

What evaluation designs were used to evaluate the English programs in these schools? For each scenario below, name the evaluation design.

1. In School A teachers used visual aids for English courses; in School B they did not. The average score in the final test in School A was 79 and in School B it was 64.

2. In School A the average score in the pre-test was 63 and in the final English test it was 79.

3. In School A the average score in the final English test was 79.

4. In School A teachers used visual aids for English courses; in School B they did not. The average score in the pre-test was 63 in School A and 58 in School B. The average final test scores were 79 in School A and 64 in School B.

5. In School A tests were administered on a regular basis (every week) in order to track the effect of the introduction of a range of teaching aids on students' performance.


Handout 5.1: QUANTITATIVE DATA AND METHODS

What is quantitative data?
Quantitative data is data that can be analyzed using measures and techniques that summarize and describe information as usable numbers (percentages, ratios, rates, means, averages, ranges). These summary and descriptive measures are also called statistics. The aim of statistics is INSIGHT, not numbers. Statistics can condense the attitudes, knowledge and behavior of people into summary numbers that can be easily understood, remembered and used as a basis for making decisions, setting baselines and evaluating projects. While the aim of statistics is to help make numbers more manageable, poor data or data of low quality cannot be saved using statistics.

"He uses statistics like a drunken man uses a lamp post, more for support than for illumination."

Andrew Lang
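As a minimal illustration of the summary measures mentioned above (mean, range, percentage), the following Python sketch condenses a small set of hypothetical survey values; real analyses would of course work from full datasets.

    # Minimal sketch: turning raw survey values into summary statistics.
    # The data below are hypothetical.
    household_sizes = [4, 6, 3, 5, 7, 4, 5]
    uses_latrine = [True, False, True, True, False, True, True]

    mean_size = sum(household_sizes) / len(household_sizes)
    size_range = (min(household_sizes), max(household_sizes))
    pct_latrine = 100.0 * sum(uses_latrine) / len(uses_latrine)

    print(f"Mean household size: {mean_size:.1f}")
    print(f"Range of household size: {size_range[0]} to {size_range[1]}")
    print(f"% of households using a latrine: {pct_latrine:.1f}%")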

Examples of quantitative data sources
• census
• surveys
• observation records
• attendance numbers at health centers, schools, training, etc.
• training pre- and post-tests

What is a quantitative survey?
A survey is a method of collecting information directly from people about their feelings, motivations, plans, beliefs, behaviors and background. Surveys are usually conducted using a questionnaire. Quantitative surveys use specially designed questionnaires for which the range of answers is known in advance. Quantitative surveys can be used for BASELINE and EVALUATION surveys, which make it possible to quantify the IMPACT of a project.

A quantitative survey
• provides the scale and scope of the behaviors, attitudes and knowledge studied;
• obtains precise, statistical answers to defined questions;
• obtains quantifiable information which can be extrapolated and generalized;
• can collect information on a large population, giving precise estimates.


Handout 5.2: QUALITATIVE AND QUANTITATIVE METHODS

STRENGTHS AND WEAKNESSES OF QUALITATIVE AND QUANTITATIVE METHODS

Criterion | QUANTITATIVE | QUALITATIVE
Time | Takes more time in data collection | Takes less
Cost-effectiveness | Collection of data is more expensive; higher yield in statistical data | Cost is higher in analysis
Interview participation | Medium | High
Flexibility of protocol | Strict | High flexibility
Interviewer's skills and experience | Basic skills needed | Significant experience
Statistical basis | Valid and statistically reliable | Credible
Scope and scale | Generalizable (e.g. 80% of young people have ever heard about HIV/AIDS) | Inferential (we can infer that ...)
Type of information | Broader, number based | Richer and more in depth

CRITERIA FOR SELECTING QUANTITATIVE OR QUALITATIVE METHODS

Quantitative methods:
• Obtain precise, statistical answers to defined questions
• Collect information on a large population giving precise estimates
• Obtain quantifiable information which can be extrapolated, generalized
• Some information about the problem(s) and issue(s) studied is already available

Qualitative methods:
• Obtain rich information and understanding of community life, people's attitudes, opinions, beliefs and behaviors
• Explore attitudes
• Research sensitive topics
• Get a "feel" for a problem

The best approach is a mixture of both quantitative and qualitative methods: they provide different perspectives, have different advantages, and allow cross-checking of information.

QUALITATIVE → QUANTITATIVE → QUALITATIVE

If there is no qualitative information available prior to a major quantitative survey, it is ESSENTIAL to conduct qualitative research first; this makes it possible to design a good survey questionnaire. The quantitative survey then provides information on the extent of the problems studied and guides which research to pursue. A follow-up qualitative study allows further and more in-depth insight into and understanding of the problems and issues found in the quantitative study.


Handout 5.3: CONDUCTING QUANTITATIVE BASELINE AND EVALUATION SURVEYS

1. SURVEY PLANNING AND DESIGN
• Prepare outline including survey objectives, research questions, target population, coverage and reach of study, timeline and budget
• Review existing information
• Obtain permissions/buy-ins by key stakeholders and partners
• Conduct preliminary qualitative assessment; obtain input from participants on issues, questions to be asked, and categories of anticipated replies

2. QUESTIONNAIRE DESIGN
• Design draft questionnaire
• Pre-test and review draft questionnaire
• Finalize questionnaire
• Draft tabulation plan

3. SAMPLING
• Map target population and compile sampling frame
• Design sample and decide on sample size
• Select sample

4. TRAINING AND FIELDWORK
• Select interviewers and supervisors
• Train interviewers and supervisors
• Complete fieldwork
• Supervise fieldwork
• Check and file questionnaires

5. DATA PROCESSING
• Check forms
• Code and edit questionnaires
• Transfer data to computer
• Clean and edit computerized data

6. DATA ANALYSIS AND REPORTING
• Produce tables based on tabulation plan
• Prepare charts and graphs
• Study tables and draw conclusions from findings
• Prepare draft and final reports

7. DISSEMINATION OF FINDINGS
• Print and distribute appropriate reports for targeted audiences
• Organize seminars, workshops, and discussions with project staff, beneficiaries, partners, donors, and other stakeholders to communicate findings and get involvement in determining follow-up action plans
• Prepare action plan to carry out recommendations


Handout 5.4: SURVEY PLANNING & DESIGN

REVIEW EXISTING INFORMATION

Why?
It is of key importance to review existing information before conducting a survey in order to:
• find out and learn what is already known about the issues and topics the survey aims to address;
• avoid duplication (don't waste time, effort and scarce resources!).

How?
• Review all available and relevant information, including earlier studies and surveys, papers, reports, policies and others
• Discuss the research areas as widely as possible, both formally and informally. Contact governmental and non-governmental organizations, community representatives, community groups, key informants

SURVEY OUTLINE
Include in the survey outline:
• Objectives and aims of the survey
• Target population
• Survey coverage
• Methods (including sample design and size)
• Data collection
• Data processing and analysis
• Budget
• Timeline

Example: Survey timeline
(A table plotting tasks against weeks 1-8: Planning and design; Prepare survey outline and budget; Review information; Obtain permissions; Questionnaire design; Draft questionnaire; Pre-test draft questionnaire; Review and finalize questionnaire; Draft tabulation plan; etc.)


Budget items to consider for a survey:
• Personnel: interviewers, supervisors, data entry clerks, data analysis, technical assistance (sampling, demography, etc.), time of permanent staff
• Materials and supplies
• Training of interviewers and supervisors
• Travel (including lodging and per diem)
• Transportation (including vehicle rental, fuel)
• Computer
• Translations
• Communications
• Printing of report
• Public presentation, meetings, workshops
• Contingencies


Handout 5.5: SAMPLING

What is a sample?
A sample is a subset of the population that is used to gain information about the entire population. A good sample will represent the population well (a REPRESENTATIVE SAMPLE). How well a sample represents the target population depends on the sampling frame, the sample size, and the sample design and selection procedures.

Why sample?
Sampling is efficient: samples can be studied more quickly than target populations and are less expensive.
Sampling is precise: sampling helps to focus the survey on precisely the characteristics of interest. For example, if a study wants to compare rural and urban communities, sampling strategies are available (in this case stratified sampling) to obtain what is needed.

Checklist for obtaining a sample that represents the target population:
• Survey objectives are stated precisely
• Target population is clearly defined
• Rigorous sampling methods are chosen


Handout 5.6: TARGET POPULATION

Before choosing a sample, you must define who is part of the target population for the survey. This choice will depend on the purpose of the study. Inclusion and exclusion criteria will have to be defined, that is, characteristics that include certain people and rule out certain others.

Example: Target population and inclusion criteria
Research question: What is the Contraceptive Prevalence Rate in Province X?
Target population: Women of reproductive age
Inclusion criteria:
• Married
• Between ages 15 and 49

Example: Target population and inclusion criteria
Research question: Are antenatal care clients satisfied with the care received in health centers?
Target population: Pregnant women
Inclusion criteria:
• Pregnant in the 6-month period prior to the survey
• Visited a health center at least once for antenatal care in the 6-month period prior to the survey
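A minimal sketch of how inclusion criteria like those in the first example might be applied to a list of potential respondents; the names, ages and field names are hypothetical.

    # Minimal sketch: applying inclusion criteria (married, aged 15-49) to a
    # hypothetical list of women to define the target population for a CPR survey.
    women = [
        {"name": "A", "age": 17, "married": False},
        {"name": "B", "age": 32, "married": True},
        {"name": "C", "age": 51, "married": True},
        {"name": "D", "age": 24, "married": True},
    ]

    def meets_inclusion_criteria(person):
        """Return True if the person is married and between 15 and 49 years old."""
        return person["married"] and 15 <= person["age"] <= 49

    target_population = [p for p in women if meets_inclusion_criteria(p)]
    print([p["name"] for p in target_population])   # ['B', 'D']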


Handout 5.7: SAMPLING FRAME

Once the target population is defined, the next question is whether or not there are complete lists or maps of the target population from which a sample can be selected. Such a list or map is called the SAMPLING FRAME. The sampling frame is the set of people that has a chance to be selected. A sample can only be representative of the population included in the sampling frame. One design issue is how well the sampling frame corresponds to the population the researcher wants to describe. Most sampling frames fall into 3 general classes:
• Sampling is done from a more or less complete list of individuals in the population to be studied.
• Sampling is done from a set of people who go somewhere or do something that enables them to be sampled (example: patients who received medical care in a health center, or people who attended a meeting). In these cases there are no advance lists from which sampling occurs; the creation of the list and the process of sampling occur simultaneously.
• Sampling is done in two or more stages, with the first stage involving selecting something other than the individuals finally to be selected (example: villages selected first, then individuals in the selected villages).

Examples of sampling frames
• a list of villages
• a list of antenatal care clients
• a list of government health staff trained in HIV/AIDS prevention
• a village map showing individual dwellings
• a list of household heads

There are three characteristics of a sampling frame that a researcher should evaluate:

• Comprehensiveness
A sample can only be representative of the sampling frame, that is, the population that actually had a chance to be selected. A key part of evaluating any sampling scheme is determining the percentage of the study population that has a chance of being selected and the extent to which those excluded are distinctive.

Examples of incomplete sampling frames
• recent migrants not listed (out-of-date)
• compiled in 1965 (out-of-date)
• ethnic minorities not included (biased)
• squatter population not included (biased)


• Probability of selection
Is it possible to calculate the probability of selection of each person sampled? For example, if a sample is drawn from clinic visit records over a 6-month period, it will give individuals who visited the doctor numerous times a higher chance of selection than those who saw the doctor only once.

• Efficiency
In some cases, sampling frames include units that are not among those that the researcher wants to sample. However, this can be solved. For example, if a survey wants to interview women of reproductive age, a household sample can be selected and, within each household, all women of reproductive age can then be interviewed.


Handout 5.8: SAMPLING METHODS

Probability sampling

What is probability sampling?
Probability samples have the characteristic that each unit in the target population has a known, non-zero probability of being included in the sample. Probability samples use RANDOM selection mechanisms. The following are the most commonly used random selection mechanisms:

a. Simple random sampling
• Obtain a list of individuals from which to select a sample. If the sample is to be representative of the population, the sampling frame must include all or nearly all members of the population.
• Use a random number table and select individuals or "sampling units". Each individual has the same chance of being selected from the list. Members of the target population are selected one at a time.
• Once members have been selected, they are not eligible to be selected again; they are removed from the sampling frame for the selection of subsequent members.
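In practice a random number table can be replaced by a computer. A minimal Python sketch of simple random sampling from a hypothetical frame of 100 household IDs:

    # Minimal sketch: simple random sampling without replacement from a
    # hypothetical sampling frame of 100 households, selecting 9 of them.
    import random

    sampling_frame = list(range(1, 101))       # household IDs 1-100
    sample = random.sample(sampling_frame, 9)  # each unit has an equal chance
    print(sorted(sample))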

b. Systematic sampling
• Obtain a list of individuals or a map from which to select a sample.
• Choose a random start, and then select every nth unit (e.g. every 8th or every 125th). The random start is an essential component of the process; without a random start, some members have zero probability of selection and it cannot be considered a probability sample.
• If you have a population of 10,000 and you want to select a sample of 250 individuals, the sampling interval n would be 40 (10,000/250). After choosing a random start, every 40th unit is selected.
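A minimal Python sketch of the same procedure, using the 10,000/250 example above (the frame itself is hypothetical):

    # Minimal sketch: systematic sampling with a random start.
    # With a frame of 10,000 and a sample of 250, the interval is 10,000 / 250 = 40.
    import random

    frame_size = 10_000
    sample_size = 250
    interval = frame_size // sample_size          # 40

    start = random.randint(1, interval)           # random start between 1 and 40
    sample = list(range(start, frame_size + 1, interval))
    print(len(sample), sample[:5])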

c. Stratified sampling
A stratified random sample is one in which the population is divided into subgroups or "strata", and a random sample is then selected from each subgroup.
• Divide the members of your target population into groups which differ in ways which are significant to the issue being studied. Each member is assigned to one and only one group.
• Select independent random samples from each of the groups, using simple random or systematic sampling.
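A minimal Python sketch of stratified sampling; the frame, strata and sample sizes are hypothetical.

    # Minimal sketch: stratified sampling. Each member of a hypothetical frame
    # is assigned to one stratum (urban / semi-urban / rural), then an
    # independent simple random sample is drawn from each stratum.
    import random

    frame = (
        [("urban", i) for i in range(1, 41)]
        + [("semi-urban", i) for i in range(1, 31)]
        + [("rural", i) for i in range(1, 31)]
    )

    strata = {}
    for stratum, unit in frame:
        strata.setdefault(stratum, []).append(unit)

    sample = {stratum: random.sample(units, 5) for stratum, units in strata.items()}
    print(sample)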


Examples of criteria for dividing the population into subgroups
• groups by residence: 1. urban; 2. semi-urban; 3. rural
• groups by marital status: 1. single; 2. married; 3. divorced or widowed

d. Cluster/multistage sampling
What is a cluster? A cluster is a naturally occurring unit such as a school, a village, or a hospital. Cluster sampling is usually used for large surveys. To select a cluster sample:
• Obtain or compile a list of clusters (e.g. a list of schools).
• Randomly select a certain number of clusters (e.g. 10 out of 100 schools).
• Include all members of the selected clusters in the sample (e.g. all school teachers in the 10 schools).
Multistage sampling is an extension of cluster sampling. After randomly selecting clusters, the next step is the following:
• Select a sample from each cluster using simple random or systematic sampling.
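A minimal Python sketch contrasting cluster and multistage selection, using the schools example; the numbers of schools, teachers and sub-sample sizes are hypothetical.

    # Minimal sketch: cluster and multistage sampling over hypothetical schools.
    import random

    schools = {f"school_{i}": [f"teacher_{i}_{j}" for j in range(1, 11)]
               for i in range(1, 101)}            # 100 schools, 10 teachers each

    # Cluster sampling: randomly select 10 schools, include all their teachers.
    selected_schools = random.sample(list(schools), 10)
    cluster_sample = [t for s in selected_schools for t in schools[s]]

    # Multistage sampling: within each selected school, sub-sample 3 teachers.
    multistage_sample = [t for s in selected_schools
                         for t in random.sample(schools[s], 3)]

    print(len(cluster_sample), len(multistage_sample))   # 100, 30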

PROBABILITY SAMPLING METHODS

Simple random
Description: Each member of the study population has an equal probability of being selected.
Advantages: Simple, self-weighting.
Disadvantages: Lists (sampling frames) may not be available or may be incomplete. Members of a sub-group of interest may not be included in appropriate proportions. Samples may be very dispersed.

Systematic
Description: Each member of the study population is listed/mapped, a random start is designated, and the members of the population are selected at equal intervals (the sampling interval).
Advantages: Simpler and faster.
Disadvantages: Lists (sampling frames) may not be available or may be incomplete. Must watch for recurring patterns within the sampling frame (example: lists arranged by age, sex). Samples may be very dispersed.


Stratified
Description: Each member of the study population is assigned to a group or stratum according to meaningful characteristics, then a simple random or systematic sample is selected in each group.
Advantages: Allows sub-population analysis. The sample is more likely to reflect the population, improving efficiency.
Disadvantages: Must calculate sample size for each subgroup, increasing sample size, cost and time. Weighting needed.

Cluster
Description: Each member of the study population is assigned to a cluster, then clusters are selected randomly and all members of the selected clusters are included in the sample.
Advantages: Does not require a listing of the full population. Less geographical spread of sampling units, and therefore saves time and money.
Disadvantages: Increases sampling error. Clusters may not be representative.

Multistage
Description: Clusters are selected as in cluster sampling, then sample members are selected within each cluster by simple random sampling. Clustering may be done at more than one stage.
Advantages: Does not require a listing of the full population. Less geographical spread of sampling units, and therefore saves time and money.
Disadvantages: Increases sampling error.

Example of Simple Random Sampling 1 11 21 31 41 51 61 71 81 91

2 12 22 32 42 52 62 72 82 92

3 13 23 33 43 53 63 73 83 93

4 14 24 34 44 54 64 74 84 94

5 15 25 35 45 55 65 75 85 95

6 16 26 36 46 56 66 76 86 96

7 17 27 37 47 57 67 77 87 97

8 18 28 38 48 58 68 78 88 98

9 19 29 39 49 59 69 79 89 99

10 20 30 40 50 60 70 80 90 100

Sample frame: 100 units. Sample size: 9 units (9%). Each unit randomly selected. In this example the units selected were: 15, 51, 95, 2, 55, 24, 25, 4, 92.


Example of Systematic Sampling

[Figure: the same sampling frame of 100 units, numbered 1 to 100, in a 10 x 10 grid.]

Sample frame: 100 units
Sample size: 9 units (9%)
Sampling interval: 100/(9-1) = 12 (approx.)
Select every 12th unit, beginning with a randomly selected number between 1 and 12. In this example the randomly selected start number was 4, so the units selected were 4, 16, 28, 40, 52, 64, 76, 88, 100.
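The same frame can be sampled systematically in code: pick a random start and then take every unit at the sampling interval. This sketch is illustrative only, not part of the handout.

import random

frame = list(range(1, 101))
interval = 12                        # sampling interval used in the example above
start = random.randint(1, interval)  # random start between 1 and the interval
sample = frame[start - 1::interval]  # a start of 4 gives 4, 16, 28, ..., 100
print(sample)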

Example of Stratified Random Sampling

[Figure: a sampling frame of 100 units divided into four strata of 25 units each, labelled A1-A25, B1-B25, C1-C25 and D1-D25.]


Sample frame: 4 strata with 25 units in each
Sample size: 3 units per stratum (12%)
Three units randomly selected within each stratum. In this example the units selected were: A23, A2, A25; B12, B20, B1; C13, C8, C18; D5, D25, D21.

Example of Cluster Sampling

[Figure: a 10 x 10 grid of 100 units shown for one cluster, with the four clusters labelled Cluster W, Cluster X, Cluster Y and Cluster Z.]

Sample frame of clusters: list of 4
Sample size: 9 units (9% of that cluster)

Example of Multistage Sampling

[Figure: four clusters labelled Cluster W, Cluster X, Cluster Y and Cluster Z; within the selected cluster, two sub-clusters of 25 units each are shown.]

One cluster randomly selected out of 4 clusters.
Within Cluster "W", 2 sub-clusters randomly selected.
Within these sub-clusters: sample frame of 50 units (25 in each). (Note: lists of units are needed only for these two sub-clusters.)
Sample size: 6 from each; total 12 units sampled (24% of the units within these two sub-clusters).


Non-probability sampling

What is non-probability sampling?
Non-probability sampling includes several sampling approaches where subjective judgments play a role in the selection of the sample. No randomized method is used. Non-probability samples are not representative of the target population.

When are non-probability samples useful?
• Exploratory research or pilot surveys
• Surveys of special/specific populations (e.g. traditional medicine users)
• Surveys of hard-to-identify groups (e.g. drug users)

NON-PROBABILITY SAMPLING METHODS

Convenience sampling: Select cases based on their availability for the study.
Most similar/dissimilar cases: Select cases that are judged to represent similar conditions or, alternatively, very different conditions.
Typical cases: Select cases that are known beforehand to be useful and not to be extreme.
Critical cases: Select cases that are key or essential for overall acceptance or assessment.
Snowball sampling: Respondents identify additional members to be included in the sample.
Quota sampling: Interviewers select a sample that yields the same proportions as the population proportions on easily identified variables.


Handout 5.9: SAMPLE SIZE

Factors to consider in the choice of a sample size:
• precision or degree of accuracy (tolerable error);
• confidence level (usually 95%, meaning that we can be 95% confident that the estimate will fluctuate no more than ± the tolerable error, that is, that 95 out of 100 samples will be representative of the population);
• variance or standard deviation of the main variable(s) studied (if not available, use the "worst case scenario", p = 0.5);
• resources and time available;
• skills of staff.

The size of the population from which a sample is selected has virtually no impact on how well that sample is likely to describe the population. For example, a sample of 150 people will describe a population of 15,000 or 15 million with virtually the same degree of accuracy, assuming that all other aspects of the sample design and sampling procedures were the same.

Sample size for simple random sampling at the 95% confidence level (binomial distributions):

n = (z / tolerable error)² x (p) x (1 - p)

n = sample size
z = standard score corresponding to a given confidence level (z = 1.96 for the 95% confidence level)
p = expected proportion with the characteristic
(1 - p) = expected proportion without the characteristic

Example: Simple random sampling size calculation for the 95% confidence level
The CARE Cambodia team planning a health survey in Pursat does not have any estimate of contraceptive use in Pursat, which is what they want to study. They therefore assume a 50%-50% distribution, the "worst case scenario" (p = 0.5). They want the results to have a maximum error of ± 7 percentage points at the 95% confidence level.

n = (1.96/0.07)² x (0.5) x (1 - 0.5)
n = 196

The sample size required is 196 women of reproductive age.

TIP: If the population is very small (e.g. 30 individuals), do not use a sample.
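The calculation above can be wrapped in a small helper so that different tolerable errors can be tried quickly. This is an illustrative sketch, assuming Python; sample_size is a made-up function name, not a standard routine.

import math

def sample_size(tolerable_error, p=0.5, z=1.96):
    """Simple random sampling size for a proportion at the 95% confidence level."""
    return math.ceil((z / tolerable_error) ** 2 * p * (1 - p))

print(sample_size(0.07))   # 196, as in the Pursat example above
print(sample_size(0.05))   # a tighter tolerable error needs a larger sample (385)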


Sample size and standard errors for simple random sampling for the 95% confidence level.

Sample     Binomial percentage distribution
size       50/50   60/40   70/30   80/20   90/10
100         10.0     9.8     9.2     8.0     6.0
200          7.1     6.9     6.5     5.7     4.2
300          5.8     5.7     5.3     4.6     3.5
400          5.0     4.9     4.6     4.0     3.0
500          4.5     4.4     4.1     3.6     2.7
600          4.1     4.0     3.7     3.3     2.4
700          3.8     3.7     3.5     3.0     2.3
800          3.5     3.5     3.3     2.8     2.1
900          3.3     3.3     3.1     2.7     2.0
1,000        3.2     3.1     3.0     2.5     1.9
1,100        3.0     3.0     2.8     2.4     1.8
1,200        2.9     2.8     2.6     2.3     1.7
1,300        2.8     2.7     2.5     2.2     1.7
1,400        2.7     2.6     2.4     2.1     1.6
1,500        2.6     2.5     2.4     2.1     1.5
1,600        2.5     2.4     2.3     2.0     1.5
1,700        2.4     2.4     2.2     1.9     1.4
1,800        2.4     2.3     2.2     1.9     1.4
1,900        2.3     2.2     2.1     1.8     1.3
2,000        2.2     2.2     2.0     1.8     1.3

Adjustments for design effects*

Sampling method        Adjustment range
Stratified sampling    0.50 to 0.95
Cluster sampling       1.50 to 3.00
Multistage sampling    1.25 to 1.50

* Based on world-wide survey experience

As can be observed in the above table, the design effects for cluster sampling vary from 1.5 to 3, which of course has important implications for the sample size, cost and time needed to complete the fieldwork. For cluster sampling the exact design effect depends on the number, size and homogeneity of the clusters. As a general rule, it is better to have a larger number of small clusters than a small number of large clusters.

Example: Adjustment for design effect
The CARE Cambodia team decided to use a stratified cluster sampling design, and the sample size of 196 has to be adjusted, following the advice of an expert, by 1.3.
n = 196 x 1.3
n = 255


Adjustments for non-response

Most common reasons for non-response:
• inability to contact the respondent (e.g. respondent not at home at the time of the survey);
• inability of the respondent to complete the interview (e.g. respondent is ill, interviewer does not speak the respondent's language);
• refusal of the respondent to answer the survey questionnaire.

There are certain techniques to minimize non-response, which will be discussed in other sections. Non-response has implications for the sample size calculation. If, for example, a response rate of 90% is expected, then the sample size will have to be adjusted by 1.10.

Example: Adjustment for non-response
The CARE Cambodia team adjusted the sample for non-response, expecting that about 10% of the selected sample of the target population would not be found at the time of the survey.
n = 255 x 1.10
n = 281
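Putting the two adjustments together, the final sample size can be computed in sequence from the simple random sampling size. A minimal sketch (not from the handout), using the figures of the Cambodia example:

import math

n_base = 196                 # simple random sampling size from the example above
design_effect = 1.3          # adjustment advised for the stratified cluster design
non_response_factor = 1.10   # adjustment for an expected response rate of about 90%

n_design = math.ceil(n_base * design_effect)         # 255
n_final = math.ceil(n_design * non_response_factor)  # 281
print(n_design, n_final)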

Sampling error

Standard error (margin of error) for the 95% confidence level:

SE = 2 x √( p x (1 - p) / n )

SE = standard error of the estimated proportion (at the 95% confidence level, using z ≈ 2)
p = proportion with the characteristic
(1 - p) = proportion without the characteristic
n = sample size

Example: Standard error calculation
The CARE Cambodia team found in their survey that the contraceptive prevalence rate in Pursat was 17%. The actual number of respondents to the survey questionnaire was 248.

SE = 2 x √( 0.17 x (1 - 0.17) / 248 )
SE = 2 x √( 0.1411 / 248 )
SE = 2 x √0.000569
SE = 4.7 (percentage points)

The survey had initially calculated a sample size and error based on the "worst case scenario". Now that the results are known, the error estimate for current contraceptive use is ± 4.7. This means that the CARE Cambodia team can be 95% sure that current contraceptive use in Pursat province lies between 12.3% and 21.7%.
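The same margin of error can be recomputed for any observed proportion once the survey results are in. An illustrative sketch, assuming Python; margin_of_error is a made-up helper name, and z = 1.96 is used for the 95% level:

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

e = margin_of_error(0.17, 248)
print(round(e, 3))            # about 0.047, i.e. +/- 4.7 percentage points
print(0.17 - e, 0.17 + e)     # roughly 0.123 to 0.217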


Non-sampling errors
• Imprecision in the definition of the target and study population
• Errors in survey design
• Non-response
• Measurement errors (e.g. poorly worded questions and response choices, inadequately trained interviewers)
• Errors in data processing


Handout 5.10: EXERCISE - TARGET POPULATION AND SAMPLING FRAME

TARGET POPULATION
Define who is part of the target population for the survey.

Research question:

Target population:

Inclusion criteria:

From where could you select a sample of the target population? Discuss which sampling frames could be used for your survey.

Exercise: Sample size and sampling error

Simple random sampling size calculation for the 95% confidence level
The CARE Cambodia team planning a health survey in Pursat province does not have any estimate of contraceptive use in Pursat, which is what they want to study. They therefore assume a 50%-50% distribution, the "worst case scenario" (p = 0.5). They want the results to have a maximum error of ± 6 percentage points at the 95% confidence level.
Sample size: n =

Adjustment for design effect
The CARE Cambodia team decided to use a stratified cluster sampling design. A sampling expert advised to increase the simple random sampling sample size by 1.3.
Sample size adjusted for design effect: n =

Adjustment for non-response
The CARE Cambodia team adjusted the sample for non-response, expecting that about 10% of the selected sample of the target population would not be found at the time of the survey.
Sample size adjusted for non-response: n =

Standard error calculation
The CARE Cambodia team found in their survey that the contraceptive prevalence rate in Pursat was 17%. The actual number of respondents to the survey questionnaire was 362.
Standard error =


Handout 5.11: QUESTIONNAIRE DESIGN

DESIGN OF QUESTIONNAIRES FOR STRUCTURED INTERVIEWS

Data collection instruments are the "thermometers" used to assess facts, knowledge, attitudes, behaviors, perceptions and opinions. It is advisable to follow these steps when designing instruments:
1. Review previous instruments used for similar studies; conduct focus group and/or key informant interviews with representatives of the target population to get agreement on the purpose for and basic design of the survey.
2. Brainstorm a first draft instrument.
3. Get comments from others and reduce the number of questions to the minimum set needed to measure the indicators.
4. Review the first draft and prepare a second draft.
5. Pre-test the instrument (if applicable) for:
   • sequence of questions/flow
   • comprehension of questions
   • appropriateness of questions
   • coding to fit responses
   • timing
   • skip-pattern
6. Review the second draft based on the pre-test findings and prepare a third draft.
7. (If applicable) translate the instrument from the language in which it was designed into the language used for interviewing, and back, by two independent translators.

Designing questions to be good measures
Good questions are:
• reliable (providing consistent measures in comparable situations);
• valid (answers correspond to what they are intended to measure).


TIPS FOR DESIGNING A QUESTIONNAIRE
• Use simple language.
• Avoid asking for more than one piece of information in a single question. (Bad question: "Were you pleased with the cleanliness and the hours of service of the clinic?")
• Avoid emotionally charged questions. (Bad question: "Do you think that condoms are only for husbands who cheat on their wives?")
• Avoid superficial questions that provoke near-unanimous agreement. (Bad question: "Do you think that the interest rate charged by the credit program should be lower?")
• Do not assume knowledge or agreement. (Before asking "How many times did you go to the health center?" ask "Did you ever go to the health center?")
• Limit questions to respondents' own knowledge, attitudes and practice.
• Pre-code responses.
• Pre-test on a population similar to the target population!


Handout 5.12: TYPE OF QUESTIONS

1. Open and closed-ended questions

Closed-ended questions have set responses, which allow easier data processing. Open-ended questions have space to record exact quotes or to paraphrase the respondent. They give respondents the opportunity to state a position in their own words. While structured interviews should use closed-ended questions as much as possible, open-ended questions are very useful for exploratory or pre-testing work, when the full range of possible answers is not known. Structured interviews can contain some open-ended questions; however, this means that responses will have to be coded prior to data entry. It is important to then consider the time that will be needed for analysis.

Example: Open-ended questions

Question                                                           Codes
What did you like most about your visit to the health center?
________________________________________________________________
And what did you like least?
________________________________________________________________

Example: Closed-ended question

Question: Did your child have diarrhea in the last two weeks? (IF YES:) How many times?
Codes:
  No diarrhea episode in past 2 weeks ....... 1
  1 time .................................... 2
  2 times ................................... 3
  More than 2 times ......................... 4

2. Answer lists

The answer list is a way of measuring strength of feeling among a standard range of possible answers. Respondents choose which answer(s) they agree with.

Example: Answer list

No. 1. Which of the following sicknesses do you consider the most important problem for your children? (READ OUT)
                                      YES   NO
  Meningitis?                          1     2
  Malaria?                             1     2
  Diarrhea?                            1     2
  Respiratory infections?              1     2
  Skin infections?                     1     2
  Measles?                             1     2
  Other? (specify: _______)            1     2


Example: No answer list (spontaneous response)
No. 1. Which sicknesses do you consider the most important problem for your children?
  Meningitis ................... 1
  Malaria ...................... 2
  Diarrhea ..................... 3
  Respiratory infections ....... 4
  Skin infections .............. 5
  Measles ...................... 6
  Other (specify: _______) ..... 7

The advantage of an answer list is that, in the short time available in an interview, respondents often forget to mention all the important answers spontaneously. Answer lists also have disadvantages: respondents who do not understand them may answer at random, and they may not give their true answer if it is not listed. Even with an "other" category, items not on the list are less likely to be mentioned. Lists are most effective when respondents can read and look back over the items. When a respondent cannot read, so that the list has to be read aloud, she/he may not consider all the alternatives.

3. Measuring attitudes

It is harder to collect reliable data on attitudes than on more factual matters, and it is very important to treat such topics with caution. It is advisable to explore attitudes through in-depth case studies, interviews and focus groups. Even well designed questionnaires can only scratch the surface of what people really think, perceive and feel. One option for collecting data on attitudes is the use of scales. Scaling means constructing an ordered list of opinions or attitudes. Respondents select the statements they agree with.

Example: Measuring attitudes

1. In general, do you approve or disapprove of couples using a method to avoid getting pregnant?

  APPROVE ....... 1
  DISAPPROVE .... 2
  DK ............ 8

It is important that interviewers always ask the questions in the same way, keeping the exact wording. Changes could make the questions useless.

4. Measuring intentions

To find out about intentions rather than opinions, a time scale can be used.

Example: Measuring intentions

Question: Do you want to use a contraceptive method in the future? (IF YES:) When do you plan to start using a contraceptive method?
Codes:
  Within one month ............. 1
  1 month to 5 months .......... 2
  6 months to 1 year ........... 3
  More than one year ........... 4
  Undecided .................... 5
  Does not plan to use ......... 6
  Other: _______________ ....... 7


Handout 5.13: QUESTIONNAIRE LAYOUT, LENGTH AND CODING

Introducing the survey
Before asking the first question, interviewers should give respondents some information. Include in the introduction the name of the organization conducting the survey, the name of the interviewer, the purpose of the survey and how results will be used. Avoid making false promises or comments that will raise expectations.

Order of questions
• Start with questions that are easy to answer. This will help to establish trust and reassure the respondent. It is better to leave questions about attitudes, beliefs and intentions to the later stages of the interview.
• Work from the particular to the general. For example, in a survey about water use, questions about the source of water supply would come first, and questions like "Overall, are you satisfied with the place you collect your water?" would come towards the end of the interview.

Questionnaire length
Even when the research topic demands a long interview, do not expect people to give more than 30 to 45 minutes of their time. Long interviews increase the risk of error.

Advantages of short interviews
• Fewer mistakes
• Respondents usually give more meaningful and "true" answers and are more willing to collaborate with the survey
• The sample size can be increased
• Analysis is simpler

Codes
Coding means giving a number (or other symbol) to each possible answer. Codes are useful in order to summarize a large amount of information. They are essential, even for small samples, when data are to be analyzed by computer. Closed-ended questions are pre-coded; open-ended questions have to be coded once forms are returned to the office. For open-ended questions clear coding instructions are needed. Always include an "other", a "no response" and a "don't know" (if it applies) category in order to classify all possible responses.

Skips and filters
Think about the following question. What is wrong with it?

Have you ever borrowed any cash? If yes, from whom did you borrow money and how much the last time you did so?

This question aims at finding out too many things at the same time; it can be divided into three clear and precise questions. Also, those who never borrowed money do not need to be asked how much they borrowed, and so on. This can be solved using a SKIP.


Example: Skips

No. 1  Have you ever borrowed cash?
       Yes ........................ 1
       No ......................... 2     Skip to 4

No. 2  Let's talk about the last time you borrowed cash. From whom did you borrow it?
       Bank ....................... 1
       Money lender ............... 2
       Relative or friend ......... 3
       Other: ____________ ........ 4

No. 3  What amount did you borrow at that time? (Record in local currency or in US$)
       Local currency __ __ __ __
       US$ __ __ __ __

No. 4  Have you ever borrowed anything else besides cash?
       Yes ........................ 1
       No ......................... 2     Skip to 10

No. 5  What did you borrow? (Check all that apply)
       Food ....................... 1
       Farm inputs ................ 2
       Farm tools ................. 3
       Other: ___________ ......... 4
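Skip instructions like these can also be checked mechanically once the answers are entered. The sketch below is illustrative only (it is not part of the handout); it assumes the answers are stored in a Python dict keyed by question number, using the codes above.

def check_skips(answers):
    """Flag answers that should have been skipped according to the skip pattern."""
    problems = []
    if answers.get(1) == 2 and any(q in answers for q in (2, 3)):
        problems.append("Q1 = No, so Q2 and Q3 should have been skipped")
    if answers.get(4) == 2 and 5 in answers:
        problems.append("Q4 = No, so Q5 should have been skipped")
    return problems

print(check_skips({1: 2, 2: 1, 4: 2}))   # flags the wrongly answered Q2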


Handout 5.14: EXAMPLE - QUESTIONNAIRE WITH SEVERAL TYPES OF QUESTIONS

Birth Spacing Campaign Questionnaire

(Introduction: Explain that you work for CARE and that you are conducting a survey about health in several provinces. Ask to speak to a married man or woman, aged 15 to 44. If there is none in the household, or none available, DO NOT CONDUCT THE INTERVIEW.)

A. Identification of questionnaire
No. Question Coding

1. Questionnaire number

__ __ __

3. Rural or urban Urban..............................1 Rural...............................2

4. Record gender Female............................1 Male................................2

B. Interview No. Question Coding Skip

5. How old are you? __ __

END IF <15 or >44

6. Are you married? Yes.................................1 No..................................2

END

7. Have you ever attended school? (If yes:) What is the highest level of school you attended?

Never attended................1 Primary incomplete..........2 Primary complete............3 Secondary incomplete.....4 Secondary complete+......5

8. Does your household have .....
                        Yes   No
   ..... a radio?        1     2
   ..... a TV?           1     2

9. How often do you listen to the radio? Daily................................1 Weekly............................2 Less than weekly.............3 Never..............................4

10. How often do you watch TV?

Daily................................1 Weekly............................2 Less than weekly.............3 Never..............................4

11. Do you have children? (If yes:) How many children do you have?

__ __

12. How old is your youngest child? __ __


No. Question Coding Skip

14. Have you ever heard about any method to delay or avoid a pregnancy? (If yes:) Have you heard about the ......
                                       Yes    No
    daily pill?                         1      2
    monthly pill?                       1      2
    injection?                          1      2
    IUD?                                1      2
    condom?                             1      2
    female sterilization?               1      2
    male sterilization?                 1      2
    periodic abstinence/rhythm?         1      2
    withdrawal?                         1      2
    other: __________?                  1      2

    Skip: If the respondent does not know any method: END

15. Do you think that couples who want no more children, or who want to wait a longer time before having a baby, should use a birth spacing method or not?

Yes..................................1 No...................................2

Not sure...........................3

16. Have you or your wife/husband ever used a birth spacing method? Which one(s)?
    Daily pill ................. 1
    Monthly pill ............... 2
    Injection .................. 3
    IUD ........................ 4
    Condom ..................... 5
    Female sterilization ....... 6
    Male sterilization ......... 7
    Periodic abst./rhythm ...... 8
    Withdrawal ................. 9
    Other: __________ ......... 10
    Never used ................ 11     Skip to 19

17. Are you or your wife/husband currently using any birth spacing method? Which one?
    Daily pill ................. 1
    Monthly pill ............... 2
    Injection .................. 3
    IUD ........................ 4
    Condom ..................... 5
    Female sterilization ....... 6
    Male sterilization ......... 7
    Periodic abst./rhythm ...... 8
    Withdrawal ................. 9
    Other: __________ ......... 10
    Not using ................. 11     Skip to 19

18. When did you start the use of the method you are using now?   ___ / 19__ __

19.

From where/whom did you first hear about birth spacing?

Radio or TV.....................1 Friends/relatives..............2 Health staff......................3 Other: ____________.....4

Thank you very much for your time and cooperation!


Handout 5.15: TWELVE GUIDELINES FOR DEVELOPING GOOD QUESTIONS

1. AVOID LOADED OR EMOTIONALLY CHARGED WORDS. (e.g. Do you agree with the fanatical opinion of government regarding the marketing of agricultural surplus?)
2. AVOID SUPERFICIAL, PAT QUESTIONS THAT ENCOURAGE STEREOTYPICAL, UNIFORM RESPONSES. (e.g. Do you like the extension services?)
3. AVOID DOUBLE-BARRELED QUESTIONS. (e.g. Do you attend meetings of the cooperative and take loans from it?)
4. AVOID QUESTIONS THAT PRESUME KNOWLEDGE, EXPERIENCES OR PAST PRACTICES. (e.g. Are you continuing to use pesticides?)
5. AVOID ESOTERIC OR TECHNICAL WORDS. (e.g. What should you do when your child has an acute upper respiratory infection?)
6. AVOID NEGATIVELY WORDED QUESTIONS. (e.g. Are you not now leaving your water storage containers uncovered?)
7. AVOID CHARACTERIZING THE ISSUE IN A WAY AS TO PREJUDICE THE RESPONSE. (e.g. Which do you prefer, the improved extension service or the old extension service?)
8. AVOID SWEEPING QUESTIONS. (e.g. Are you in favor of modern farming practices?)
9. AVOID QUESTIONS THAT ARE AMBIGUOUS. (e.g. When did you first become interested in planning your family?)
10. AVOID UNNECESSARILY COMPLEX QUESTIONS. (e.g. Would you or another member of your family be interested in attending a meeting of potential entrepreneurs to learn about bookkeeping methods that can be used for small businesses and how to gain access to credit through cooperative methods?)
11. AVOID QUESTIONS THAT MAY REQUIRE GUESSING. (e.g. How much money did you spend on contraceptives last year?)
12. AVOID QUESTIONS THAT DO NOT ADEQUATELY DEFINE THE EXTENT OF DETAIL OR THE DEGREE OF THOROUGHNESS OF THE DESIRED ANSWER. (e.g. What are some of the things about the project that you like?)


Handout 5.16: DATA COLLECTION - TRAINING AND FIELDWORK

Interviewers and supervisors play a major role in surveys. Their skills are essential for obtaining GOOD QUALITY DATA and therefore it is of key importance to select and train interviewers and supervisors well. Different interviewer skills and experience are needed for quantitative surveys than for in-depth interviews, focus groups or other qualitative methods.

FIELD RESEARCH FOR QUANTITATIVE STUDIES

THE INTERVIEWER

1. Role of the interviewer
The interviewer occupies the central position in surveys, as she/he is the one who collects information from the respondents. Therefore, the success of the survey depends on the quality of each interviewer's work. The responsibilities of an interviewer include:
• to locate and enlist the cooperation of selected respondents;
• to ask questions, record answers, and probe incomplete answers to ensure that responses meet the question objectives;
• to check completed interviews to be sure that all questions were asked and the responses legibly recorded.

2. Training and supervision of interviewers
Training of interviewers generally consists of a combination of "classroom" training and practical experience. Training is a continuous process. Observation and supervision throughout the fieldwork are part of the training and data collection process. Supervisors play a very important role in continuing interviewers' training and ensuring the quality of the survey.

3. Conducting an interview
Successful interviewing is an art and should not be treated as a mechanical process. Each interview is a new source of information, so it is very important to make the interview interesting and pleasant. The art of interviewing develops with practice, but there are certain basic principles which are followed by every successful interviewer:

Building rapport with the respondent
• Make a good first impression
• Always have a positive approach
  Do not use words such as "Are you too busy?", "Do you have a few minutes?" or "Would you mind answering some questions?". Such questions invite refusal before you start. Rather, tell the respondent: "I would like to ask you a few questions" or "I would like to talk to you for a few moments".

• Stress confidentiality of responses when necessary

If the respondent is hesitant about responding to the interview or asks what the information will be used for, explain that the information you collect will remain confidential, no individual names will be used. Also, you should NEVER mention other


interviews or show completed questionnaires to other interviewers or supervisors in front of a respondent or any other person.

• Answer any question from the respondent frankly
Before agreeing to be interviewed, the respondent may ask you some questions about the survey or how she/he was selected to be interviewed. Be direct and pleasant with your answer.

• Conduct interviews privately
The presence of a third person during the interview can keep the respondent from giving frank, honest answers. It is, therefore, very important that individual interviews be conducted privately and that all questions are answered by the respondent her/himself.

Tips in conducting an interview

• Be neutral throughout the interview

Most people are polite and will tend to give answers that they think you want to hear. It is therefore very important that you remain absolutely neutral as you ask the questions. Never, either by the expression on your face or by the tone of your voice, allow the respondent to think that she has given the "right" or "wrong" answer to a question. Never appear to approve or disapprove of any of the respondent's replies.

• Never suggest answers to the respondent

If a respondent's answer is not relevant to a question, do not suggest an answer by saying something like: "I suppose you mean that .... is that right?" Rather, you should probe in such a way that the respondent her/himself comes up with the relevant answer.

• Do not change the wording or sequence of questions

The wording of the questions and their sequence in the questionnaire must be maintained. If the respondent has misunderstood the question, you should repeat the question slowly and clearly. Provide only the minimum information required to get an appropriate response.

• Be patient with hesitant respondents
There will be situations where the respondent simply says "I don't know", gives an irrelevant answer, acts very bored or detached, contradicts something she/he has already said, or refuses to answer the question. In these cases you must try to re-interest her/him in the conversation. Spend a few moments talking about things unrelated to the interview (for example, about the town or village, the weather, etc.).

• Do not form expectations
You must not form expectations as to the ability and knowledge of the respondent. Do not assume, for example, that women from rural areas or those who are less educated or illiterate do not know about family planning.

• Do not hurry the interview
Ask the questions slowly to ensure that the respondent understands what is being asked, and give her/him time to answer.

• Do not make any promises
Make no promises like "CARE is planning to construct wells in your village" or similar.


4. Fieldwork

Field procedures and problems
Fieldwork should proceed according to a time schedule, and it is important that each interviewer understands and follows the field procedures and knows how to handle various problems that may be experienced during the fieldwork. In most quantitative surveys, clear instructions have to be prepared for the following procedures and problems:
a. Locating sample households
b. Problems in locating a household
   • No one at home at the time of the call
   • Assigned household inaccessible
   • The house is all closed up and neighbors say that no one lives there
   • The dwelling is non-residential
   • The dwelling is a building with several apartments
c. Identifying and interviewing eligible respondents
   • No eligible respondents
   • Eligible respondent not available
   • Respondent refuses to be interviewed
   • Interview not completed

Checking completed questionnaires
It is the responsibility of the interviewer to review each questionnaire when the interview is finished. Interviewers have to make sure that every appropriate question was asked, that all answers are clear and reasonable, and that the skip instructions were followed correctly. NEVER RECOPY QUESTIONNAIRES; this increases the chance of mistakes.

THE SUPERVISOR

1. The role of the supervisor
The main responsibility of the field supervisor is TO REDUCE NON-RESPONSE. Some other important responsibilities are (not all apply to all surveys):
• to direct and supervise the work of team members;
• to check that the selection of households has been done according to the method of selection developed for the survey;
• to observe several interviews and discuss their work with the interviewers after each survey day;
• to advise interviewers if problems arise;
• to contact local authorities just before the survey takes place in their areas, and to express thanks before leaving;
• to do a simple mapping of the villages;
• to publicize the survey well, explaining its purpose and meaning to the people;
• to keep safe all the questionnaires, equipment and supplies provided.

2. Reducing non-response
One of the most important duties of the supervisor is to try to minimize non-response. There are mainly three types of non-response:


Type 1: The interviewer is unable to locate the selected household.
Type 2: The interviewer is unable to locate the eligible respondents.
Type 3: The respondent refuses to be interviewed.

3. Monitoring interviewer performance
• Systematic spot-checking of household selection and/or composition
• Observing interviewers
• Evaluating interviewer performance. This may include returning to a small sample of households and asking a few key questions to verify that the questionnaire form was correctly recorded.

The Art of Questioning: An Exercise to Increase Awareness of How Questions Are Asked

A CARE employee has been requested to travel to a village and interview a group of farmers to learn about their rice production practices. Specifically, CARE wants to know more about the use of inputs (in particular improved seed and fertilizer) and the amount and use of marketable surplus which leads to increased household income. Listen to the questions the interviewer asks the farmers. After each question the group will discuss the characteristics of the question and note if there are faults with the way the question was formulated. If there are faults then corrections will be made by the group.

"Hello, I'm from CARE and I would like to ask you some questions about your rice production..."
Fault:
Corrections:

"I'm interested in details of your rice production activities. First, do you use inputs?"
Fault:
Corrections:

"Okay, could you describe in general terms the cycle of rice production in this area?"
Fault:
Correction:

"Are you continuing to use pesticides?"
Fault:
Correction:


"I don't think use of pesticides is environmentally responsible, would you agree?"
Fault:
Corrections:

"Do you attend meetings of the farmers association and take loans from it to purchase inputs?"
Fault:
Correction:

"What has been the extent of your use of di-dedra tetracycline (DDT) as a pesticide for corn stem borer?"
Fault:
Corrections:

"Are you not now using improved rice seed?"
Fault:
Correction:

"Do you think that if you use more pesticides your production of rice will increase?"
Fault:
Correction:

"What should be the duties of an agricultural extension worker?"
Fault:

"Has the CARE project been successful?"
Fault:
Correction:

"What would be the effect on you if the cost of pesticides were to rise?"
Fault:


Handout 5.17: QUANTITATIVE DATA ANALYSIS

1. Ways of looking at data
• Time series record developments over time, for example, monthly attendance at a health center.
• Cross-sectional data are snapshots which capture a situation at a moment in time, such as the percentage of farmers that use IPM (integrated pest management) at a given moment of time.

2. Nominal, ordinal and numerical scales and data

Nominal or categorical scales and data
Nominal or categorical data identify classifications. The answer is the "name" of the category into which the data fit. The numbers are arbitrary and have no inherent value.

Example: Nominal scales that produce nominal data
1. Circle respondent's gender:
   Female ................ 1
   Male .................. 2
2. Circle respondent's water source:
   Well .................. 1
   Piped water ........... 2
   River or pond ......... 3
   Other: ____________ ... 4

Ordinal scales and data
Categories can be sorted into a meaningful order, but differences between ranks are not necessarily equal.

Example of an ordinal scale that produces ordinal data
Have you ever attended school? IF YES: What is the highest level of school you attended?

NEVER ATTENDED...................1 PRIMARY INCOMPLETE...........2 PRIMARY COMPLETE..............3 SECONDARY...........................4 UNIVERSITY.............................5

Numerical scales and data
When differences between numbers have a meaning on a numerical scale, they are called numerical. Age, for example, is a numerical variable. Means and other statistical measures can be used to analyze and summarize this kind of data.

Example of numerical scales that produce numerical data
How many cows do you have?   __ __
And how many pigs?           __ __


Handout 5.18: TOOLS FOR QUANTITATIVE DATA ANALYSIS

a. Percentages and proportions
Percentages and proportions are widely used and known; they are one of the most important tools for quantitative data analysis. Proportions are expressed relative to 1, percentages in relation to 100. Put another way, a percentage is a proportion multiplied by 100.

TIPS:
• Include the number that you are basing the percentage on.
• Don't calculate percentages for fewer than 30 cases.

b. Tables

One-way tables or frequency distribution tables (using one variable)
Example: Percent distribution of women who do not want to use a contraceptive method in the future, by reason.

Reason                 Percent
Wants children            48.4
Lack of knowledge         16.8
Fear side effects         14.9
Leave it to nature        12.7
Husband opposed            5.7
Too expensive              1.5
Total percent            100.0

Two-way tables or cross-tabulations (using two variables)
Two-way tables or cross-tabulations are the basic tool to show the relationship between two variables.

Example: Percent distribution of currently married women who do not want to use a birth spacing method in the future, by reason, according to age.

                               Age
Reason                       <30      30+     Total
Wants children               66.1     37.7     48.4
Lack of knowledge             4.4      6.9      6.0
Husband opposed               4.3      5.4      5.0
Too expensive                 0.8      1.4      1.2
Fear side effects             9.2     18.4     14.9
Leave it to nature            9.5     14.5     12.7
Husband absent                0.0      1.2      0.7
Difficult to get pregnant     0.8      0.0      0.3
Other                         4.8     14.4     10.8
Total percent               100.0    100.0    100.0
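A cross-tabulation like the one above can be produced directly from raw records. This is a small illustrative sketch with hypothetical data (not the survey behind the table), using only Python's standard library:

from collections import Counter

# Hypothetical records: (age group, reason) pairs from individual respondents.
records = [("<30", "Wants children"), ("30+", "Fear side effects"),
           ("<30", "Wants children"), ("30+", "Leave it to nature"),
           ("30+", "Wants children"), ("<30", "Lack of knowledge")]

cells = Counter(records)                     # counts per (age, reason) cell
totals = Counter(age for age, _ in records)  # column totals per age group

# Percent distribution within each age group.
for (age, reason), n in sorted(cells.items()):
    print("%-4s %-20s %5.1f%%" % (age, reason, 100.0 * n / totals[age]))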


Independent and dependent variables
A variable is a characteristic that is measurable; it can be numerical or non-numerical. When data from surveys, attendance records or other sources are used to analyze behaviors, attitudes, opinions, perceptions and beliefs, some variables are used as "explanatory" or independent variables, which help to explain the result of a dependent variable. In the example below, the perception of change in the economic situation of the household is the dependent variable, which is analyzed by 4 independent or explanatory variables: age, gender, education and economic status. Note that the percentages have to be calculated in the direction of the independent variable. This is the most basic tool to compare and explain differences between subgroups of the target population.

Example: Percent distribution of respondents by perception of change in the economic situation of the household in the year preceding the survey, by background characteristics.

Background characteristics   Improved   No change   Worsen   Total percent
Age
  15 to 34                     15.6        50.9      33.4        100.0
  35 to 49                     14.3        40.0      45.6        100.0
  49+                          10.2        44.4      45.4        100.0
Gender
  Female                        9.7        39.8      50.4        100.0
  Male                         18.1        51.0      30.8        100.0
Education
  No schooling                  3.7        42.6      53.7        100.0
  Primary incomplete           15.9        47.5      36.7        100.0
  Primary complete+            20.3        42.9      36.8        100.0
Economic status
  Rich or well off             30.2        58.8      11.0        100.0
  Poor or very poor             6.9        39.8      53.3        100.0
Total                          13.5        44.8      41.7        100.0

c. Descriptive measures

Average or mean
The average or mean is used for numerical variables. It is obtained by adding all scores or responses together and dividing by the number of observations.

Mean (X̄) = ΣX / n
Σ = Greek letter sigma, meaning add or sum
X = each individual observation
n = total number of observations

Example: Calculating the mean
Name     Age
Ahmed     45
Peter     23
Ann       34
Maria     42
Alex      38

Mean age = (45+23+34+42+38)/5 = 36.4


Disadvantage of the mean: it is easily influenced by extreme values.

Example: Mean influenced by extreme values
Name     Income per month
Ahmed     75
Peter     40
Ann       85
Maria     60
Alex     780

Mean income = (75+40+85+60+780)/5 = 208

3,6,6,7,9,13,17 Example: Finding the median Name Income per month Ahmed Peter Ann Maria Alex

75 40 85 60 780

Income arranged in order: 40, 60, 75, 85, 780 Median = 75 Mean = 208 Example Mean and median amount (in US$) borrowed among respondents who borrowed in the year preceding the survey by gender . Gender Mean US$ Median US$ Female 108.6 40Male 186.1 60 Total 144.0 45 Measures of spread Mean and median give idea of center, but no idea of how dispersed or compact the distribution is. For example, the following groups of data have the same mean and median, however different range and spread. 8, 9, 10, 11, 12 (Range= 8 to 12) 0, 5, 10, 15, 20 (Range= 0 to 20) Measure of spread commonly used: standard deviation


Handout 5.19: GRAPHING DATA

Bar, pie and scatter graphs are the most commonly used graphs for presenting quantitative data. For two numerical variables, it is very useful to use scatter graphs in order to visualize their relationship.

Example: Positive relationship

[Figure: scatter graph of "Average monthly income in US$ by years of schooling in Region X"; x-axis: years of schooling (0 to 16), y-axis: average income (0 to 250).]

Example: Negative relationship

[Figure: scatter graph of "Mean number of children ever born among women aged 40 to 49 by number of years of schooling in Region X"; x-axis: years of schooling (0 to 10), y-axis: mean number of children (0 to 8).]


Example: No relationship

[Figure: scatter graph of "Monthly interest rate paid on last loan by years of schooling"; x-axis: years of schooling (0 to 10), y-axis: monthly interest rate paid (0 to 5).]
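Scatter graphs such as those described above can be produced with any charting tool. A minimal sketch using matplotlib (assuming it is installed, and with invented data) might look like this:

import matplotlib.pyplot as plt

# Invented data: years of schooling versus average monthly income (US$).
schooling = [0, 2, 4, 6, 8, 10, 12, 14, 16]
income = [40, 55, 70, 90, 110, 140, 170, 200, 240]

plt.scatter(schooling, income)
plt.xlabel("Years of schooling")
plt.ylabel("Average monthly income (US$)")
plt.title("Average monthly income by years of schooling in Region X")
plt.show()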


Handout 6.1: QUALITATIVE DATA

1. Characteristics and Use

Quantitative methods produce numerical data, and qualitative methods result in information which can best be described in words. Examples of qualitative data include descriptions of situations, events, people, interactions and observed behaviors; direct quotations from people; and excerpts or entire passages from documents, correspondence, records and case studies. "Qualitative methods focus on the signs and symbols that decode the reality seen by the target population."

Qualitative methods are iterative: there is an ongoing opportunity to revise interview protocols, guides, and observation record forms as a study progresses and new facts are brought to light. One of the most important sources of information for M&E of CARE projects is qualitative interviews. Projects are conducted in complex sociological, ecological, cultural and political settings. Such systems are noisy and not as amenable to quantitative procedures.

Comparison of Qualitative and Quantitative Data

QUALITATIVE                               QUANTITATIVE
Answers the question "Why?"               Answers the question "How many?"
A process of discovery                    Looks for evidence
Helps to understand what is going on      Measures
Interpretive                              Descriptive

Themes in Qualitative Methods

Naturalistic Inquiry
- the evaluator does not attempt to manipulate the program or its participants for the purposes of the evaluation; we want things to take their "natural" course, as opposed to experimentation
- particularly useful for studying variations in program implementation (i.e. what happens in a program often varies over time as participants and conditions change)

Inductive Analysis

- Qualitative methods are particularly oriented toward exploration, discovery, and inductive logic
- begin with specific observations and build toward general patterns
- qualitative analysis is guided not so much by hypotheses as by questions, issues and a search for patterns

Direct Contact With the Project (“Going Into the Field”)

- having direct and personal contact with people in the project in their own environments

Use of Case Studies

- depth and detail in qualitative methods typically derive from a small number of case studies
- case studies are useful when the evaluation aims to capture individual differences or unique variations from one program to another
- the more a project aims at individualized outcomes, the greater the appropriateness of qualitative case methods


Handout 6.2: WHEN TO USE QUALITATIVE METHODS

Process Evaluations aimed at understanding the internal dynamics of projects and their activities focus on the following kinds of questions

- what are the factors that come together to make this project what it is?
- what are the strengths and weaknesses of the project?
- how are beneficiaries brought into the project and how do they participate?
- what is the nature of CARE-Partner-Beneficiary interaction?

the need to generate an accurate and detailed description of program operations particularly lends itself to the use of qualitative methods

Evaluating Individualized (Target Group) Outcomes

matching project goods and services to the needs of individuals or target groups (e.g. - many educational projects place emphasis on the unique and individual needs of a child; often income differences make target groups different and thus require different needs)

reproductive behavior projects are a good example of the type of programs CARE implements where one may want to evaluate outcomes based on individual or group response

whenever one seeks behavioral change there will be differences among individuals and among various target groups

Case Studies

evaluating individualized outcomes (described above) is one area where case studies are used, but there are others

we use case studies to evaluate peculiar occurrences - unusual successes or failures, dropouts from projects, etc.

documenting pilot projects, especially the testing of interventions
detailed sampling schemes exist for selecting critical and relevant cases for study

Implementation (Formative) Evaluation

tests whether a project is progressing according to design
conducted for the purpose of improving programs
when outcomes (goals) are evaluated without knowledge of implementation, the results do not provide decision makers with information about what actually produced the outcomes, a "black box" approach to evaluation

an important way of evaluating implementation is to gather detailed, descriptive information about the project in order to answer questions such as:

- what's working and what's not
- what do beneficiaries in the project experience and what are their perceptions
- what services are being provided to target groups
- what do staff do to produce goods and services
- what is it like for beneficiaries to be part of the project

aimed at improving the quality of project activities and outcomes, not just levels of attainment, and judgements about quality often require qualitative data


Adding Depth, Detail and Meaning to Quantitative Analyses

produces new insights or at least forces people to think about old insights in a new way

adds description to why things are the way they are

Summative Evaluation
key questions which answer specific implementation questions, aimed at producing lessons learned


Handout 6.3: THE CASE STUDY

Case studies are detailed examinations of a relatively few persons or items. In most case studies, subjects are not chosen by a formal sampling process. Often subjects are self-selected or selected on the basis of relevant features. In the type of work CARE does, case studies typically focus on an organization, community or target group. Some common types of case studies are:

1. community studies: the unit of enumeration is the community and the number of units studied may be one or more;

2. trace studies: the study is based on recorded cases obtained from clinics,

social workers, health workers, village monitors, etc. The study involves tracing the individuals concerned and relating their history and background to the phenomenon that led to the making of the record;

3. pilot surveys: the testing of the methodology, questionnaire, and interview

techniques to be used in a larger survey on a few respondents chosen to represent a wide range of types;

4. detailed activity studies: focus on a particular behavior or activity of groups or

organizations, for example the study of labor inputs into cropping cycles or the study of personal hygiene habits in households;

5. supplementary surveys: for example, a village-level survey of nutritional levels

may be supplemented by detailed observation and questioning of a small sub-sample.

Strengths of Using Case Studies

• provides in-depth, detailed analysis

Guidelines: Interviews must be conducted by those skilled both in the subject matter and in the art of in-depth interviewing.


Handout 6.4: QUALITATIVE EVALUATION CHECKLIST

Qualitative methods are not valid for every evaluation situation or for the study of all evaluation questions. Many of CARE’s indicators are purely quantitative in nature, for example. The following is a checklist of questions which can be used to help decide if qualitative methods are an appropriate evaluation strategy. If the answer to any of the questions is yes, then the collection of some qualitative data is likely to be appropriate. 1. Does the project emphasize individualized outcomes, - that is, are different

participants (or target groups) expected to be affected in qualitatively different ways? And is there a need or desire to describe and evaluate these individualized outcomes?

2. Are decision makers interested in finding and understanding the internal dynamics of the project - project strengths, weaknesses, and the overall process that is (was) followed? This gets to what we call "lessons learned."

3. Is detailed, in-depth information needed about certain target group cases or project sites, for example, particularly successful cases, unusual failures, or critically important cases for programmatic, political, financial or other reasons?

4. Is there interest in focusing on the diversity among individual beneficiaries or target groups?

5. Is information needed about the details of project implementation, including items such as services, organization, management, partnering, staff activities, etc.?

6. Is there interest in formative evaluation (i.e. - finding out how to improve the project)? 7. Is there a need for information about program quality - descriptive information about

the quality of the project activities and outputs, not just levels, amounts, or quantities of program activity and outcomes?

8. Are decision makers or donors interested in having evaluators conduct project site visits so that evaluators can be the surrogate eyes and ears for decision makers and donors who may be too busy to make such visits themselves?

9. Is the obtrusiveness of evaluation a concern? Will the collection of qualitative data generate less negative reaction among project participants? This applies often, for example, for evaluating changes in income.

10. Are the goals of the project vague, general, and nonspecific, indicating the possible advantage of using certain goal-free approaches?

11. Is there a possibility that the project may be affecting beneficiaries in unanticipated ways (both positive and negative), so that a discovery method is needed?

12. Is the evaluation exploratory? Is the project at a pre-evaluation phase, where goals and project content are not fully realized?

13. Is there a need to add depth, detail, and meaning to statistical findings or survey generalizations?

14. Has the collection of quantitative evaluation data become so routine that no one pays much attention to the results anymore, suggesting a possible need to break the old routine and introduce something new?


Handout 6.5: NATURE OF QUALITATIVE DATA

General Nature
In some senses, all data are qualitative; they refer to things about people, objects, and situations. We have a "raw" experience which is converted into words ("His face is flushed."..."He is angry.") or into numbers ("Six voted yes, four voted no."..."The thermometer reads 74 degrees."). But qualitative data are in the form of words - that is, language in the form of extended text. (Qualitative data can also appear as still or moving images.) The words are based on observation, interviews, or documents (or, as others have put it, "watching, asking, or examining"). These data collection activities typically are carried out in close proximity to a local setting for a sustained period of time. This results in three basic kinds of data collection: (1) direct observation; (2) in-depth, open-ended interviews; and (3) summarizing or extracting from written documents, including such sources as open-ended written items on questionnaires, personal diaries, project evaluations, and program records, to name only a few.

The data from open-ended interviews consist of direct quotations from people about their experiences, opinions, feelings, and knowledge. The data from observations consist of detailed descriptions of program activities, participants' behaviors, staff actions, and the full range of human interactions that can be part of program experiences. Document analysis yields excerpts, quotations, or entire passages from records, correspondence, official reports, and open-ended surveys.


Handout 6.6: QUALITATIVE DATA COLLECTION

INTERVIEWS

I. QUALITATIVE INTERVIEWS
One of the most important sources of information for needs assessment is qualitative interviews. They are the most useful tool for understanding the felt needs, perspectives, attitudes, and beliefs of the target population. Qualitative interviews can also be used to generate ideas and themes which can then be tested on a wider population using a structured questionnaire.

IA. Types of Qualitative Interviews

1. Informal, Conversational Interviews
The interviewer has complete freedom to explore a broad range of subjects with the respondent. Issues can be explored further as they emerge. The interviewer generally takes very few notes. This is more than just a casual conversation: the interviewer has a purpose in mind and must control the conversation to serve that purpose.

Limitations:
1. time consuming

2. the conversation can become unfocused and wander in circles
3. information gathered from one respondent may not be comparable to that from another
4. highly susceptible to "interviewer effect"

Strengths:
1. allows a wide range of issues to emerge
2. more may be revealed than in a more formal setting
3. especially useful in diagnostic surveys

2. Topic-focused Interviews (Topical Interviews)
Topic-focused interviews are conducted using an interview guide, also called a topical outline, which lists the main topics and sub-topics to be explored. The interviewer, however, uses her/his judgment about how to use the guide so as to permit a smooth flow of discussion. Although the guide is used, it does not prohibit exploration of other topics.

Advantages:
1. Since interviewers cover the same topics, the information is more comparable.
2. Subjects stay within the context of interest, which saves time.

Example of an Outline for a Topic-focused Interview
A project is promoting use of improved millet that requires the application of a new fertilizer. Monitoring data from the project suggest that use of the new seed and fertilizer is not increasing as rapidly as expected. The project staff decide to interview farmers and ask them why they are or are not using the new technology. The interview guide would list topics and sub-topics similar to the following:

1. The farmer's understanding of the composition of the technical package. (What is it all about? What does it involve? What does one have to do in order to adopt it?)

2. The farmer's perceptions regarding the advantages and costs of the technical package. (What may be gained by using the new millet variety? Is it economical (profitable)? What are the risks? How much investment does it require? How easy or difficult is it to obtain the new millet seed and fertilizer?)

3. The farmer's opinion about the relevancy of the new technology. (Are there constraints on land, credit, or labor that prevent adoption?)

4. The farmer's views regarding the availability of services. (Is there somebody to show them how to use the new technology? Do they have confidence in these services?)

5. The farmer's assessment of the risks involved. (Is the new variety of millet seen as less reliable than the traditional variety? Are there consumption, preparation, or taste problems?)

6. The farmer's assessment of the potential rewards. (Is he interested in increasing production? What would he do with any surplus production?)

Background information. (Availability of labor in the HH, cropping patterns, etc.)

3. Semi-structured, Open-ended Interviews
Semi-structured, open-ended interviews are the most structured form of qualitative interviews. They use an open-ended questionnaire which lists the specific questions to be asked. They are similar to interviews conducted for structured surveys but differ from them in three ways.

1. Semi-structured interviews use open-ended questions; respondents are encouraged to express themselves fully rather than respond to predetermined options.

2. The sequence (order) of the questions is not predetermined and the interviewer has control over which questions are asked and in what order.

3. The interviewer can ask additional questions to explore topics further.

Strengths:

1. The information obtained specifically answers certain questions that project managers wish to address.

2. The information from different interviewers is comparable enough to generate simple frequencies, although the main emphasis will still be placed on an in-depth understanding of the respondents.

3. Compared to other qualitative interviews, success is less dependent on the interviewer's communication skills and experience.

4. Can be conducted faster than the other types of qualitative interviews.

5. General Limitations of Qualitative Interviews
1. They do not generate quantitative data that can be summarized to make general statements about the population. For example, it is difficult to say that 60% of the farmers are satisfied with the existing extension services.

2. It is difficult to use qualitative interviews based on probability samples. This means that the selection of respondents is often biased. One common mistake is that interviewers use respondents of higher social or economic status.

3. Findings can easily be based on the biases of the interviewers. It is common for interviewers to hear more of the information that conforms to their own thinking or opinions.

6. Guidelines for Qualitative Interviews

Initial Contact: very important; establish the basis for a comfortable interview; appearance, style, manner of introduction, clothing, gender can all make a difference


Sequencing of Questions: polite to begin with general conversation; can volunteer information about yourself and the project; begin with simple questions

Wording of Questions: questions must be posed in an understandable way; language and expressions are obviously important; phrase questions in a way that encourages detailed responses (generally avoid yes or no questions); do not pose two or more questions simultaneously

Example: Form of Questions Leading to Varying Responses

Yes or no response: Have you heard of the extension services operating here?
Detailed response: What do you know about the extension services available in this area?

Yes or no response: Do you think that if you use _______ your production of millet will increase?
Detailed response: What is your opinion about the risks and benefits of using _______ in producing millet?

Yes or no response: It is sometimes said that those marketing seeds only deal with the large farmers. Is this true?
Detailed response: What is your view regarding how seeds are marketed in this area?

Yes or no response: Has the CARE project been successful?
Detailed response: What effects has the CARE project had on you and your family? On your neighbors?

Yes or no response: Do you have difficulty in obtaining improved millet seed?
Detailed response: Please describe how you go about obtaining improved millet seed.

Yes or no response: Would you grow more millet if the government raised the price of millet?
Detailed response: What would be the effect on you if the government raised the price of millet?

Role Playing: role playing can be very effective; let the respondent assume the role of somebody else (e.g., asking a farmer "What should be the duties of an agricultural extension worker?", or "Suppose somebody wanted access to family planning services, where would they go?"); the interviewer can also assume a role ("What advice would you give me to improve the marketing of millet seed?"); don't ask people to assume roles with which they cannot readily identify

Vignettes: respondents are given imaginary people in imaginary situations and asked to decide what those people should do.

Example:
a) Suppose a poor community needed to obtain a teacher for its school. Do you think that members of the community should have to collectively pay to hire the teacher?
b) Suppose one or more households could not contribute. Do you think their children should be allowed to attend school?
c) Do you think everyone in the community should contribute the same amount, irrespective of their income? Irrespective of whether or not they have children who would be attending school?

Probing: essential skill for effective interviewing; often necessary to encourage the respondent to be more specific; use encouraging words or nods of the head; think of follow-on questions that begin with How, Why, Where, etc.

Controlling Conversations: some respondents get off the point and start giving you opinions on everything; try non-verbal communication (e.g., quit nodding your head) or interrupt politely

Neutral Attitude: avoid giving the impression of having strong views on the subject; sometimes you can state both sides of an issue to demonstrate your neutrality (e.g. - loan pay backs)

Recording the Interview: the use of a tape recorder is optimal for recording but may inhibit some respondents; have a second person take good notes; ask before recording and reassure respondents that information is confidential; write up the interview as soon as possible

II. GROUP INTERVIEWS
There are two general types of group interviews: community interviews and focus-group interviews. Community interviews generally use an interview guide and have more than 15 participants. Usually only a small number of questions are asked, and each participant is not expected to answer all the questions individually. An interdisciplinary team is often used to ask the questions. These types of interviews are often exploratory.

A focus-group interview is conducted with a small group, usually 6-10 participants. One of its distinguishing features is that respondents discuss ideas, issues, insights, and experiences among themselves. If properly run, a focus-group interview can provide very rich detail on a subject. Participants are selected based on established criteria.

There are three main reasons why group interviews may be preferable to individual interviews:

1. Group interviews allow you to gather information in a fast and cheap (economical) manner. One can interview 8-10 people in one or two hours, whereas interviewing 8-10 people individually may take two days.

2. Group participation often reduces individual inhibitions and may reveal information not otherwise revealed. Of course, the opposite may also occur, depending on the subject matter.

3. Information from a group interview can represent a consensus (or average) and be more representative of the general population, especially if structured formats are used and the group has to agree on a common answer. In group interviews, respondents are reluctant to give wild or inaccurate answers.

Example: Group Interview for Generating Quantitative Data
In a research project in Costa Rica, a structured questionnaire was used in combination with a topical outline for interviews with 860 communities to generate community-level statistics. Below are some examples of the questions used.

7. What is the daily wage of an agricultural worker in this area? _____
   7.1 For how many hours? _____
   7.2 Does this include:
       Food:     Yes ___  No ___
       Housing:  Yes ___  No ___
       Land:     Yes ___  No ___

9. What are the three main crops grown? __________ __________ __________
   9.1 Which is the most important? __________
       The second most important? __________
       The third most important? __________
   9.2 How much is sold commercially?
       Crop      Almost all    More than half    Less than half    Little or none
       1
       2
       3

14. Where do people generally go to buy the things they cannot buy here locally?
    Community Market __________
    District Market __________
    Provincial Market __________
    Other __________
    14.1 How do they go there? __________
    14.2 How long does it take? __________

The qualitative data generated through community interviews can be aggregated in two ways. First, individuals can be treated as cases. For example, suppose ten group meetings are attended by a total of 200 farmers. If 80 farmers say that they grew improved millet seed, it can be reported that 40% of the farmers interviewed grew improved millet seed. Second, each group can be treated as a case. In the same example, you may be able to report that in four out of ten villages the majority of farmers are using improved millet seed. (A short code sketch following the list below illustrates both approaches.) Be careful, however, about how you evaluate aggregated findings generated by community interviews. They have some validity only if certain conditions are met, for example:

1. Participants must be representative of the target population.
2. Group processes must not inhibit free expression of feelings or preferences.
3. The questions must not be politically or culturally sensitive.
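For illustration only, here is a minimal Python sketch of the two aggregation approaches described above, using made-up meeting records (the field names and numbers are hypothetical, not from the handout):

# Hypothetical records from a few community meetings: how many farmers attended
# and how many said they grew improved millet seed (illustrative numbers only).
meetings = [
    {"village": "A", "attended": 25, "grew_improved": 15},
    {"village": "B", "attended": 18, "grew_improved": 5},
    {"village": "C", "attended": 22, "grew_improved": 12},
]

# Approach 1: treat individual farmers as cases.
total_farmers = sum(m["attended"] for m in meetings)
total_growers = sum(m["grew_improved"] for m in meetings)
print(f"{100 * total_growers / total_farmers:.0f}% of farmers interviewed grew improved seed")

# Approach 2: treat each group (village meeting) as a case.
majority = sum(1 for m in meetings if m["grew_improved"] > m["attended"] / 2)
print(f"In {majority} of {len(meetings)} villages a majority of farmers grew improved seed")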

III. FOCUS-GROUP INTERVIEWS
A focus-group interview is a type of group interview, but there are some important differences. Focus-group interviews are difficult to conduct and require a skilled moderator. They are much more than simply a conversational interview with a group. Other differences include: optimal group size; selection of respondents; their expected contribution; and the nature of spontaneous reaction.

A focus group is commonly used when you want to explore ideas, reactions, or recommendations from a collection of respondents whom you know have information and opinions about the focus-group topic. You use purposeful sampling to select members of the focus group based on a set of criteria that makes the group similar or homogeneous (based on, for example, age, sex, interests, income, or employment). The criteria depend on the discussion topic.

You need a skilled moderator to guide the discussion to ensure that:

• the discussion does not stray from important topics
• issues are explored in detail
• some members don't talk too much and others too little
• points are made clear

You also need a recorder to take notes about what is being said and to observe respondents' behaviors during the session. Tape recorders are recommended for recording information that can later be transcribed. Focus groups provide qualitative data about how the group members feel about a given topic; the results cannot be generalized to the general population and are not statistically valid. It is important to transcribe and code your data as soon as possible after the interview.


Strengths:

• useful for:
  - identifying problems and their causes
  - developing messages for communication or social marketing strategies
  - understanding felt needs
  - determining areas on which to focus further data collection activities

• information can be gathered efficiently, since you know group members will have knowledge and opinions about the topic

• exploring behaviors, beliefs, and attitudes

Weaknesses:

• cannot generalize findings, because members are not selected based on mathematical probability. Nor should you try to quantify responses or generalize them to the population. For example, it is inaccurate to say that because 25% of the focus group reported using a family planning method, 25% of the population uses a family planning method.

• takes a skilled moderator

Guidelines:

Interview Guide: the only type recommended is a short checklist of topics, less structured than other guides; one of the primary objectives is to explore each topic in greater depth than other interviews allow; the topics that generate the real interest and excitement may not have been anticipated.

In a focus group on use of a specific family planning method, one of the participants may make a casual remark that more people would use a method if the information were made more available. This may lead to a discussion of social marketing for family planning or other styles of communication and information dissemination.

Size & Composition of the Group: optimal size generally ranges from 6-10; in smaller groups people often feel more pressure; larger groups leave too little time for individual expression; the range of socio-economic backgrounds and social status should be considered; better if the members do not know each other well

Selection of Group: consult key informants who are knowledgeable about local conditions; select a subset from a long list; try to include diverse participants

Seating, etc.: use a round table or circle when convenient; avoid large crowds by not holding the session in the open; should not exceed 2-3 hours

Controlling Flow: don't let a few dominate discussions; minimize group pressure (important to let those with a minority viewpoint speak); set ground rules early; encourage the group to consider alternatives if consensus is reached too quickly


Handout 6.7: OBSERVATION

Observation is the systematic method of watching or observing people’s behavior or other phenomena and recording the results. It not only includes sight, but also smell, feelings, touch, and other sensory perceptions. The data collector is the observer who carefully watches or senses what others are doing. Some examples include:

• mothers feeding their children
• farmers applying fertilizer
• consumers making purchases at the market
• grain storage facilities
• women interacting while drawing water from a standpipe

You can collect both qualitative and quantitative data using observation. When you use a structured observation instrument, the observer records events as being present or absent, having occurred or not occurred, or how many times they occurred. These result in quantitative data. When using an unstructured instrument, the observer takes extensive notes about what he or she sees or senses. Later these notes are organized and the event is described. This usually results in qualitative data.

There are two general types of observation: participant and non-participant.

Participant Observation is relatively rare in development assistance projects. The observer lives with and takes part in the daily activities of the subjects. The participant observer may help take care of the children, cook, work in the fields, or do some other activity with the people being observed. Ethnographic studies often employ this technique. Those being observed generally do not know the extent to which the observer is studying their behavior. That is important because you want them to go about things as they would if the observer were not there. Also, the observer needs to record the observations out of the sight of those being observed. When people know they are being watched, they tend to act differently. This can give you unreliable data. What steps do you take and what do you say to get the subjects to let the observer live with them? You can explain that you want to learn more about the community.

Non-participant Observation does not require the observer to live with the subjects. The observer often uses some sort of observation guide to make and record observations.

The major advantage of observation is that the data are generally very reliable, because observation is unobtrusive. It gives a clearer picture of the "real" situation. It is harder to get a "real" picture with interviewing techniques because people may not answer questions truthfully. There are, however, a couple of disadvantages you should consider before deciding to use observation: cost, and the objectivity of the observer. Some observation methods, especially participant observation, require that observers spend long periods of time collecting information. This can cost a lot of money in terms of time and effort. Calculate and compare the cost of conducting observation to the value of the information you expect to get. Does the information obtained justify the cost?


Another drawback is that the reliability of the data is dependent on the objectivity of the observer.

Field Visits. Project managers often use an informal type of observation during field visits. They keep a mental checklist of events they wish to observe. After the visit they recall the observations and record them or discuss them with staff. Another observation technique is photography. You can take before and after photos of some phenomenon and examine them to determine whether or not change took place.

OBSERVATION SCHEDULE FOR PARTICIPANT OBSERVATION
(USE ONLY ONE FORM PER DAY)

Name of Observer ________________________  Date _________________
Name of head of household _________________________________________
Number of people living in household _________________________________

1. How many people did you observe today eating breakfast? _______
2. How many people did you observe today washing their hands with soap, citron, or ash before they ate breakfast? _______
3. How many people did you observe today eating dinner? _______
4. How many people did you observe today washing their hands with soap, citron, or ash before they ate dinner? _______
5. How many people did you observe today using their latrine? _______
6. How many people did you observe today washing their hands with soap, citron, or ash after using the latrine? _______
7. How many trips to get water did you observe today? _______
8. In how many of these trips did you observe someone in your household using a clean water vessel? _______
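As an illustration of how a structured observation instrument like the schedule above yields quantitative data, the Python sketch below tallies a few hypothetical daily forms into a simple hand-washing indicator (all numbers and field names are invented, not from the handout):

# Hypothetical daily observation forms, one dict per observer-day,
# mirroring questions 1 and 2 of the schedule above.
forms = [
    {"ate_breakfast": 5, "washed_before_breakfast": 3},
    {"ate_breakfast": 4, "washed_before_breakfast": 4},
    {"ate_breakfast": 6, "washed_before_breakfast": 2},
]

# Structured observation yields counts, so a simple indicator can be computed:
# the share of observed breakfast-eaters who washed their hands first.
observed = sum(f["ate_breakfast"] for f in forms)
washed = sum(f["washed_before_breakfast"] for f in forms)
print(f"{100 * washed / observed:.0f}% of observed people washed hands before breakfast")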


Handout 6.8: CHOOSING A SAMPLE: THE LOGIC OF PURPOSEFUL SAMPLING

Purposeful sampling for qualitative data is different from the probabilistic sampling common for quantitative data, which relies heavily on statistics. The power of quantitative data depends on selecting a truly random and representative sample, which permits confident generalization from the sample to the larger population (usually the target group, a village, or a class of people such as wealthy or poor, farmers or non-farmers, etc.). The power of purposeful sampling for qualitative data lies in selecting information-rich cases for in-depth study. Information-rich cases are those from whom one can learn a great deal about issues of central importance to the evaluation. For example, if the purpose of an evaluation is to discover how a project is affecting groups of lower socio-economic status, one can learn more by focusing on understanding the needs, interests, and incentives of a small number of carefully selected poor families than by gathering information from a large, random, and statistically significant sample.

Strategies for Selecting Purposeful Samples
1. Extreme or Deviant Case Sampling
2. Maximum Variation Sampling
3. Homogeneous Samples
4. Typical Case Sampling
5. Critical Case Sampling
6. Snowball or Chain Sampling
7. Criterion Sampling
8. Confirming or Disconfirming Cases
9. Convenience Sampling
10. Random Purposeful and Stratified Purposeful
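The following sketch is only an illustration of how a few of the strategies listed above (criterion, random purposeful, and snowball sampling) might be applied to a list of candidate cases; the data structure and selection criterion are hypothetical, not from the handout:

import random

# Hypothetical list of candidate cases drawn from project records.
families = [
    {"id": 1, "income_quintile": 1, "female_headed": True,  "referred_by": None},
    {"id": 2, "income_quintile": 4, "female_headed": False, "referred_by": None},
    {"id": 3, "income_quintile": 1, "female_headed": False, "referred_by": 1},
    {"id": 4, "income_quintile": 2, "female_headed": True,  "referred_by": 3},
]

# Criterion sampling: keep every case that meets a preset criterion
# (here, families in the poorest income quintile).
criterion_sample = [f for f in families if f["income_quintile"] == 1]

# Random purposeful sampling: if the criterion still yields more information-rich
# cases than can be studied in depth, draw a small random subset.
subset = random.sample(criterion_sample, k=min(2, len(criterion_sample)))

# Snowball (chain) sampling: start from one known information-rich case
# and follow referrals to other cases.
start_id = 1
chain = [f for f in families if f["id"] == start_id or f["referred_by"] == start_id]

print([f["id"] for f in criterion_sample], [f["id"] for f in subset], [f["id"] for f in chain])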


Handout 6.9: QUALITATIVE DATA ANALYSIS AND INTERPRETATION

Analysis is the process of bringing order to the data: organizing what is there into patterns, categories, and basic descriptive units. Analysis begins even while data are being collected. The evaluator should become particularly sensitive to looking for alternative viewpoints and contrary patterns. Interpretation involves attaching meaning and significance to the analysis, explaining descriptive patterns, and looking for relationships and linkages among descriptive units.

The most serious and central difficulty in the use of qualitative data is that methods of analysis are not well formulated. The analyst faced with a stack of qualitative data has few guidelines for protection against self-delusion, let alone against the presentation of unreliable or invalid conclusions to scientific or policy-making audiences.

The evaluator has two primary sources to draw from in organizing the analysis:

1. the evaluation framework (questions) that were developed during project design (this could include indicators and key questions) and

2. analytic insights and interpretations that emerge during data collection.

Steps in qualitative data analysis

1. Focus the Analysis
- focus comes from the project design: indicators and key questions
- negotiate early around the purpose of the evaluation

2. Organizing Qualitative Data for Analysis

- data are usually voluminous, and sitting down to make sense out of pages of data can be overwhelming

- first thing to do is make sure it's all there!
- transcribe the interviews
- make a copy of the data (you will need to cut and paste later)

a. Qualitative Description

- pure description of the project and experiences of people in the project, often written in narrative form

- purpose of the description is to let the reader know what happened in the project and what participants experienced

- evaluator tries to look for the typical project experience based on his/her simple interpretation

b. Case Analysis

- cases can be people, target groups, critical events, communities, project sites, etc., but the case is the basic unit of analysis

- first step is to pull together the data relevant for each case and to write a descriptive, holistic case study; sometimes the entire analysis is only one case study

- at the individual level, case data can include clinical records, background information, interviews, observations

- at the project level, case data can include project documents, project reports, interviews with beneficiaries and staff, observations of the project, and project histories.


- analysis starts with writing a case record which pulls together and organizes the data; information is edited, redundancies are pulled out, and parts are fitted together

c. Content Analysis

- content analysis involves identifying coherent and important examples, themes and patterns in the data

- look for quotations or descriptions that closely match and that represent a theme, issue, problem, or concept

- begin by reading through field notes, records, interviews, case studies, etc., while writing comments in the margins indicating what can be done with the different parts of the data; this is the beginning of organizing data into topics (like constructing the index for a book or labels for a file system)

- where more than one person is working on the data it is important to have each do their own content analysis and then compare
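As a rough companion to the margin-coding step just described, the Python sketch below tags excerpts with themes using hypothetical keyword lists; in practice the analyst's judgment, not keyword matching, drives content analysis, so this is only a first-pass illustration:

# Hypothetical theme keywords; a real coding scheme comes from the indicators,
# key questions, and the analyst's reading of the data.
themes = {
    "health": ["clinic", "sick", "doctor", "disease"],
    "income": ["money", "price", "wage", "savings"],
}

excerpts = [
    "The health clinic is too far away.",
    "Prices are going up all the time but our income stays the same.",
]

# Tag each excerpt with every theme whose keywords appear in it.
for text in excerpts:
    matched = [t for t, words in themes.items()
               if any(w in text.lower() for w in words)]
    print(matched or ["uncoded"], "-", text)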

d. Inductive Analysis

- inductive analysis means that themes, patterns or categories emerge from the data rather than being decided prior to data collection and analysis

two kinds of patterns can emerge from data:

- analyst can use the categories developed by people in the project (indigenous typologies)

Example: hamburgers. Hamburgers can vary a great deal; there are many ways to prepare them or to add to them, and yet they are still called hamburgers. However, when a piece of cheese is added to the meat, we call it a cheeseburger. The task for the evaluator is to find out what separates "hamburger" from "cheeseburger" - that is, how people construe their world from the way they talk about it.

Example: evaluating a project which aims at keeping girls in school longer. In observations and interviews it became important to understand the way teachers categorized students. With regard to the problem of dropouts, teachers labeled young girls as either low-motivated or victims. The low-motivated students were ones who would not have continued school regardless of HH circumstances, because they were not motivated and did not value education. The victims were students who sincerely wanted to learn but whose parents could not afford to keep them in school or did not value education. It was important in the project to understand the differences between the two groups.

- analyst can develop his/her own terms based on interpretation of the data (analyst-constructed typologies)

The primary purpose of typologies is to describe and classify.

(See page 150, How to Use Qualitative Methods in Evaluation)

e. Logical Analysis
One of the most important sources of information for M&E of CARE projects is qualitative interviews. Projects are conducted in complex sociological, ecological, cultural, and political settings. Such systems are noisy and not as amenable to quantitative procedures.


Analysis: the challenge is to separate description from explanation.

Coding: Lofland provided a classification of 'social phenomena' which can usefully be employed as the basis for a coding scheme:

1. Acts. Action in a situation that is temporally brief, consuming only a few seconds, minutes, or hours.

2. Activities. Action in a setting of more major duration - days, weeks, months - consuming significant elements of persons’ involvement.

3. Meanings. The verbal production of participants that define and direct action.

4. Participation. Persons' holistic involvement in, or adaptation to, a situation or setting under study.

5. Relationships. Interrelationships among several persons considered simultaneously.

6. Settings. The entire setting under study conceived as the unit of analysis.


Handout 6.10: FIELD RESEARCH FOR QUALITATIVE STUDIES

Significant skills and experience are required in:
• Listening techniques
• Interviewing
• Ability to handle group dynamics
• Ability to reflect and summarize
• Open attitude
• Excellent interpersonal skills
• Ability to adapt on the spot
• Analytical ability


Handout 6.11: CODING AND ANALYSIS

Coding is often a difficult step in the analysis of qualitative data and is time consuming. A common way to code qualitative data is to let the indicators determine categories, then assign codes to strings of quotes or descriptions that belong to a category. You will want to code close-ended or structured responses when the instrument is made. Code open-ended responses, especially when there is a lot of qualitative data, after data reduction. Transcribe qualitative data from field notes or cassettes to summary tables or a computer, then code and categorize the data.

Matrices are popular summary tables for qualitative data. In the matrix below, quotes are organized by who said them (project manager, farmers, and field staff) and what they said about pesticides, training, and technical assistance.

SAMPLE SUMMARY MATRIX

Pesticides
  Project Manager: "Pesticides from local markets are not good choices"
  Farmers: "Made me sick"; "I got a rash"
  Field Staff: "Farmers need more training in safety measures"

Training
  Project Manager: "Trainers too directive"
  Farmers: "Too short"
  Field Staff: "Too much written material"

Technical Assistance
  Project Manager: "Field staff need more training"
  Farmers: "Very helpful"; "Increased yield"
  Field Staff: "Need motorcycles to go to the field"

To analyze qualitative data, you want to get percentages, rates, or frequency counts while still not losing the quotes or descriptions. Here is one way to do it:

• Develop categories based on indicators or key questions.
• Assign qualitative data such as quotes, descriptions, or summaries to the appropriate category.
• Calculate values by counting, for example, how many people responded a certain way or behaved a certain way.
• Use actual quotes or descriptions to support the values.

Example:

Key Question: Why don’t some families of the project grow trees on their land?

Indicators:
• Percentage of farm families who say they don't have enough money to purchase cuttings.
• Percentage of farm families who say they don't want to take the risk of growing trees.
• Percentage of farm families who say they are afraid the government will take their trees once they are productive.
• Percentage of farm families who give another reason.

Then organize these categories into a matrix and list selected quotes under each category.


ANALYZING QUALITATIVE DATA

Investment
  "I have to pay school fees"
  "My daughter's getting married next year"

Risk
  "What if the trees die?"
  "Trees die if they don't have water"

Government
  "I don't trust the government"
  "My brother's trees were cut by soldiers"

Other
  "Too busy"
  "Nobody to do the work"

Count the number of respondents with quotes in each of the categories. Let's say we counted quotes and came up with the following numbers.

Investment = 55 respondents
Risk = 22 respondents
Government = 35 respondents
Other = 9 respondents

TOTAL = 121 respondents

To calculate indicator values, figure the percentages: divide number of respondents in each category by the total respondents (121) and multiply by 100:

45.5% of respondents say they don't have enough money to invest in trees.
18.2% of respondents say they don't want to take the risk of growing trees.
28.9% of respondents say they don't trust the government enough.
7.4% of respondents give other reasons.
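The indicator values above can be reproduced with a few lines of Python; the category counts come from the example, while the code itself is only an illustration:

# Counts of respondents per coded category, taken from the example above.
counts = {"Investment": 55, "Risk": 22, "Government": 35, "Other": 9}

total = sum(counts.values())  # 121 respondents in all

# Indicator value for each category: share of all respondents, in percent.
for category, n in counts.items():
    pct = 100 * n / total
    print(f"{category}: {pct:.1f}% of respondents ({n} of {total})")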

To answer the key question, we can say that our information suggests that the primary reason families don't grow trees is that they don't have enough money to invest. This information is important and useful. However, the quotes from families add a whole other dimension of information that can help us decide what to do about promoting tree production on farmers' fields.

PRESENTATION OF QUALITATIVE DATA
One of the simplest ways of presenting qualitative data is using a matrix similar to many tables used to present quantitative data. There are two basic types: simple and compound.

SIMPLE MATRIX

Key Question: Why don’t mothers bring their children to be immunized?

Health center is too far (10%)
  "It's a three hour walk"
  "I can't afford transport"
  "I can't take a full day"

Nurses treat mothers badly (30%)
  "She acted like I was stupid"
  "They made me wait six hours"
  "She just grabbed the baby"

Mothers did not know about services (55%)
  "I have never heard of this thing"

Other (5%)
  "Clinic is dirty"
  "Needles cause AIDS"
  "The children have to work"

A compound matrix has two sets of categories. One is listed horizontally and the other is listed vertically. Place data in the cells that go with each set of categories.

COMPOUND MATRIX

Key Question: Why don't mothers bring their children to be immunized?

Columns (Reasons): Health center too far away (35%) | Nurses treat mothers badly (30%) | Mothers did not know (15%) | Other (20%)
Rows (Mother's Province): North (22%) | South (18%) | East (27%) | West (33%)
(Quotes go in the cell where each province row meets each reason column.)

Interpretation is an excellent opportunity to bring together key stakeholders to examine the data and make judgements about goals, outputs, and activities, or key questions. Qualitative data are often richer and more helpful than quantitative data. For example, specific quotes from respondents about why they don't use credit are more meaningful than the number of respondents who say they don't use credit "because it is too expensive." The advantage of qualitative data becomes a disadvantage during data analysis. Quotes from 100 different people about why they don't use credit are much more difficult to make sense of than 100 responses to a set of closed questions about using credit.

Analysis of qualitative data is a lot easier if you focus the data and determine how to organize them ahead of time. For example, if you want to know why some families don't want to grow trees, you develop several indicators for this question. Indicators reflect your guesses about why families don't grow trees, or they come from a first-order analysis of your data.


Handout 6.12: QUALITATIVE DATA ANALYSIS EXERCISE

Below are quotes from field studies which were conducted during an evaluation to answer a key question about community problems. Code the data according to the themes or categories that you find. Next, construct a compound matrix to show the results, including percentages for each category you developed. Hint: first code the data in the margins according to the major themes you discover. If something is mentioned only once, it probably does not represent a theme, and you may want to put it into a category called "Other."

Key Question: What do members of the community feel is their key problem?

VILLAGE A

"We need education for all of our children. How are they supposed to get jobs if they have no education?"
"Our farm land is getting worse. Twenty years ago we had twice the rice production we have today."
"My two daughters want to go to school but the nearest school is over twenty kilometers, and besides, I need them to help me here in the home."
"When people get sick they just die. This has got to stop."
"If I only had some savings I could buy more land and other things for my family."
"People round here don't have enough money to buy food and things."
"We need building materials to strengthen our houses."
"The health clinic is too far away."
"Many children are sick since the flooding last year. We need our children to be healthy."
"I sure wish the extension services would show us how to grow more and better food."
"I want to send all of my children to school so they can go to Phnom Penh and get good jobs."
"If I only had some savings I could buy more land and other things for my family."
"Prices are going up all the time but our income stays the same. We have no money to purchase oil and other things for cooking."
"My rice crop has failed two years in a row. I can't understand why."
"We barely grow enough to feed our family, much less have surplus to sell for cash."
"You see that people are sick and our children are skinny. What can we do about this?"
"The nurses at the clinic do not know anything about diseases."


"Our only teacher left last year to get a better job in the city and we need a new teacher."
"There is so much fighting around here nobody feels safe."
"We need money for food and better shelter."
"Many women die while giving birth and I think this is a serious problem."
"We need higher incomes so that we can provide for our families better."
"People here need to find employment in the dry season to earn money."
"We need a new clinic with free medicines."
"My children have diarrhea almost every week."
"I want to have my leg fixed. It was broken two years ago and has hurt since."
"The land is not as fertile as it once was and we need to use more fertilizer."
"Our biggest problem is lack of good health services."
"My son graduated last year from school but now wants to go to university. I don't have enough money to send him to university in Phnom Penh so I don't think he will go there. Maybe it's better if he stays here and helps with the farm."
"People die because they can't get to a doctor. It's a real shame."
"Nobody's health here is very good."
"We need more money to purchase food and stay in good health."
"You see here in my house that I have nothing. I have no money to buy my kids shoes. What can I do?"

VILLAGE B

"If I only had some money I could buy more land and other things for my family."
"We need water for our crops. Last year the irrigation canal was almost dry and many farmers could not grow good crops."
"We could use a new school. Our school is very crowded and children have to sit on the floor where it's dirty."
"If I only had some savings I could buy more land and send my children to college."
"You see that people are sick and our children are not healthy. We need better village health care workers."
"The nurses at the clinic do not know anything about diseases and they treat us badly."
"We need loans to start up new business. Many people have no or very little income."
"Money. Money. Money. The world runs on money, you know!"


"Better health facilities. No question about it."
"Many people will tell you the problem is health, but I say it's lack of money. If you have money you can use good doctors."
"We need to be able to grow more food. This requires fertilizer, good seed, and plenty of water for our crops."
"I want a job for each of my kids. They don't want to farm and if they can't make money here I fear they will move to the capital."
"We all are very poor in this community. Happy, but poor."
"I had 100 kg of rice to sell last week but received a very low price. How do I survive on so little money? Everything now is very expensive."
"I think our biggest problem is that everyone in this community is too lazy."
"We need more and better doctors."

“Most people around here would probably tell you that health is a major problem, but I think it’s income. Most people are very poor.”

“We need affordable medicine. People used to know local medicines from plants, but now nobody remembers how to cure themselves and must go to the clinic.”

“We need to grow more rice so everybody has a full belly and we can sell the rest in the markets at a good price.”

"My only son died last month because we couldn't find a doctor. So you tell me what the problem is and what you can do to help us."

“I heard an American once say “Money cannot buy happiness.” I would like to find out for myself if that is true because I have no money.”
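As a companion to this exercise (not a model answer), the Python sketch below shows how hand-coded counts could be turned into a compound matrix of theme percentages by village; the theme labels and counts are placeholders to be replaced with your own coding of the quotes above:

# Placeholder coding results: for each village, how many quotes you assigned
# to each theme. Replace these counts with your own coding of the quotes above.
coded = {
    "Village A": {"Health": 9, "Income": 5, "Education": 4, "Agriculture": 4, "Other": 2},
    "Village B": {"Health": 7, "Income": 8, "Education": 2, "Agriculture": 3, "Other": 2},
}

themes = ["Health", "Income", "Education", "Agriculture", "Other"]

# Print a compound matrix: rows are villages, columns are themes, and each cell
# shows the percentage of that village's quotes falling in the theme.
print("Village".ljust(12) + "".join(t.ljust(14) for t in themes))
for village, counts in coded.items():
    total = sum(counts.values())
    row = "".join(f"{100 * counts[t] / total:5.1f}%".ljust(14) for t in themes)
    print(village.ljust(12) + row)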


Handout 6.13: HEALTH PROJECT CASE STUDY by Dan O'Brien

CONSULTANT'S OBSERVATIONS

The following is a case study from an actual child survival project managed by CARE-Bolivia. The study begins with an overview of the project, which includes project areas, priority population, major activities, and staffing. The second part of the study contains a list of 8 major observations made by an outside consultant during a visit in the first year of the project. Pretend your group is managing this project and has just received the list of observations from the consultant. Make a list of 10 questions for which your team would like to have answers in order to address the observations, with the intention of improving the project.

BACKGROUND OF THE PROJECT
The Rural Bolivian Health Education Project (RUBHEP) is a three-year, $822,000 project partially funded by the Agency for International Development (AID) Child Survival Grants Program and managed by CARE-Bolivia. The aim of RUBHEP is to lower childhood mortality rates in rural Bolivia, the highest in the western hemisphere, by focusing on immuno-preventable disease, diarrhea and resulting dehydration, and malnutrition. Project activities are underway in 123 communities located in the departments of Chuquisaca, Potosi, and Tarija. The total estimated size of the priority population is 10,230. The principal activities include immunizations, oral rehydration therapy (ORT), growth monitoring and nutrition education, and hygiene education. Immunizations follow the Bolivian Ministry of Health (MOH) recommended schedule of BCG, DPT, polio, and measles. ORT includes both packets donated by UNICEF and home mix. In addition, project staff work with communities to provide iodized salt in an effort to prevent the iodine deficiency diseases (IDD) prevalent in the highlands.

The activities depend primarily on project staff consisting of 18 health educators, three supervisors, and the project manager. The health educators are mostly nurses and health technicians while the supervisors are physicians. The role of the health educators is twofold. First, to provide immunization and growth monitoring services and health education to communities. And second, to work with communities to identify, select, and train community volunteers to assume responsibility for growth monitoring and health education activities. The supervisors provide technical and administrative support to the educators and volunteers.

The causes of childhood mortality are many and their relationships complex. To maximize the impact the health projects have on lowering childhood mortality, the important causes must be addressed. For this reason, RUBHEP is attempting to combine the typical child survival interventions with those related to safe water and sanitation. An important criterion for selecting communities to participate in the project is that they have a drinking water system which in many instances was built with assistance from CARE-Bolivia. By doing so, project staff anticipate RUBHEP will be more effective in the long run.

1. I discovered while working with similar vaccination campaigns that merely boiling needles and syringes is often not sufficient to kill bacteria. Sterilizing syringes by the boiling method resulted in abscesses which damaged the credibility of the program since mothers were afraid to bring their children back for subsequent immunizations. Once steam sterilization was initiated with the use of pressure cookers, there were no more reports of abscesses.


During my field visit, I observed that the health educators were using the boiling method to sterilize syringes and needles.

2. The health educators are spending 8 days every 3 months immunizing children under 4 years. This is approximately 15% of their time in the field. Their salary is about $250/month. On the other hand, MOH personnel receive $20/month and do the same work during the national vaccination campaigns. CARE/Bolivia could easily improve the cost-benefit ratio (the cost per child immunized) by having each health educator train and supervise 5-10 community volunteers to help in the national vaccination campaigns. Furthermore, the Bolivian MOH has recently been trying to recruit and train volunteers for the vaccination campaigns. CARE/Bolivia, by training volunteers to immunize, would not only improve its cost-benefit ratio, but would also be cooperating with the Bolivian MOH's new policy.

3. I observed health educators, and spoke with several more, who have been weighing children and having the mothers plot the weights on the growth curve without any sort of explanation as to why the children were or were not growing well. I believe there exists enormous educational potential in the growth monitoring component.

4. In the majority of the communities that I visited, no latrines existed. Therefore, the health educators defecate in the fields as do the rest of the people in the community. Nevertheless, the health educators are involved in teaching basic sanitation education. I believe that until materials are available for the construction of latrines in the community, the health educators could begin setting an example of good sanitation and hygiene practices by building and using a latrine in the community where they live.

5. Two of the regional water and sanitation engineers told me that the criteria a community must meet in order to be considered for a water system are: a) a sufficient supply of water; b) acceptable water quality; c) topographical conditions conducive to building the water system; and d) a community water committee. The engineers went on to tell us that at times they help organize the village water committee if the other criteria are met. I believe this is precisely the problem. Maybe it is too tempting for water and sanitation engineers to quickly organize a village water committee in order to take advantage of an ample supply of good water and ideal topographical conditions so that a water system can be built. Once the water system has been built, the hastily organized village water committee is unable to maintain the system (see the WASH evaluation of CARE/Bolivia water systems). A community that is able to raise enough money or in-kind payment (fee for service as an indicator) would demonstrate a certain level of commitment and organization that might be better suited to maintaining the water system once the CARE water and sanitation engineers have pulled out of the community.

6. I was able to sit in on 3 health education classes where health educators taught women in mothers' clubs. In each case, the health educator gave a 15-20 minute presentation on a child survival subject, asked if anyone had questions, then ended the session. There was no participation from the group. Based on research and my own experiences, I believe that adults learn best when they are actively involved in the learning process. I feel that a serious problem with the health education component of RUBHEP is the lack of skills the health educators have in actively involving the group in the learning process.

7. Studies have demonstrated that the participation of community volunteers in projects decreases proportionately to the amount of time the volunteers participate without compensation for their services. I have witnessed this phenomenon in the Dominican Republic, Nepal, and Guatemala. During my field visit I spoke with 6 health educators who have already lost 50% of their original community volunteers. The most common reason given by the community volunteers for leaving the project was no compensation for their work. The 50% attrition has occurred despite the supervision and support of the health educators. What will happen in 4 years when project funds are no longer available to supervise and support the community volunteers?

8. CARE/Bolivia received 822,000 dollars for RUBHEP, of which approximately 300,000 dollars is budgeted for salaries and personnel support. The rationale for the large investment in personnel is that in 3 years the health educators will have been able to change enough of the target population's health-related behaviors to make a substantial difference in childhood mortality, and that there will be trained community health volunteers to carry on the project. Similar projects have shown that sustaining a change in health-related behavior often takes longer than 3 years. Then the change in behavior needs to be continuously reinforced. It appears that the project will rely on the uncompensated community volunteers to sustain any change in behavior. For the reasons mentioned previously, I would not count so heavily on the community volunteers as long as they are not being compensated for their work.


Handout 6.14: DATA ANALYSIS

Source: Casley, Dennis J. and Krishna Kumar. 1988. The Collection, Analysis, and Use of Monitoring and Evaluation Data. Johns Hopkins University Press.

1. Exploratory Data Analysis
Exploratory analysis is useful in and of itself in that it gives those with little or no formal training in statistics an opportunity to understand the data, which in turn improves their ability to make decisions based on the M&E system. For monitoring purposes, simple exploratory data analysis is usually all that is needed. Evaluations often require more sophisticated techniques, but even then exploratory analysis is a first step. Exploratory data analysis looks for simple structures and patterns in the data, and helps to determine if the data has errors.

1.1 Graphing

Graphs are used to detect possible patterns, not to present the results more simply.

Usually plotting the main variables against time or against each other will provide useful information.

Can assess "noise" in the data.
Useful for detecting outliers.

1.2 Ordering of Data

Usually the ordering of cases is more or less random before data processing.
If data is computerized, it can be easily ordered and measures of central tendency can be easily calculated.
Can take the form of a grouped or non-grouped frequency distribution.
Look for whether the data is unimodal or bimodal, and whether it has long tails.
Can facilitate assessing central tendencies (mean, median, mode).

1.3 Dispersion

The simplest measurement is the range, but it is usually of limited use.
The most common measure is the standard deviation of the distribution. This is the square root of the variance.
The variance is useful in informing one about the "noise" in the system being studied.
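The measures in sections 1.2 and 1.3 can be computed with Python's standard library; the values below are illustrative only, not from an actual dataset:

import statistics
from collections import Counter

# Illustrative monitoring values, e.g. kilograms of seed planted per household.
values = [12, 15, 15, 18, 20, 22, 22, 22, 25, 40]

# Frequency distribution (1.2): ordering the data makes patterns easier to see.
freq = Counter(sorted(values))

# Measures of central tendency (1.2).
mean = statistics.mean(values)
median = statistics.median(values)
mode = statistics.mode(values)

# Measures of dispersion (1.3): range, variance, and standard deviation
# (the standard deviation is the square root of the variance).
value_range = max(values) - min(values)
variance = statistics.pvariance(values)
std_dev = statistics.pstdev(values)

print(dict(freq))
print(f"mean={mean}, median={median}, mode={mode}")
print(f"range={value_range}, variance={variance:.1f}, std dev={std_dev:.1f}")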

1.4 Linear Relations

Useful to explore the linear relationship between two variables, in the form y = a + bx. This is one of the basic statistical calculations, known as least-squares regression.

You can use the quartile method to do this easily by hand. Order the data according to ascending values of x, then calculate the means of both variables for the lower quartile and for the upper quartile. Plot these two points and draw a regression line through them. This is very quick to do and gives a reasonably reliable estimate of the true linear relationship.
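Here is a short Python sketch of the quartile method just described (a hand-calculation shortcut, not true least-squares regression); the (x, y) data are invented for demonstration:

# Invented (x, y) pairs, e.g. fertilizer applied vs. millet yield per plot.
data = [(1, 2.1), (2, 2.9), (3, 3.8), (4, 4.2), (5, 5.1), (6, 5.8),
        (7, 7.2), (8, 7.9), (9, 8.8), (10, 9.6), (11, 10.4), (12, 11.1)]

# Order the cases by ascending x, then take the lower and upper quartiles.
data.sort(key=lambda pair: pair[0])
q = len(data) // 4
lower, upper = data[:q], data[-q:]

def mean(values):
    return sum(values) / len(values)

# Means of x and y within each quartile give two anchor points.
x1, y1 = mean([x for x, _ in lower]), mean([y for _, y in lower])
x2, y2 = mean([x for x, _ in upper]), mean([y for _, y in upper])

# The line through the two anchor points approximates y = a + b*x.
b = (y2 - y1) / (x2 - x1)
a = y1 - b * x1
print(f"approximate fit: y = {a:.2f} + {b:.2f}x")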


Handout 7.1: PARTICIPATORY MONITORING AND EVALUATION

What?
"Traditional" monitoring and evaluation is initiated from the top and carried out for the people. Participatory M&E (PME) is carried out by and belongs to the project or program participants (providers, partners, beneficiaries, and other interested parties). PME requires the involvement of project participants in:
• deciding what to monitor and evaluate
• selecting indicators for M&E
• selecting data collection methods
• processing data
• analyzing data

• using PME information for their own purposes.

Advantages and disadvantages of PME
A participatory approach to M&E empowers people to manage their resources, thereby increasing their control over their lives as well. PME helps achieve results while increasing people's understanding and ability to solve their own problems.

Advantages of PME:
• examines relevant issues by involving key players in the evaluation design
• promotes participants' learning about the program and its performance
• improves participants' M&E skills
• enhances teamwork and builds a shared commitment to act on evaluation recommendations
• increases the likelihood that M&E information will be used to improve performance

Disadvantages of participatory evaluations:
• may be viewed as less objective, because staff and beneficiaries who might have their own interests participate in the evaluation
• requires considerable time and resources to identify and involve a wide range of stakeholders
• takes participating staff away from ongoing activities


Handout 7.2: PARTICIPATORY M&E 2

How to encourage participation in M&E
• use active rather than passive methods
• start with monitoring and evaluation activities that are of interest to all
• use small groups
• use simple graphics and tables
• facilitate access to more information
• exchange ideas and information with group members about:

  - the elements of the M&E system
  - who should be responsible for each task
  - how the data will be used
  - where the system will be physically based
  - when to begin and end the process

Examples of data collection activities that encourage participation
• creating a village or community map

• educational games and role plays in groups
• participatory monitoring wall charts
• group field visits and study tours

• use of case studies
• group presentation of important findings
• use of stories and drama to present findings
• group analysis of research reports


Handout 8.1: DISSEMINATION AND UTILIZATION

1. Define the audience the evaluation is intended for (the users).
2. Clearly describe the purpose of the evaluation and the approach, model, or framework that was used to provide direction.
3. Make use of tables and matrices to summarize qualitative and quantitative data analysis.
4. Avoid unsupported statements and recommendations.
5. Provide practical and constructive comments.
6. Include a separate section on lessons learned.


Handout 8.2: POSSIBLE DISSEMINATION STRATEGIES

Formal written evaluation reports (various lengths and complexities).

Workshops with counterparts and/or staff (action plan as the output).

Information pamphlet with evaluation highlights.

Case studies.

Specific reports on Lessons Learned.

Video or slide presentations.

Meetings with donor/counterparts/staff.

Community meetings.

Formal presentations of information in graphs and charts.

One-on-one discussions with stakeholders about what to do next.


Supplemental Handout 1: PARTICIPATORY METHODS

By Jim Rugh

NAMES FOR PARTICIPATORY / RAPID APPRAISAL APPROACHES
PRA = Participatory Rural Appraisal
FPR = Farmer Participatory Research
PALM = Participatory Analysis and Learning Methods
PAR = Participatory Action Research
PRM = Participatory Research Methodology
PRAP = Participatory Rural Appraisal & Planning
PUA = Participatory Urban Appraisal
RAP = Rapid Assessment Procedures
RFSA = Rapid Food Security Assessment
RRA = Rapid Rural Appraisal
PME = Participatory Monitoring & Evaluation
PLA = Participatory Learning & Action
Source: Jules Pretty, Irene Guijt, John Thompson & Ian Scoones, A Trainer's Guide for Participatory Learning & Action, IIED (1995)

Brief history of participatory approaches
Activist participatory research: inspired by Paulo Freire (1968); uses dialogue and joint research to enhance people's awareness and confidence and to empower them to action.
Rapid rural appraisal: written about by Robert Chambers (late 1970s); a reaction to the biases inherent in the "rural development tourist" approach, which tended to hide the worst poverty and deprivation. Sought to enable outsiders to gain insight and information from rural people about rural conditions in a cost-effective and timely manner.
Applied anthropology: (1980s) useful in helping development professionals to better appreciate the richness and validity of rural people's knowledge; the benefits of unhurried participant observation and conversations, and the importance of attitudes, behavior and rapport.
Field research on farming systems: (1980s) recognized the rationality of small and poor farmers and their activities as experimenters.
Agroecosystem analysis: developed by Gordon Conway (1987); draws on systems and ecological thinking; uses transects, informal mapping and diagramming, and scoring and ranking to assess innovations.
Sources: Andrea Cornwall, Irene Guijt and Alice Welbourn (1993); Robert Chambers (1992), cited in A Trainer's Guide to Participatory Learning & Action, IIED (1995)

Some examples of PRA tools
Semi-structured interviews
Mapping
Transect walks
Seasonal calendars
Diagramming & visualizations
Venn organizational diagram
Flow diagrams for systems


Ranking & scoring
Preference ranking
Wealth ranking

Do participatory approaches meet the four criteria used by conventional researchers?

Matrix of Criteria: criteria for trustworthiness
• How can we be confident about the "truth" of the findings (internal validity)?
• Can we apply these findings to other contexts or with other groups of people (external validity)?
• Would the findings be repeated if the inquiry were replicated with the same (or similar) subjects in the same or similar context (reliability)?
• How can we be certain that the findings have been determined by the subjects and context of the inquiry, rather than by the biases, motivations and perspectives of the investigators (objectivity)?
Source: Y.S. Lincoln and E.G. Guba, Naturalistic Inquiry, Sage (1985)

A typology of participation
1. Passive participation
2. Participation in information giving
3. Participation by consultation
4. Participation for material incentives
5. Functional participation
6. Interactive participation
7. Self-mobilization
Source: J.N. Pretty, Regenerating Agriculture: Policies and Practice for Sustainability and Self-Reliance (1994), adapted from Adnan et al (1992)

Applying participatory techniques to M&E processes
Review some of the techniques we've learned this week. Which of them lend themselves to participation by beneficiaries?
• Needs assessment
• Project design
• Baseline study
• Planning the M&E system
• Monitoring system
• Conducting evaluations
• Quantitative methods
• Qualitative methods
• Data analysis
• Making decisions, action plans
• Communicating findings


Thinking participatively
1. How can we recognize good practice in participatory development?
2. What external conditions make it difficult to adopt participatory methodologies?
3. Under what conditions and in what situations are participatory approaches appropriate for CARE projects?
4. What internal conditions in CARE encourage or discourage innovation and adaptation of participatory approaches?


Supplemental Handout 2: RATE EXERCISE (Health/population examples)

DEMOGRAPHICS
Population: 8,330
Families: 990
Total females: 4,000
Females between 15-49: 2,100
Males between 15-49: 2,000
Males above 15: 2,404
Live births last year: 375
Children under 5: 1,660
Children 12-23 months old: 323
Pregnant women: 134

1. Last year there were 47 deaths of children under one year of age. What was the Infant Mortality Rate?
2. Last year there were 60 deaths of under-five children. What was the Under-5 Mortality Rate?
3. There are 1,320 women of child-bearing age who are using contraceptives (modern and traditional). What is the Contraceptive Prevalence Rate?
4. Of 12-23 month olds, 255 were immunized. What is the under-one year immunization coverage rate?
5. Of 300 mothers with children under 2 who reported that their child had diarrhea in the past 2 weeks, 200 said that they have used ORT (oral rehydration therapy) at some time. 100 said they used ORT during the past 2 weeks. What is the ORT usage rate?
6. Among children 12-23 months of age, 34 were treated with chloroquine for malaria. What is the percentage that were treated correctly?
7. Of the 1,660 under-five children, 250 were weighed during the past 3 months. 200 gained weight. What percentage of children gained weight?
8. There are 200 men who have had vasectomies or who report using condoms regularly and who do not want another child in the next year. What is the contraceptive prevalence rate?
9. 500 adults were referred to clinics for STD (sexually transmittable diseases). What is the percentage of those with STD reporting for treatment?
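For reference, a minimal Python sketch of how a few of these rates could be computed. The choice of numerator, denominator and multiplier for each rate is an assumption made for illustration, not an answer key from the handout.

def rate(numerator, denominator, per=100):
    # Express a rate per `per` units of the chosen denominator.
    return numerator / denominator * per

# Infant Mortality Rate, conventionally expressed per 1,000 live births (assumed denominator)
imr = rate(47, 375, per=1000)

# Under-one immunization coverage, as a percentage of children 12-23 months old (assumed denominator)
immunization_coverage = rate(255, 323, per=100)

# ORT usage in the past 2 weeks, among the 300 mothers who reported child diarrhea (assumed denominator)
ort_usage = rate(100, 300, per=100)

print(f"Infant Mortality Rate: {imr:.0f} per 1,000 live births")
print(f"Immunization coverage: {immunization_coverage:.0f}%")
print(f"ORT usage rate: {ort_usage:.0f}%")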

Rate Exercise (DZADP examples)

NUMBERS GIVEN IN PROPOSAL (Total / Phase I target / Phase II target)
• Districts: 6 / 6 / 6
• Divisions - Agrarian Services Centres (ASC) (at least 1 per Division): 65 / 8 / 16
• Farmer Organizations (FO) per ASC: 30 / 5 / 10
• Average farmers per FO: 42 / 42 / 42
• Estimated total number of farmers: 10,080 (Phase I) / 40,320 (Phase II)

Other numbers given:
• Total farmers cultivating under irrigation in the DZADP Districts: 78,000


1. Output for Immediate Objective (IO) #2: "Farmer Field Schools established: 24,000 farmers trained in the use of improved agricultural practices, post harvest methodologies, livestock and agroforestry." What % of the targeted farmers are to receive training at Farmer Field Schools?

2. IO #2: “Improved farming systems and post harvest methodologies adopted by 80% of farm households.” How many farm households will need to adopt these methodologies to fulfill this objective?

3. Assuming a 20% drop-out rate (farmers not applying what they were taught), how many farmers will need to be trained in Farmer Field Schools in order to meet IO #2?

4. IO #1.3: “40% of farm households have access to improved irrigation infrastructure.” What % of farmers in the target area already have access to irrigation infrastructure?
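A minimal sketch of the arithmetic behind question 3. The number of farm households used here is a hypothetical placeholder; the handout leaves the actual figure to the proposal.

# Hedged sketch of the question 3 calculation; household count is a placeholder, not from the handout.
farm_households = 50_000          # placeholder value; substitute the figure from the proposal
adoption_target = 0.80            # IO #2: 80% of farm households adopt improved methodologies
dropout_rate = 0.20               # farmers trained but not applying what they were taught

households_to_adopt = adoption_target * farm_households
farmers_to_train = households_to_adopt / (1 - dropout_rate)

print(f"Households that must adopt: {households_to_adopt:,.0f}")
print(f"Farmers to train, allowing for drop-out: {farmers_to_train:,.0f}")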


Supplemental Handout 3: DEVELOPING EFFECTIVE EVALUATION PLANS

By Jim Rugh

Introduction
Objective 1: Participants will be given a very brief overview of basic experimental designs
Objective 2: Participants will develop criteria for determining appropriate research/evaluation designs for CARE projects

Questions to keep in mind:
• What is the role of research in the design of a CARE project?
• Under what conditions are more sophisticated research designs warranted?
• How do these considerations influence the M&E plans for a project?

Agenda
Step 1: Review different experimental/research designs
Step 2: Develop criteria for determining the appropriate design for a project's evaluation plan

1. Overview of research designs
Purposes of evaluation/research design:
• To "prove" impact attributable to a particular intervention
• To test the hypothesis that an intervention's outputs lead to impact

What are the elements of research design?

Vocabulary:
O = Observation
X = Intervention being tested
C = Alternative program (or no intervention)
Pretest ... Posttest (baseline ... final evaluation)
E> = Experimental (project) group
C> = Control (comparison) group
R = Randomization

Evaluation Research Designs

NO CONTROL GROUP
No baseline:             X   O
Before + After:      O   X   O

WITH CONTROL GROUP
Posttest only:       E>      X   O
                     C>          O
Pretest + Posttest:  E>  O   X   O
                     C>  O       O

TIME SERIES
Single group:        O  O  O  X  O  O  O
Control group:       O  O  O     O  O  O


Classical Experimental Designs

True Control Group Pretest-Posttest
                                       Time:  1 (pre)       2 (post)
Population > R   Experimental Group  E>    O       X      O
                 Control Group       C>    O       C      O

Time Series with Non-Equivalent Control (Comparison) Group
                                       Time:  1       2       3       4
Experimental Group  E>   O   X   O   X   O   X   O
Control Group       C>   O   C   O   C   O   C   O

Major Threats to the Implementation of Experimental Designs
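To make the pretest-posttest control group logic concrete, here is a small Python sketch that simulates observations (O) for an experimental group receiving the intervention (X) and for a control group, then estimates impact as the difference in change between the two groups. All numbers are illustrative assumptions, not CARE data.

import random

random.seed(1)

# Pretest-posttest control group design:
#   E>  O  X  O   (experimental group observed before and after intervention X)
#   C>  O     O   (control group observed at the same times, no intervention)

def observe(n, mean):
    # Simulate one observation (O): n individual scores scattered around a group mean.
    return [random.gauss(mean, 5) for _ in range(n)]

def average(scores):
    return sum(scores) / len(scores)

# Pretest (baseline): both groups assumed to start at a similar level
pre_e = observe(100, mean=50)
pre_c = observe(100, mean=50)

# Posttest: an assumed secular trend of +2 affects everyone,
# and an assumed intervention effect of +8 affects only the E-group
post_e = observe(100, mean=50 + 2 + 8)
post_c = observe(100, mean=50 + 2)

change_e = average(post_e) - average(pre_e)
change_c = average(post_c) - average(pre_c)
impact_estimate = change_e - change_c   # difference in changes between E- and C-groups

print(f"Change in E-group: {change_e:.1f}")
print(f"Change in C-group: {change_c:.1f}")
print(f"Estimated impact attributable to the intervention: {impact_estimate:.1f}")

The control group absorbs the secular trend, so the difference in changes isolates the intervention effect, which is why the threats listed below (confounds, contamination, attrition) matter so much.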

• Confounds: extraneous effects which happen to one group (either the E- or the C-group) but not the other, and which could influence the outcome measures.
   - Large sample size provides some protection
   - Account for other influencing factors
• Contamination: the supposed "control" group uses methods or materials being tested on the E-group.
   - Difficult to isolate the control group
• Attrition: people drop out of either the program or the control group for any reason.
   - Make the sample size large enough to compensate
• Differences between E- and C-groups in time spent on the program (intervention).

Develop criteria for determining the appropriate design for a project's evaluation plan
What are some of the considerations? For what reasons should experimental or quasi-experimental research designs be used? When might they not be necessary?

Suggested criteria for determining the appropriate design for a project's evaluation plan
What are some of the considerations?
• Costs (funds, staff time or outside expertise required)
• Skills available (among staff or outsiders) to do surveys and analyze results
• Availability of staff (or outsiders)
• Participants' time and willingness to cooperate
• Non-participants' (control group's) time and willingness to cooperate
• How to keep the control group "pure"?
• Ethics of using a control group


• Accessibility of the area for doing a survey (security considerations)
• Intervention time-line (i.e. length of project)

For what reasons should experimental or quasi-experimental research designs be used?
• When it is necessary to prove impact and attribution
• Pilot project intended to serve as a model for wide multiplication
• If a new, untested technology/intervention is involved (test alternatives)
• To test a hypothesis (i.e. a correlation between the intervention's outputs and impact)
• If training and building partners' capacity in evaluation research is a goal of the project

When might they not be necessary?
• Proven intervention, impact previously ascertained
• Reliable, valid and relevant secondary data available
• Only need to verify that implementation and outputs comply with standards
• Short-term (i.e. emergency) project
• Questionable security situation
• When sophisticated research would be "nice" but not "necessary"

POSSIBLE EVALUATION DESIGNS WHEN A CONTROL IS NOT APPROPRIATE
POSSIBLE EVALUATION DESIGNS WHEN THERE IS NO BASELINE
