Capacity development & learning in evaluation

Uganda Evaluation Week, 19th to 23rd May 2014


Page 1: Capacity development & learning in evaluation

Capacity development & learning in evaluation
Uganda Evaluation Week
19th to 23rd May 2014

Page 2: Capacity development & learning in evaluation

Contents

1. What do we mean by evaluation capacity?
2. Links to managing for results
3. Evidence from organisations who have introduced results management systems
4. The challenge of evaluation culture
5. The experience of DFID
6. Results system or results culture
7. Conclusions


Page 3: Capacity development & learning in evaluation

What do we mean by evaluation capacity?


[Diagram: evaluation capacity as building blocks at three levels – individual, organisation and enabling environment – and as a process. Capacity empowers stakeholders to promote accountability, transparency & learning and to question policies and practices, leading to demand, supply and use of evaluation. Building blocks? A process of good governance? Or both?]

Page 4: Capacity development & learning in evaluation

Individual Level


Individuals’ knowledge, skills and competences

• Senior management capacity for strategy & planning.
• At mid-management level, understanding of the role of evaluation as a tool for effectively achieving development results.
• Behavioural independence and professional competences of those who manage and/or conduct evaluations.

Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

Page 5: Capacity development & learning in evaluation

Institutional Level


The institutional framework

• An evaluation policy exists and is implemented.
• An evaluation unit exists with a clearly defined role and responsibilities.
• A functional quality assurance system.
• Independence of funding for evaluations.
• A system exists to plan, undertake and report evaluation findings in an independent, credible and useful way.
• Open dissemination of evaluation results.
• Knowledge management systems in support of the evaluation function exist and are used.

Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

Page 6: Capacity development & learning in evaluation

Enabling Environment


Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

A context that fosters the performance and results of individuals and organizations

• A functioning national voluntary organisation for professional evaluation.
• A national policy on evaluation.
• A strong evaluation culture.
• A public administration committed to transparency, managing for results and accountability.
• Political will to institutionalize evaluation.
• Existence of adequate information and statistical systems.
• Legislation and/or policies to institutionalize monitoring and evaluation systems.

Page 7: Capacity development & learning in evaluation

All or nothing?


Neglect of… means that…
• Individual: skills are lacking at all levels → demand doesn’t emerge.
• Organisation: lack of structures and processes → individuals’ skills are not applied.
• Environment: absence of legislation & supportive policies → lack of clarity in roles and responsibilities.

Page 8: Capacity development & learning in evaluation

Results management framework


Strategic Results Framework
• Objectives, indicators & strategy
• Roles & responsibilities

Programme Results Framework
• Results chain & theory of change
• Align with strategic framework
• Performance indicators

Credible measurement & analysis
• Measure & assess results
• Assess contribution to strategic objectives

Credible performance reporting
• Relevant, timely & reliable reporting

Use results to improve performance
• Adjust the programme
• Develop lessons & good practices

Source: Itad Ltd, adapted from ‘Managing for Development Results Handbook’

Page 9: Capacity development & learning in evaluation

Expectations for managers

Planning
• Understanding the theory of change
• Setting out performance expectations

Implementation
• Measure and analyse results and assess contribution

Decision-making & learning
• Deliberately learn from evidence and analysis

Accountability
• Reporting on performance achieved against expectations


Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR 4p

Page 10: Capacity development & learning in evaluation

Evaluations of RBM

UNDP (2007)
• Significant progress was made on sensitising staff to results and on creating the tools to enable a fast and efficient flow of information.
• Managing for results has proved harder to achieve. A stronger emphasis on resource mobilisation and delivery, a culture fostering a low level of risk-taking, weak information systems at country level, the lack of clear lines of accountability and the lack of a staff incentive structure all work against building a strong culture of results.

Finland (2010)
• Tools and procedures are comprehensive and well established. Good standards of project design are not consistently applied.
• Low priority is given by managers to monitoring, reporting and evaluation. Most monitoring reports were activity-based or financial, and there was little reporting against logframes.
• Managing for results depends not only on technical methodology, but also on the way the development cooperation programme is organised and managed. Finland’s approach is characterised as risk-averse; there are few examples of results being used to inform policy.


Page 11: Capacity development & learning in evaluation

‘Can we demonstrate the difference that Norwegian aid makes?’

Overall conclusion
• Although there are some elements of good foundations for better results measurement, current arrangements lack the strength of leadership, depth of guidance and coherence of procedures necessary for effective evaluation of Norwegian aid.
• As a result of a lack of incentives, poor processes for planning and monitoring grants, and weaknesses in the procedures for evaluations, this cannot be demonstrated.

ITAD Ltd (2014) Can we demonstrate the difference that Norwegian Aid makes? Evaluation of results measurement and how this can be improved. Available at: http://www.norad.no/en/tools-and-publications/publications/evaluations/publication?key=412342


Page 12: Capacity development & learning in evaluation

What is an evaluative culture?

An organization with a strong evaluative culture:

Engages in self-reflection and self-examination:
• deliberately seeks evidence on what it is achieving, such as through monitoring and evaluation;
• uses results information to challenge and support what it is doing; and
• values candour, challenge and genuine dialogue.

Engages in evidence-based learning:
• makes time to learn in a structured fashion;
• learns from mistakes and weak performance; and
• encourages knowledge sharing.

Encourages experimentation and change:
• supports deliberate risk taking; and
• seeks out new ways of doing business.

Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR 4p

Page 13: Capacity development & learning in evaluation

UK Department for International Development

A 2009 study into DFID’s evaluation reports found:
• Weaknesses were systemic in nature, linked to top management, and required a significant change in culture.
• A key overarching problem was an unduly defensive attitude to the findings from evaluation.
• Other detailed recommendations called for:
  • evaluability issues to be considered at the planning stage;
  • training of staff;
  • strengthening the evidence base that underpins evaluations; and
  • requiring managers to make a formal response to evaluations.


Roger C Riddell (2009) The Quality of DFID’s Evaluation Reports and Assurance Systems. IACDI (The Independent Advisory Committee on Development Impact)

Page 14: Capacity development & learning in evaluation


DFID – Highly rated for evaluability

Five features of DFID’s approach combine to justify high ratings:
• Continuity of guidance from planning a project Business Case; quality assurance arrangements; evaluation policy; and evaluation training materials, with some cross-referencing.
• Recognition that a clear logic model and results based on prior evidence strengthen the quality of project design, rather than being a formality to complete a project proposal.
• Evaluability is assessed from several perspectives: expected impact and outcomes; strength of the evidence base; theory of change; and what arrangements are needed to measure, monitor and evaluate progress and results.
• Documentation includes detailed descriptions, training or self-briefing materials and examples for staff to follow.
• There is consistency of message across planning guidance, appraisal and approval, with a detailed checklist for quality assurance.

Source: ITAD Ltd (2014) Can we demonstrate the difference that Norwegian aid makes? An evaluation of results measurement and how this can be improved. Annex 5 (available on www.norad.no/evaluation)

Page 15: Capacity development & learning in evaluation

DFID – Embedding

• The number of evaluations commissioned has increased significantly, from 12 per year prior to 2011 to an estimated 40 completed evaluations in 2013/14.
• The embedding process has increased the actual and potential demand.
• The decision to evaluate is now made during the preparation of Business Cases. This is good for programme performance, but lacks a broader strategic focus.
• The depth of this capacity is less than required, with 81% of those accredited to date only at the foundation or competent level.
• Gaps in capacity relate to:
  • understanding why and when to commission evaluation;
  • enhancing the contexts of evaluations and engaging stakeholders appropriately;
  • selecting and implementing appropriate evaluation approaches while ensuring reliability of data and validity of analysis;
  • reporting and presenting information in a useful and timely manner.
• Need to strengthen evaluation governance and develop a DFID evaluation strategy.

Embedding: Business Cases and evaluation advisers. Since 2011: 37 advisers in an evaluation role; 150 staff accredited in evaluation and 700 people receiving basic training.

Source: DFID (2014) Rapid Review of Embedding Evaluation in UK DFID

Page 16: Capacity development & learning in evaluation

Core quality model

[Diagram: core quality model linking quality assurance (standards, performance), technical guidance (how to do it), training, the programme (procedures, roles & responsibilities), and results & evaluability.]


Page 17: Capacity development & learning in evaluation

DFID Learning

DFID is the highest performing civil service main department for ‘learning and development’ (Cabinet Office survey).
• Evaluations are a key source of knowledge.
• 40 evaluations completed in 2013-14.
• 425 evaluations either underway or planned as at July 2013.
• Annual, mid-term and project completion reviews are an under-utilised resource.
• Staff find it hard to identify what is important and what is irrelevant.
• DFID’s ability to influence has been strengthened by its investment in knowledge.

Issues:
• Workload pressures restrict making time to learn.
• Staff often feel under pressure to be positive when assessing both current and future project performance.
• Knowledge is sometimes selectively used to support decision-making.
• Positive bias links to a culture where staff have often felt afraid to discuss failure.
• Many evaluations are not sufficiently concise or timely to affect decision-making.

Source: Independent Commission for Aid Impact (2014) How DFID Learns

Page 18: Capacity development & learning in evaluation

UK – National Audit Office

Findings
• Significant spend: £44m spent on government evaluation in 2010-11; an estimated 102 FTE staff working on evaluation in government.
• Coverage incomplete.
• Rationale for what the government evaluates is unclear.
• Evaluations often not robust enough to reliably identify impact.
• Learning not used to improve impact and cost-effectiveness.

Recommendations
• Plan evaluation when designing all new policies.
• Design policy implementation to facilitate robust evaluation.
• Departments to make data available to independent evaluators for research purposes.

Source: NAO (2014) Evaluation in Government

Page 19: Capacity development & learning in evaluation

Results system or results culture?

Many organizations have systems of results:
• A results-based planning system with results frameworks for programmes.
• Results monitoring systems in place generating results data.
• Evaluations undertaken by an evaluation unit to assess the results achieved.
• Reporting systems in place providing data on the results achieved.

But these should not be mistaken for an evaluative culture. Indeed, on their own, they can become a burdensome system that does not help management at all.

Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR 4p

Page 20: Capacity development & learning in evaluation

Measures to foster an evaluative culture


Leadership
• Demonstrated senior management leadership and commitment
• Regular informed demand for results information
• Building capacity for results measurement and results management
• Establishing and communicating a clear role and responsibilities for results management

Organisational support structures
• Supportive organizational incentives
• Supportive organizational systems, practices and procedures
• An outcome-oriented and supportive accountability regime
• Learning-focused evaluation and monitoring

A learning focus
• Building in learning
• Tolerating and learning from mistakes

Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR

Page 21: Capacity development & learning in evaluation

Conclusions – taking a positive view
• Evaluation is only one source of information, alongside research and implementation experience. Evaluation capacity development (ECD) needs to inform how these work together.
• Quality evaluation is built on quality planning. ECD needs to be linked to better planning systems.
• Technical skills are necessary but not sufficient.
• Effective evaluation will be determined by the culture and incentives in the organisation.
• ECD is a journey, not a destination. Systems are not static; they need continual review, learning and revision. There is no simple solution; rather, systems need to be introduced, used, tested, reviewed and then updated in a rolling cycle.


Page 22: Capacity development & learning in evaluation

END