Knowledge of and attitudes towards impact evaluations amongst senior managers in South Africa’s Department of Social Development 1 April 2009


Page 1: 1 April 2009

Knowledge of and attitudes towards impact evaluations amongst senior

managers in South Africa’s Department of Social Development

1 April 2009

Page 2: 1 April 2009

Evaluations in South Africa’s public management framework 1

• The context: In some ways, SA’s public service is best understood as a troubled teenager with a traumatic past. Only 15 years old, it has many issues, challenges and blind spots but is also dynamic, creative and exciting.

• The Public Service: It comprises national-level departments, which undertake policy development and oversight functions (including M&E), and provincial departments, which do service delivery. Local government is a separate, disparate and very challenged sphere of government not currently covered by the legislative definition of the public service.

Page 3: 1 April 2009

Evaluations in South Africa’s public management framework 2

• Reform: Public service reform has been very finance-driven, with the National Treasury providing “tough love leadership”, resulting in a focus on financial compliance and a neglect of non-financial matters. Non-financial public management (especially performance management) is brittle and underdeveloped.

• Policy: A GWM&E policy is in place, dominated by NT’s FPI, with Stats SA’s SASQAF trailing far behind. The third leg, the evaluation component, has not yet been addressed in policy terms and is seriously underdeveloped and neglected in practice.

Page 4: 1 April 2009

M&E at DSD 1

• The DSD is responsible for developing, monitoring and evaluating policy with regard to three programme areas: Social Security, Welfare Services and Integrated Development. It has some (but very few) implementation responsibilities.

• Its human capacity is predominantly administrative and financial.

• M&E is done in the policy making units as well and is supported by a dedicated M&E unit headed by a Chief Director.

• The M&E unit is a Chief Directorate in the Operations Branch (one of 7) and has three Directorates addressing institutional and service delivery monitoring; evaluations and assessments and strategic information.

Page 5: 1 April 2009

M&E at DSD 2

• The Unit is around three years old and is still in its establishment phase, focusing extensively on systems and process design and development, awareness raising, capacity development and related matters.

• The Unit has in the last year developed an M&E Strategy, a compendium of indicators and a comprehensive reporting system.

• Although previously operational, the Evaluation directorate has only recently appointed a dedicated Director and has so far functioned in a somewhat responsive manner.

Page 6: 1 April 2009

About this project

• The exercise on which this presentation reports was undertaken as part of the development of a Medium Term Evaluation Plan, the purpose of which will be to link evaluations to the DSD’s Medium Term Expenditure Plan.

• The project also sought to create awareness of the revitalization of the evaluation function, to provide the M&E Unit with an understanding of senior managers’ attitudes towards and knowledge of evaluation (so that our evaluation strategy could take these issues into account), and to update M&E personnel on what is being done in this area throughout the department.

Page 7: 1 April 2009

Methodology

• Initially we proposed semi-structured interviews with 40 managers but this proved impractical.

• The project entailed undertaking hour-long interviews with 20 senior managers throughout the Department, including the CEO and the COO.

• Study participants were purposively selected from the welfare, integrated development, social security and supporting programmes.

Page 8: 1 April 2009

Findings - Knowledge

• Mostly, the term “IE” was understood to be about the extent to which a policy or intervention has made a difference in the lives of its beneficiaries or added significant value.

“It is an evaluation method used to check the consequences/effects of a particular policy. It focuses on checking if one received the intended or the unintended results.”

“It explains itself. Evaluate the impact. It is not easy to measure poverty. We need to look at the ‘as is’ situation and the information after the intervention….”

Page 9: 1 April 2009

Findings – Knowledge cont.

• Just over half of respondents have a very vague understanding of the difference between impact and other evaluations.

“Yes, I can differentiate evaluations by immediate and intermediate types, and impact is the long-term one.”

• Very few had a moderate to advanced understanding of the difference between impact evaluation and other types of evaluation.

• These respondents were those who had undertaken a full module on M&E as part of their postgraduate studies.

“The first one is efficiency evaluation, which looks at whether what you are doing is reaching the desired number of targets. The second one is evaluability assessment, which looks at whether the intervention complies with evaluation principles. The third one is process evaluation, which focuses on evaluating the process itself.”

Page 10: 1 April 2009

Findings – Knowledge cont.

• Most respondents had only a vague understanding of when IE should be undertaken and just a fifth of them understood that it needs to be linked to policy and planning cycles.

“Impact evaluations should be factored in at the beginning of policy making to ensure that indicators for measurement are developed very early in the process.”

• Almost all of them could distinguish between Monitoring and Evaluation but could not tell the difference between the different kinds of evaluation although they were familiar with their utilization in management processes.

Page 11: 1 April 2009

Findings – Attitude

• Almost all respondents felt that evaluations are or could be useful in the department.

• Just over half felt that evaluations should be done by both internal and external evaluators.

• A quarter preferred for evaluations to be undertaken externally exclusively.

– “You cannot be a referee and a player at the same time; you need an external person for objectivity.”

• Some (but very few) respondents felt that evaluations should be conducted by internal people only.

Page 12: 1 April 2009

Findings – Attitude cont.

• Just over half the respondents felt that the programme should lead evaluations with guidance or support from the M&E chief directorate.

• A quarter of the respondents felt that the M&E Chief Directorate should lead the process.

Page 13: 1 April 2009

Findings – Practices

• Just under half of the respondents have no evaluations planned, while a third have evaluations planned and a fifth have an evaluation under way.

• Just over half of the respondents mentioned that they use information/evidence from M&E extensively. A quarter said that they don’t use any evidence from M&E.

• A quarter of the respondents felt that their project is achieving some of the intended results. Around half of these attributed the results they observed to their interventions, whilst the rest felt that it is a combination of internal and external factors (e.g. other role players such as various government interventions, NGOs, the private sector and community involvement) that caused the intended effects.

• A quarter of the respondents said that they don’t know if their interventions contributed to the results achieved or that it would be premature to say so because they have not done any research to prove a link.

Page 14: 1 April 2009

Findings – Practices cont.

• Around half said that they have evidence to prove that their interventions make a difference.

• The following were mentioned as types of evidence usable for attribution purposes:

– Research evidence

– Documentaries

– Reviews of performance information (output)

– “Because they are the only actors in the area”.

Page 15: 1 April 2009

Note!

• The study was conducted only among senior managers at the national department and therefore cannot be considered applicable to provincial departments.

• The findings on knowledge of evaluations, in particular impact evaluations, corroborate our expectations.

Page 16: 1 April 2009

Implications of the study

• The knowledge that is held on evaluations was largely gained through basic formal or informal means, e.g. short courses, self-study, etc.

• M&E has become a buzz word in the public service enjoying full cabinet backing, downloads and media advertisement for M&E courses from institutions of higher learning and external service providers.

Page 17: 1 April 2009

Knowledge

Evaluation and monitoring courses offered to senior managers need to be contextualized to meet their specific needs and the type of programme they manage, to ensure effective and efficient planning, implementation and use of evaluation results in the DSD.

Page 18: 1 April 2009

Attitudes

• Factors that could inhibit demand for evaluations and the use of their findings:

– Political interference
– Legislating impact evaluation
– Ownership of evaluation projects
– Capacity building of various managers
– Evaluating programmes in an integrated manner, not in silos
– Decentralization of M&E functions and making evaluations an integral part of managers’ performance contracts

Page 19: 1 April 2009

In moving forward

• To develop a culture of evaluations:

– A capacity building strategy is critical
– Ensure that the evaluation strategy outlines the relations between the various stakeholders
– Explore innovative information techniques
– Resuscitate the M&E forum
– Respect content expertise and combine it skillfully with M&E
– Advocate for rigorous evaluations and ensure that all new policies incorporate evaluations.

Page 20: 1 April 2009

THANK YOU!!