
Basic Concepts in Monitoring and Evaluation

February 2008

Published in the Republic of South Africa by:

THE PUBLIC SERVICE COMMISSION (PSC)

Commission House

Cnr. Hamilton & Ziervogel Streets

Arcadia, 0083

Private Bag x121

Pretoria, 0001

Tel: (012) 352-1000

Fax: (012) 325-8382

Website: www.psc.gov.za

National Anti-Corruption Hotline Number:

0800 701 701 (Toll-Free)

Compiled by: Branch Monitoring and Evaluation

Distributed by Directorate: Communication and Information Services

ISBN: 978-0-621-37612-8

RP: 14/2008

CONTENTS

FOREWORD

CHAPTER 1: INTRODUCTION
1.1 The purpose of this document
1.2 Intended audience
1.3 Definition of Monitoring and Evaluation
1.4 Importance of Monitoring and Evaluation
1.5 Purposes (and uses) of Monitoring and Evaluation
1.6 Content outline

CHAPTER 2: THE CONTEXT OF MONITORING AND EVALUATION
2.1 The context of the developmental state
2.2 Location of M&E in the policy process
2.3 Location of M&E in the planning process
2.4 The Government-wide M&E System
2.5 Important institutions with a role in M&E
2.5.1 Departments at the centre of government on the national level (The Presidency; National Treasury; Department of Public Service and Administration (DPSA); Department of Provincial and Local Government (DPLG); Statistics South Africa (Stats SA); South African Management Development Institute (SAMDI))
2.5.2 Departments at the centre of government on the provincial level
2.5.3 Line departments
2.5.4 Constitutional institutions (Public Service Commission (PSC); Auditor-General; Human Rights Commission)

CHAPTER 3: EVALUATION PERSPECTIVES
3.1 The idea of evaluation perspectives
3.2 The Balanced Scorecard of Kaplan and Norton
3.3 Programme performance perspective
3.4 Financial perspective
3.5 Governance perspective
3.6 Human Resource Management (HRM) perspective
3.7 Ethics perspective
3.8 The perspectives adopted by National Treasury Guidelines

CHAPTER 4: VALUES
4.1 The value basis of monitoring and evaluation
4.2 Deriving standards of performance from values
4.3 The values and principles governing public administration
4.4 The Eight Principles of Batho Pele
4.5 Other concepts/principles expressing some dimension of public service performance

CHAPTER 5: EVALUATING PROGRAMME PERFORMANCE
5.1 Programme evaluation
5.2 Logic models
5.3 Results-Based Management
5.4 Theory-based evaluation

CHAPTER 6: APPLYING THE CONCEPTS
6.1 Focusing monitoring and evaluation
6.2 Designing monitoring frameworks
6.3 Framing evaluation questions
6.4 Examples of monitoring and evaluation of different dimensions

CHAPTER 7: SUMMARY

LIST OF SOURCES CONSULTED

INDEX

Figures
1 The policy life cycle
2 Planning and review cycle
3 Policy frameworks of the Government-wide Monitoring and Evaluation System
4 Components of the logic model
5 Example of an outcome chain

Tables
1 Deriving indicators and standards from values
2 Examples of objectives and indicators according to the logic model
3 Examples of evaluation questions: Housing programme
4 Types of monitoring and evaluation in relation to evaluation perspectives and values

Boxes
1 Example of a theory underlying the causal relationship between outputs and outcomes
2 Examples of evaluative findings with regard to the efficiency of programmes
3 Examples of evaluative findings with regard to the effectiveness of programmes
4 Examples of evaluative findings with regard to the sustainability of programmes
5 Examples of evaluative findings with regard to the scale of engagement of programmes
6 Examples of evaluative findings with regard to the targeting of programmes
7 Illustrations of types of evaluations: Housing programme
8 Examples of a theory underlying the causal relationship between outputs and outcomes
9 Illustrations of types of evaluations: Housing programme

Foreword

The growth of Monitoring and Evaluation (M&E) units in government, together with an increased supply of M&E expertise from the private sector, calls for a common language on M&E. M&E is a relatively new practice, which tends to be informed by varied ideologies and concepts. A danger for government departments is that these diverse ideological and conceptual approaches can exacerbate confusion and misalignment. The standardisation of concepts and approaches in government is particularly crucial for the enhancement of service delivery.

The PSC’s mandate requires it to monitor and evaluate the organisation and administration, and the personnel practices, of the Public Service. Taking this mandate and the need for a common understanding of concepts and approaches into account, the PSC decided to produce this text on basic M&E concepts.

A very basic question asked when a monitoring system must be developed or when an evaluation is planned is: What to monitor or evaluate, that is, what should the focus of the monitoring or the evaluation be? This document tries to answer this basic question by introducing concepts and frameworks.

Evaluation involves a value judgement. Many of the concepts discussed in the document have the status of values. The PSC has the specific constitutional responsibility to promote the application of these values in the Public Service.

This document is by no means definitive, but the PSC hopes that it will contribute to better understanding and enriched debate about the utility of M&E as a tool for improving the performance of the Public Service. We hope that it will fill the gap that currently exists for an accessible document that caters for managers in the Public Service, whilst also providing a common point of reference to the more advanced practitioner. It is hoped that readers will feel compelled to delve more deeply into the discipline.

I trust that this document helps you to deepen your interest in, and understanding of, monitoring and evaluation.

Yours sincerely

PROFESSOR STAN S SANGWENI

CHAIRPERSON: PUBLIC SERVICE COMMISSION


Chapter One

Introduction


This Chapter deals with the following:

• Purpose of this document

• Intended audience

• Definition of monitoring and of evaluation

• Importance of monitoring and evaluation

• Purposes (and uses) of monitoring and evaluation

• Content outline

1.1 The purpose of this document

The purpose of this document is to –

• clarify basic M&E concepts and ideas as they apply in the context of the SA Public Service;

• put the concepts in a framework showing the interrelationships between them;

• contribute to the development of a coherent and dynamic culture of monitoring and evaluation in the Public Service; and

• contribute to a better understanding and enriched debate about the different dimensions of public sector performance.

Hopefully this document will complement and enhance the work being done as part of the Government-Wide Monitoring and Evaluation System, particularly the Framework for Managing Programme Performance Information (published by the National Treasury in 2007) and the Policy Framework for the Government-Wide Monitoring and Evaluation System (published by the Policy Coordination and Advisory Services in the Presidency, in 2007).

1.2 Intended audience

This document is intended for use by –

• M&E practitioners;

• Senior Management in the Public Service; and

• managers of service delivery units, who produce performance information and statistics, and who are also required from time to time to reflect on and evaluate the success of their work.

Page 11: Public Policy Evaluation

3

1.3 Definition of Monitoring and Evaluation

Monitoring and evaluation have been defined as:

Monitoring

“A continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds”1.

Evaluation

“The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors.

Evaluation also refers to the process of determining the worth or significance of an activity, policy or programme. An assessment, as systematic and objective as possible, of a planned, on-going, or completed development intervention.

Note: Evaluation in some instances involves the definition of appropriate standards, the examination of performance against those standards, an assessment of actual and expected results and the identification of relevant lessons”2.

The above definitions are widely used by the development assistance community. The definitions proposed by the Policy Framework for the Government-wide Monitoring and Evaluation System3 also broadly accord with the above definitions.

Evaluation is the determination of merit or shortcoming. To make the judgement one needs a standard of what is regarded as meritorious against which to compare. Evaluation is thus a process of comparison to a standard. For instance, the statement “a high quality service has been delivered that met the needs of clients and improved their circumstances” is an evaluation. The evaluation will be better if “quality”, “needs” and “improvement in circumstances” have been quantified.

The emphasis in monitoring is on checking progress towards the achievement of an objective. A good monitoring system will thus give warning, early on in the implementation of a course of action, if the end goal will not be reached as planned. Monitoring also involves a process of comparison because actual performance is compared with what was planned or expected. A simple example is the monitoring of the completion of the planned activities of a project against the target dates that have been set for each activity.

1 Organisation for Economic Cooperation and Development (OECD). Glossary of Key Terms in Evaluation and Results Based Management. 2002.

2 Ibid.

3 Presidency (Policy Coordination and Advisory Services). 2007.


Another example, for routine activities like the processing of applications for social grants, is to monitor the number of applications received against the number completed per month. If 100 applications are received but only 90 completed, and this trend is repeated for a number of months, a backlog of unprocessed applications is building up.
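To make the arithmetic concrete, the backlog check described above can be sketched in a few lines of code. This is purely illustrative; the monthly figures are invented and do not come from any department's data.

```python
# Minimal sketch of backlog monitoring for a routine service.
# Figures are hypothetical, for illustration only.

monthly = [
    {"month": "Jan", "received": 100, "completed": 90},
    {"month": "Feb", "received": 100, "completed": 90},
    {"month": "Mar", "received": 100, "completed": 90},
]

backlog = 0
for m in monthly:
    backlog += m["received"] - m["completed"]
    print(f'{m["month"]}: backlog now {backlog} unprocessed applications')

# A backlog that grows month after month is the early warning signal
# described in the text: completion capacity is lagging behind demand.
```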

1.4 Importance of Monitoring and Evaluation

Governments are increasingly being called upon to demonstrate results. They are expected to demonstrate that they are making a real difference to the lives of their people and that value for money has been delivered. Citizens are no longer solely interested in the administration of laws but also in the services that are rendered. Critically, they are more than ever interested in outcomes, like the performance of the economy in creating jobs.

Similarly, the South African Government recognised that, to ensure that tangible results are achieved, the way that it monitors, evaluates and reports on its policies, projects and programmes, is crucial.

In his 2004 State of the Nation address the President emphasised the importance of monitoring, evaluation and reporting in government:

“The government is also in the process of refining our system of Monitoring and Evaluation, to improve the performance of our system of governance and the quality of our outputs, providing an early warning system and a mechanism to respond speedily to problems, as they arise. Among other things, this will necessitate an improvement of our statistical and information base and enhancing the capacity of the Policy Coordination and Advisory Services unit.”

The President’s statement expresses government’s commitment to carry out an obligation arising from the People’s Contract. Since then there has been an increased focus on M&E in South Africa. Several departments are putting in place better capacity for M&E or are developing M&E systems. The proposed Government-wide M&E System also emphasises the importance of monitoring and evaluation.

1.5 Purposes (and uses) of Monitoring and Evaluation

Monitoring and evaluation is used for a variety of purposes. The purpose for which it is used determines the particular orientation of each evaluation. M&E may be used for the following main purposes:

i) Management decision-making

M&E systems augment managerial processes and provide evidence for decision-making. The question that should be asked is whether the quality of the M&E information provided is appropriate and how well it feeds into existing managerial processes. M&E can never replace good management practices; rather it augments and complements management.


Some examples of M&E used in this context are decisions on resource allocation, choices between competing strategies to achieve the same objective, policy decisions, and decisions on programme design and implementation. The accuracy of information and the manner in which it is presented become critical for supporting management in their decision-making processes.

ii) Organisational learning

This is the most challenging outcome for M&E, as it presupposes that M&E results and findings help to create learning organisations. However, translating findings into “learnings” challenges even the most sophisticated of organisations.

M&E is also a research tool to explore what programme design, or solution to societal problems, will work best and why, and what programme design and operational processes will create the best value for money. M&E should provide the analysis and evidence to do the trade-offs between various alternative strategies. The information gathered should be translated into analytical, action-oriented reports that facilitate effective decision-making. The focus here is on causes of problems rather than the manifestation of problems. Learning has been described as “a continuous dynamic process of investigation where the key elements are experience, knowledge, access and relevance. It requires a culture of inquiry and investigation, rather than one of response and reporting”4. M&E produces new knowledge. “Knowledge management means capturing findings, institutionalizing learning, and organizing the wealth of information produced continually by the M&E system”5.

iii) Accountability

Public officials have a constitutional obligation to account to Parliament. They should be broadly accountable for how they spend public money, for how they have achieved the purposes for which the money was voted, and for going about their duties with a high degree of integrity.

M&E provides the information, in a structured and formalised manner, which allows scrutiny of public service activities at all levels.

This purpose of M&E may account for the perception that M&E is “policing”. Many caution that M&E should not be pursued only for the purpose of accountability, as this may create suspicion and a culture of fear; nevertheless, when dealing with public funds, accountability is critically important. Accountability is governed by the Constitution and legislation such as the Public Finance Management Act, is supported by institutions such as the Auditor-General and the Public Service Commission, and failure to meet accountability requirements is often met by sanction.

4 Kusek, J.Z and Rist, RC. 2004. Ten Steps to a Results-Based Monitoring and Evaluation System. Washington, DC: The World Bank, p. 140.

5 Kusek and Rist, p. 143.


Apart from the above main purposes of M&E, its findings are also used, across a broad audience, for the following6:

i) Soliciting support for programmes

If the success of a programme can be demonstrated by means of evaluation findings it is easier to garner support for the programme, for example continued or increased budgetary allocations for the programme or political support when important policy decisions affecting the programme must be made.

ii) Supporting advocacy

M&E results from projects and programmes generally help to make an argument for the continuation, adjustment or termination of a programme. M&E in this context provides the means for supporting or refuting arguments, clarifying issues, promoting understanding of the aims and underlying logic of policies, documenting programme implementation and thereby creating an institutional memory, and involving more people in the design and execution of the programme. Through this it plays a vital advocacy role.

iii) Promoting transparency

One of the most persuasive uses for M&E, if its findings are made available to a broader audience, is that it promotes transparency, and through this facilitates decision-making and accountability. M&E requires a willingness to be subjected to scrutiny, as findings may be published and made available to the public.

1.6 Content outline

The chapters of this text are divided as follows:

Chapter 2 explains the context of M&E in the South African Public Service.

Chapter 3 introduces the idea of evaluation perspectives. Evaluation perspectives point to the main focuses of an evaluation.

Chapter 4 emphasises the value basis of monitoring and evaluation and defines a range of values.

Chapter 5 introduces programme evaluation and discusses a few frameworks that can be used for programme evaluation.

Chapter 6 applies the concepts discussed in the foregoing chapters.

6 Kusek and Rist, p. 130.

Chapter Two

The Context of Monitoring and Evaluation


This Chapter deals with the following:

• The context of the developmental state

• Location of M&E in the policy process

• Location of M&E in the planning process

• The Government-wide M&E System

• Important institutions with a role in M&E

2.1 The context of the developmental state

When state institutions or government programmes are evaluated, such evaluation should take cognisance of the type of state these institutions and programmes are located in. The Constitution envisages that the state should be a developmental state: Section 195(1)(c) provides that “Public administration must be development-oriented.” So, state institutions or government programmes should be designed in such a manner that they comply with this principle.

The PSC expressed its views with regard to the context of the developmental state in its 2007 State of the Public Service Report:

“South Africa’s efforts to promote growth and development are being pursued within the context of building a developmental state. Without going into a detailed discussion on the different conceptions of a developmental state, it suffices to say that such a state seeks to ‘capably intervene and shepherd societal resources to achieve national developmental objectives,’ rather than simply rely on the forces of the market.

What gives rise to and shapes the nature of a developmental state depends on the context and history of a country. ... Against this background, many have quite correctly cautioned against any attempts to suggest that there is a prototype of a developmental state that can be constructed on the basis of what worked in other countries.

What then is the specific context within which to locate a South African developmental state? The PSC believes that the Constitution provides the basis on which to understand developmentalism in South Africa given how it captures the collective will and determination of her people to create a better life for themselves”7.

Monitoring and evaluation should be practised in this same context.

2.2 Location of M&E in the policy process

It is important to understand where M&E fits in the policy-making and implementation cycle. A generic policy life cycle is illustrated in Figure 1⁸. Since there are not many completely new problems that the state has never addressed before, the cycle probably starts with the review of existing policy.

7 Public Service Commission. 2007. State of the Public Service Report. Page 9.

8 Cloete, F. 2006. Policy Monitoring and Evaluation. Unpublished PowerPoint slide.


The stages of problem identification, determining policy objectives, examining policy options, and taking a policy decision are a complex process filtered through many layers of stakeholders. These stakeholders include political parties, civil society, legislative and executive arms of government, and government departments. Policy is further argued and explained in various documents, like discussion and policy documents.

The process is invariably not as sequential or rational as depicted. Identification of options, and rational evaluation of their feasibility or their costs and benefits in any precise sense, assume perfect knowledge of what will work, which is frequently not the case. Policy options emerge through political debate, and the best policies through taking a considered decision and making adjustments once the effects of a policy are seen in practice.

As soon as a policy decision has been taken, government departments initiate the processes of designing a programme that can achieve the policy objectives, detailed planning of the programme, and implementation. To ensure that implementation proceeds as planned and that the envisaged objectives are achieved, the programme is monitored and evaluated. Depending on the results achieved by the programme, the initial policy decision, or aspects of the design, implementation and resource allocation of the programme, may be reviewed.

Figure 1: The Policy Life Cycle (Problem → Policy Objectives → Policy Options → Feasibility of options → Policy Decision → Planning → Implementation → Monitoring → Evaluation → Review)

From the depiction of the cycle in Figure 1 it can be seen that the evaluation of the success of policy, and the reasons for success or failure, are critical parts of the process. This evaluation is not necessarily a formal, technical evaluation but one that is intricately part of administrative and political processes where the judgements and power of key decision-makers play the primary role. M&E mediates this by producing valid evidence for policy decisions, thereby ensuring greater objectivity. Since public policy is a set of statements that “determine what actions government will take, what effects those actions will have on social conditions, and how those actions can be altered if they produce undesirable outcomes”9, policy evaluation is also an inherent part of M&E.

2.3 Location of M&E in the planning process

Having located M&E in the policy cycle, it is also necessary to explain where it fits into the more formal planning and implementation processes of government departments. This process is illustrated by Figure 2¹⁰.

Figure 2: Planning and Review Cycle (showing the preparation of performance plans: 5 Year Strategic Plan, Medium Term Budget, Annual Performance Plan and performance plans for units and individuals; monitoring: monthly and quarterly reports and a Third Quarter Report; and review: specially commissioned evaluations, a performance information database and an annual review)

In the national sphere of government each department must produce a five-year strategic plan, which is aligned with government's strategic direction as expressed in the Medium Term Strategic Framework and the Government Programme of Action. The process starts with each new electoral cycle, when a new government produces a new programme of action. The same happens at provincial level, where strategic plans must be aligned with the provincial government programme of action, but also with national plans.

9 Fox, W, Schwella, E and Wissink, H. Public Management. Juta, Kenwyn. 1991. p. 30.

10 Ibid.



This is especially true for concurrent functions where the national government produces plans for the whole sector that guide planning and implementation at the provincial level. At a provincial level, departmental strategic plans should also be aligned with the Provincial Growth and Development Strategies11. Plans must further be aligned with local Integrated Development Plans12. Ideally, formal plans produced by government departments must be aligned across all spheres of government13.

Based on its strategic plan each department prepares its budget (called Estimates of Expenditure/ Medium Term Expenditure Framework14), which is submitted to the treasury and eventually approved by Parliament or the provincial legislature. When approved by Parliament or the provincial legislature the budget becomes law (the Appropriation Act for the year) and is a department’s mandate to spend money on the purposes for which it was voted.

Based on its strategic plan and the medium term budget, a department must also prepare an annual performance plan. These plans – the strategic plan, the budget and the annual performance plan – contain objectives, outputs, indicators and targets. A department’s annual performance plan is broken down to plans for each component in the organisation. These plans are then implemented and monitoring should start immediately. The monitoring measures progress against the objectives, outputs, indicators and targets in the plans and takes the form of monthly and quarterly reports.

Managers supplement quarterly monitoring with evaluation of success, analysis of the reasons for success or shortcoming, and action plans for improving performance. Managers’ own evaluations can further be supplemented by specially commissioned evaluations by experts. This process culminates in an annual review of performance, which feeds into a new planning cycle for a following financial year.
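As an illustration of the comparison that such monitoring reports perform, the sketch below checks actual performance against planned targets. The indicator names and numbers are hypothetical, not drawn from any actual plan.

```python
# Sketch of quarterly monitoring: actual performance compared with the
# targets set in an annual performance plan. All values are invented.

targets = {"houses_built": 2500, "subsidies_approved": 3000}
actuals = {"houses_built": 2100, "subsidies_approved": 3050}

for indicator, target in targets.items():
    actual = actuals[indicator]
    variance = actual - target
    status = "on track" if actual >= target else "behind plan"
    print(f"{indicator}: target {target}, actual {actual}, "
          f"variance {variance:+d} ({status})")
```

A report built this way directs managers' attention to the indicators that are behind plan, which is exactly the input the evaluation and annual review steps described above need.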

2.4 The Government-wide M&E System

Cabinet has mandated the Governance and Administration Cluster of the Forum of South Africa’s Directors-General to construct an overarching Government-wide Monitoring and Evaluation System.

The system is envisaged to function in two ways15:

• “It should provide an integrated, encompassing framework of M&E principles, practices and standards to be used throughout Government; and

11 A comprehensive development plan for the province, showing how various stakeholders, including government on national and provincial level, and the private sector, will contribute to the development of the province.

12 A comprehensive development plan for the local area, showing how various stakeholders, including the local government, the private sector and government departments in other spheres of government, will contribute to the development of the local area.

13 See the background section (Linking Strategic Planning to the Electoral Cycle) in: National Treasury. Framework and Templates for provincial departments for the preparation of Strategic and Performance Plans for 2005-2010, and Annual Performance Plans for the 2005 financial year. 16 August 2004.

14 In South Africa the published budget contains estimates for three years, or the medium term.

15 Presidency. 2007. Policy Framework for the Government-wide Monitoring and Evaluation System. Page 5.


• it should also function as an apex-level information system which draws from the component systems in the framework to deliver useful M&E products for its users. It is therefore a derived system that extract information from other systems throughout all spheres of government. It will therefore be very reliant on a minimum level of standardisation throughout government, as well as the quality of information from those systems”.

The GWM&E system will produce the following outputs16:

• “Improved quality of performance information and analysis at programme level within departments and municipalities (inputs, outputs and outcomes).

• Improved monitoring and evaluation of outcomes and impact across the whole of government through, eg Government Programme of Action bi-monthly Report, Annual Country Progress Report based on the national indicators, etc.

• Sectoral and thematic evaluation reports.

• Improved monitoring and evaluation of provincial outcomes and impact in relation to Provincial Growth and Development Plans.

• Projects to improve M&E performance in selected institutions across government.

• Capacity building initiatives to build capacity for M&E and foster a culture of governance and decision-making which responds to M&E findings”.

Government draws from three data terrains for M&E purposes, each of which will be subject to a dedicated framework describing what is required for them to be fully functional. The three terrains are depicted in Figure 3¹⁷.

16 Presidency. 2007. Policy Framework for the Government-wide Monitoring and Evaluation System. Page 7.

17 Presidency. 2007. Policy Framework for the Government-wide Monitoring and Evaluation System. Page 7.

Figure 3: Policy Frameworks of the Government-wide Monitoring and Evaluation System (showing the three data terrains, namely Evaluations, Programme Performance Information, and Social, Economic and Demographic Statistics, governed respectively by the Evaluations Framework, the Framework for Managing Programme Performance Information and the Statistics and Surveys Framework, under the overarching Government-wide Monitoring and Evaluation Framework)

Two of these frameworks have already been issued, namely the Framework for Managing Programme Performance Information, issued by the National Treasury in 2007, and the South African Statistical Quality Assessment Framework (SASQAF) (First edition) issued by Statistics South Africa, also in 2007.

As currently conceptualised, the Government-wide System relies on systems in departments in all spheres of government to provide the information from which the performance of the whole of government can be judged. The Policy Coordination and Advisory Services in the Presidency have already produced important whole-of-government performance reports, for example the Towards a Ten Year Review in 2003¹⁸ and the Development Indicators Mid-term Review in 2007¹⁹, based on data supplied by departments, that is, from systems that feed into the Government-wide system.

Work done by the National Treasury to improve the quality of performance indicators, and also to standardise them, has already had an impact on the quality of performance information. The efforts of the National Treasury were complemented by the Auditor-General, who, from the 2005/06 financial year, started a process of auditing the quality of performance indicators, which will eventually lead to the auditing of the quality of the performance information itself.

2.5 Important institutions with a role in M&E

A document of this nature cannot exhaustively cover the complex “machinery of government”20, but since many institutions have a role to play in M&E and since these roles often overlap, a short explanation of the roles of key institutions is given here. This short section only gives a few signposts, and can therefore not be complete in the sense of covering all the institutions that have an M&E role. Specifically, institutions at the local government level are not covered.

Explaining the roles of these institutions may be made easier by grouping them into the following categories:

• Departments at the centre of government21 on the national level.

• Departments at the centre of government on the provincial level.

• Line departments.

• Constitutional institutions.

18 The Presidency, Policy Coordination and Advisory Services (PCAS). Towards a Ten Year Review: Synthesis report on implementation of government programmes. October 2003.

19 The Presidency. Development Indicators Mid-Term Review. 2007.

20 A quick reference to the machinery of government is: DPSA. The Machinery of Government: Structure and Functions of Government. May 2003.

21 The phrase “at the centre of government” is used here to denote departments which regulate, give direction to or render services to other departments (rather than the public).


2.5.1 Departments at the centre of government on the national level

The Presidency

The Presidency supports the President in giving central direction to government policy. In this regard the Presidency is a key role player in compiling the Medium Term Strategic Framework and the Government Programme of Action. The Presidency then monitors the implementation of key government priorities. Specifically, the Presidency compiles bi-annual progress reports on the implementation of Government’s Programme of Action. It also monitors the performance of South Africa against key development indicators. The Presidency was, for example, responsible for the Ten Year Review22 and the Development Indicators Mid-term Review23. For these reports the Presidency is dependent on data that it draws from several government departments, and it is therefore essential that the M&E systems in these departments can be relied upon.

National Treasury

The National Treasury supports the Minister of Finance to determine fiscal policy. As such it must closely monitor a range of economic indicators. The National Treasury further compiles the national budget and develops and implements financial management policy. Since money is allocated by Parliament, through the budget, to achieve specific strategic objectives, and indicators and targets are set in the Estimates of Expenditure to measure the attainment of those objectives, the National Treasury plays a key role in monitoring performance against objectives. This is done by means of quarterly reports that must be submitted to the National Treasury. The National Treasury also evaluates whether the expenditure has achieved value for money. These evaluations are published in key documents like the Budget Review24, the Provincial Budgets and Expenditure Review25 and the Local Government Budgets and Expenditure Review26.

Department of Public Service and Administration (DPSA)

The DPSA is responsible for the macro-organisation of the Public Service and for the development of policy on the functioning of the Public Service. It is further responsible for Human Resource Management policy, the determination of conditions of service for the Public Service, the development of policy, regulations, norms and standards for Information Management and the use of Information Technology in the Public Service, and generally for the promotion of a Public Service that conforms to all the values governing public administration listed in Section 195 of the Constitution. As such the DPSA must monitor and evaluate the performance of the Public Service, especially from the perspective of sound Human Resource Management.

22 The Presidency, Policy Coordination and Advisory Services (PCAS). Towards a Ten Year Review: Synthesis report on implementation of government programmes. October 2003.

23 The Presidency. Development Indicators Mid-Term Review. 2007.

24 National Treasury. Budget Review 2007. 21 February 2007.

25 National Treasury. Provincial Budgets and Expenditure Review 2003/04 – 2009/10. September 2007.

26 National Treasury. Local Government Budgets and Expenditure Review 2001/02 – 2007/08. October 2006.


Department of Provincial and Local Government (DPLG)

The DPLG develops policy on the structure and functioning of provincial and local government, and as such monitors and evaluates the performance of provincial and local government. Since local government is a critical institution for the delivery of basic services, the DPLG’s role in monitoring and evaluating the financial health and service delivery of local government is a very important one.

Statistics South Africa (Stats SA)

Stats SA manages the national statistics system that collects, analyses and publishes a range of demographic, social and economic statistics. It also collects statistics on a set of key development indicators. Without a sound base of reliable statistics, planning of government services, and M&E at the level of sophistication that is required of government, would not be possible.

South African Management Development Institute (SAMDI)

SAMDI provides or commissions training in M&E for the Public Service.

2.5.2 Departments at the centre of government at the provincial level

At provincial level the key departments with a role in M&E are the offices of the premier and provincial treasuries. From an M&E perspective, key strategic objectives for the province are set in the Provincial Growth and Development Strategy and the Provincial Government Programme of Action. Offices of the premier play a key role in setting strategic direction for the province and monitoring and evaluating the performance of the provincial government departments on the delivery of the growth and development strategy and other provincial priorities.

2.5.3 Line departments

Line departments implement government policy in their specific functional areas. As part of this role they must monitor and evaluate the implementation of policy, the impact of policy, as well as the level and quality of service delivery.

A critical role is played by the national policy departments for concurrent functions because they must develop policy as well as norms and standards for M&E systems that will be used throughout the sector. This includes a standard set of performance indicators for the sector so that performance can be compared across the sector.

2.5.4 Constitutional institutions

Though the functions of the constitutional institutions may often overlap with those of the above categories of institutions, their roles differ in the sense that they do monitoring and evaluation independently of government and report not to the Executive but to Parliament.


They therefore have a special role to play to protect the values and principles of our democracy. Because they are independent, they may arrive at different conclusions about the performance of government and the Public Service, and as such may bring different analyses and insights to bear on public policy.

Public Service Commission (PSC)

The PSC has the constitutionally prescribed function of promoting, in the Public Service, the values and principles governing public administration listed in section 195 of the Constitution. It must further monitor and evaluate the organisation and administration of the Public Service and can propose measures to improve the performance of the Public Service. It must also provide to Parliament an evaluation of the extent to which the values and principles governing public administration have been complied with in the Public Service. Based on these functions the PSC aims to establish itself as a leader in the monitoring and evaluation of the performance of the Public Service. It is in the context of these functions that the PSC is publishing this document.

Auditor-General

The Auditor-General’s role is to audit the accounts and financial statements of national and provincial departments as well as municipalities and any other government institution or accounting entity. These audits include financial audits – to certify that the institution’s financial statements fairly represent the financial position of the institution – and regularity audits – to certify that the institution has complied with all relevant regulations and prescripts. Important from an M&E point of view is that the Auditor-General also does performance audits – to test whether money has been spent economically, efficiently and effectively by the audited entity. The Auditor-General has also started to do audits and express an opinion on the quality of the performance indicators that departments publish in their strategic plans and in the Estimates of National Expenditure. The aim is, in future, also to audit performance information, so that the Auditor-General can express an opinion on the veracity of such performance information just as it expresses opinions on the veracity of financial information. It will be a big step forward for the practice of M&E if secondary users of performance information could have confidence in such information, based on the knowledge that its accuracy has been tested by the Auditor-General.

Human Rights Commission

The Human Rights Commission was formed to promote a culture of human rights, respect for and protection of the rights enshrined in the Bill of Rights, and to monitor and evaluate the extent to which human rights are observed in South Africa. Since many of these rights are socio-economic rights, the Human Rights Commission also plays a role in monitoring and evaluating service delivery by government27. In cases where the rights of individuals and communities have been violated, the Commission has the power to secure appropriate corrective action.

27 An important report in this regard is Human Rights Commission. 2006. 6th Economic and Social Rights Report.

Chapter Three

Evaluation Perspectives


This Chapter deals with the following:

• The idea of evaluation perspectives

• The Balanced Scorecard of Kaplan and Norton

• Programme performance perspective

• Financial perspective

• Governance perspective

• Human Resource Management perspective

• Ethics perspective

• The perspectives adopted by National Treasury Guidelines

3.1 The idea of evaluation perspectives

The subject of an evaluation (the topic, the entity to be evaluated) may be the Public Service, a system, policy, programme, several programmes, a service, project, a department or unit within a department, a process or practice. The subject of an evaluation may also be the whole of government or the country. These entities are complex, or multi-dimensional. For analytical purposes, some framework is needed to identify the dimensions that will be focused on.

All these entities are intended to do something or to result in something. Thus, their performance can be evaluated.

Evaluation perspectives point to the main focuses of an evaluation. By simply asking a few questions one can see that the performance of the Public Service can be viewed from many different perspectives, for example:

• How was the money voted by Parliament spent? Was there any wastage? Was value for money obtained?

• Were government’s policy objectives achieved?

• Is the Public Service corrupt?

• Are people treated fairly and courteously in their interaction with the Public Service?

When complex, multi-dimensional subjects are evaluated the outcome will depend largely on the perspective one adopts, or what dimensions of the complex reality are emphasised, or what questions are asked. The type of evaluation and the methodologies used, and consequently the outcomes of the evaluation, depend on the questions asked.

The central idea behind evaluating performance from different perspectives is to use a balanced set of perspectives for the evaluation. For example, evaluation of the achievement of policy outcomes (programme performance perspective) may be balanced by evaluation from a financial perspective (to evaluate, for instance, whether value for money has been obtained).


An organisation may be very efficient in delivering a specific output (business process perspective), but not in adapting to changes in the policy environment or the specific needs of citizens (learning and growth perspective). During particular stages of implementation of a programme, different perspectives may be emphasised. When a programme is still in the design phase or when it comes up for policy review, policy analysis and review are appropriate. Outside periods of policy review, issues of implementation are more appropriate, to make sure that programmes do not fail because of poor implementation.

Evaluation of the performance of the Public Service from a particular perspective employs analytical frameworks, models, theories and methodologies unique to the particular perspective. For instance, financial analysis makes use of financial information prepared according to specific accounting practice and the information is analysed using accepted methods and models. Evaluation from a human resource management perspective involves specific frameworks regarding the objectives of human resource management and the practices employed to achieve those objectives. Similarly, the evaluation of service delivery or programme performance also employs specific frameworks. Some frameworks for the evaluation of programme performance are introduced in Chapter 5.

This document does not advocate a specific set of perspectives, but in this section examples of possible perspectives are discussed to illustrate how perspectives can aid in framing an evaluation (deciding what to evaluate, the purpose of the evaluation, and the main evaluation questions). Since the idea of using different perspectives can be credited to Kaplan and Norton28, their four perspectives are discussed first (as illustration, not prescription). Important perspectives in the context of the Public Service are discussed next. Lastly, we turn to perspectives prescribed or suggested by National Treasury guidelines to show that the idea of perspectives has already found its way into Public Service prescripts.

3.2 The Balanced Scorecard of Kaplan and Norton

Kaplan and Norton have proposed the following four perspectives for evaluating the performance of an organisation (though their focus was on the private sector):

• “Financial Perspective: To succeed financially, how should we appear to our shareholders?

• Customer Perspective: To achieve our vision, how should we appear to our customers?

• Learning and Growth Perspective: To achieve our vision, how will we sustain our ability to change and improve, and adapt to changes in the environment and new challenges?

• Internal Business Process Perspective: To satisfy our shareholders and customers, what business processes must we excel at? What are the unique competencies the organisation should have?”

28 Kaplan, RS and Norton, DP. 1996. The Balanced Scorecard. Harvard Business School Press, Boston, Massachusetts.


A business may not get early warning about threats to its sustainability if it is evaluated only from a financial perspective (especially a short-term one). Realising this, Kaplan and Norton developed the alternative perspectives from which the performance of a business may be evaluated, to give a more balanced view of such performance.

3.3 Programme performance perspective

A programme is a set of government activities that deliver the products of government. These products are complex outcomes and include governance, justice, safety and security, development impetus, social change and services. Evaluation from a programme performance perspective will try to answer questions such as whether government objectives have been achieved and whether they could have been achieved better by designing the programme differently or implementing it better.

Effective monitoring and evaluation of government programmes requires careful analysis of the key factors that are relevant to the successful delivery of the programme, and of how these relate to each other. Different approaches are available to facilitate such analyses. Evaluating programme performance is discussed in Chapter 5.

3.4 Financial perspective

Monitoring and evaluation from a financial perspective happens through monthly and annual financial statements. Financial statements try to answer the following questions: Was money spent as appropriated, has the income that accrued to government been collected, were assets protected, can the department meet its liabilities and has the department adhered to sound financial controls?

These financial reports currently primarily give managers updates on progress with expenditure as measured against the budget. Such statements are prepared in terms of Generally Recognised Accounting Practice as prescribed in terms of the PFMA. Annual financial statements are also audited by the Auditor-General, so that a high degree of confidence can be attached to financial figures.

Since financial accounting answers very basic questions, some departments are trying to introduce management accounting, with the tasks of analysing and interpreting financial information, costing services, advising managers on the financial implications of strategic decisions, advising on choosing between alternative strategies, and directing attention to and helping managers solve problems. So, as with other types of M&E, the process begins with monitoring and answering basic, pre-set questions; the more questions are asked, the more penetrating the evaluation becomes.
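A minimal sketch of the kind of trade-off analysis management accounting adds, comparing the unit cost of two alternative delivery strategies (the strategy names and figures are hypothetical):

```python
# Hypothetical comparison of two delivery strategies on unit cost,
# the sort of costing exercise management accounting supports.

strategies = {
    "in-house delivery": {"cost": 5_000_000, "units": 900},
    "outsourced delivery": {"cost": 5_500_000, "units": 1_100},
}

for name, s in strategies.items():
    unit_cost = s["cost"] / s["units"]
    print(f"{name}: R{unit_cost:,.0f} per unit delivered")
```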


3.5 Governance perspective

The PSC views good governance as compliance with all the values listed in section 195 of the Constitution29.

It has been argued, for instance by Cloete30, that “A coherent good governance measurement programme should be developed as a matter of urgency as an integral part of a more encompassing M&E programme in South Africa”.

“Governance” has been conceptualised by Olowu and Sako31 as “a system of values, policies and institutions by which a society manages its economic, political and social affairs through interaction within and among the state, civil society and private sector”.

Government’s Ten Year Review32 used indicators grouped under the following categories to measure governance:

• Voice and accountability.

• Political instability and violence.

• Government effectiveness.

• Regulatory quality.

• Rule of law.

• Ethics.

It is clear that good governance is a specific perspective and that the monitoring and evaluation of the performance of South Africa, the Government or the Public Service from this perspective requires unique approaches, methodologies, indicators and data sources.

3.6 Human Resource Management (HRM) perspective

Similar to all the other perspectives discussed in this section, the Human Resource Management perspective also requires the application of unique approaches to M&E.

An evaluation from an HRM perspective requires evaluation of both whether –

• HRM objectives have been achieved; and

• good human resource management practice is applied in the Public Service.

29 See State of the Public Service Report for several years.

30 Cloete, F. Measuring good governance in South Africa. November 2005. (Unpublished article.)

31 Olowu, D and Sako, S. Better Governance and Public Policy: Capacity Building and Democratic Renewal in Africa. Kumarian Press, Bloomfield, USA. 2002. As quoted by Cloete, F. Measuring good governance in South Africa. November 2005.

32 The Presidency, Policy Coordination and Advisory Services (PCAS). Towards a Ten Year Review: Synthesis report on implementation of government programmes. October 2003.


HRM objectives include, as examples:

• The recruitment of enough skilled staff to meet service delivery requirements.

• Achieving a status of being a good employer.

• A representative Public Service.

• The creation of “a Public Service that meets the highest professional standards, that is proud of the fact that it exists to serve the people, that is patriotic and selfless, that fully understands the historic significance of the esteemed position it occupies as one of the principal architects of a non-racial, non-sexist, prosperous and egalitarian South Africa”33.

The need to evaluate whether good HRM practice is applied in the Public Service is embodied in the following constitutional principles:

Good human resource management and career development practices, to maximise human potential, must be cultivated.

...employment and personnel management practices (must be) based on ability, objectivity, fairness...34

3.7 Ethics perspective

Evaluation from an ethics perspective will require, on the one hand, an evaluation of certain ethics outcomes, such as an actual change in the conduct of public servants to better comply with the Code of Conduct for Public Servants35 or a lower incidence of corruption, and, on the other hand, an evaluation of whether adequate measures have been put in place to ensure such outcomes. These measures have been called an ethics infrastructure36 and include:

• Anti-corruption strategies and fraud prevention plans.

• Risk assessment.

• Activities to promote the Code of Conduct.

• Minimum anti-corruption capacity.

• Investigation procedures and protocols.

• Effective reporting lines (whistle blowing).

• Inter-agency cooperation.

33 Letter from the President: Building a Developmental Public Service, published in ANC Today, Volume 7, No 19, 18-24 May 2007.

34 Section 195(1)(h) and (i).

35 Public Service Regulations, Chapter 2.

36 Public Service Commission. Ethics Survey. 2001; and Assessment of Professional Ethics in the Free State. March 2007.


• Management of conflicts of interest.

• Dealing with financial misconduct.

• Assignment of responsibility for the ethics function in the organisation.

• Pre-employment screening.

• Ethics training.

3.8 The perspectives adopted by National Treasury Guidelines

In its Framework and Templates for provincial departments for the preparation of Strategic and Performance Plans37 the National Treasury proposes that departments should describe strategic goals for each of the following areas (which can be viewed as perspectives):

• Service delivery

• Management/organisation

• Financial management

• Training and learning

By implication a set of perspectives was also chosen when the framework for annual reports was prescribed for the South African Public Service, because the framework requires departments to report on their performance under specific headings. The annual report is an important accountability instrument and it contains performance information. When thinking about the content of the Annual Report, the National Treasury38 had to decide what perspectives (which subsequently became chapters in the report) to include.

If a standard set of perspectives were adopted for the South African Public Service, it would make sense to amend the standard chapters of the annual report to make provision for all the perspectives of the framework that may be agreed upon. Currently the Treasury Guidelines39 make provision for the following chapters of the annual report:

• General Information

• Programme (or service delivery) Performance

• Report of the Audit Committee

37 National Treasury. Framework and Templates for provincial departments for the preparation of Strategic and Performance Plans for 2005-2010, and Annual Performance Plans for the 2005 financial year. 16 August 2004.

38 Guide for the Preparation of Annual Reports (for various financial years). The Guide was developed in collaboration with the DPSA and also took into consideration the recommendations of the Public Service Commission in its report Evaluation of departments’ annual reports as an accountability mechanism. 1999.

39 National Treasury. Guide for the Preparation of Annual Reports for the year ended 31 March 2007.


• Annual Financial Statements

• Human Resource Management

The annual report should be a summary of the M&E information available in the department. Underlying the information in the annual report should be proper M&E systems and evaluations of the department as an institution and of the programmes it offers. The annual report should focus on performance and, to give a balanced view, should include different perspectives and should anticipate key questions that the department’s stakeholders may have.

Chapter Four: Values

This Chapter deals with the following:

• The value basis of monitoring and evaluation

• Deriving standards of performance from values

• The values and principles governing public administration

• The eight principles of Batho Pele

• Other concepts/principles expressing some dimension of public service performance

4.1 The value basis of monitoring and evaluation

Since evaluation is the determination of merit or shortcoming, a standard of good performance, or merit, against which to compare needs to be defined. The concepts explained in this chapter all serve to enrich the idea of performance of government departments or programmes: they help to define what performance is. Since, as has been emphasised, government services are multi-dimensional, many concepts are used to express important aspects of institutional and programme performance.

Values help to define what is regarded as a good standard of public administration or a good standard of performance. Values include the concepts of effectiveness, efficiency, responsiveness to needs and development orientation. In fact, these are not simply concepts but values and principles that must be adhered to. Section 195 (1) of the South African Constitution states that “public administration must be governed by the democratic values and principles enshrined in the Constitution” and then lists nine values.

These concepts are related to the logical framework discussed in Chapter 5 because many of them can be explained in terms of the components of the logical framework. For instance, efficiency is the relationship between outputs and inputs, and effectiveness the relationship between outputs and outcomes.

The values provide additional perspectives from which public administration may be evaluated. For example, the principle of responsiveness to needs prompts one to evaluate performance from the perspective of the needs of clients, or the principle of development orientation requires that the fundamental nature of the Public Service as an instrument for development should be evaluated.

In South Africa the value basis of public administration, and consequently of monitoring and evaluation, has been laid down in authoritative documents – in the first instance in the Constitution, but also in government policy documents. The values discussed in this chapter have consequently been taken from the Constitution and the Policy on the Transformation of Public Service Delivery (the Batho Pele policy). Other values are, however, also frequently used in evaluation. Some of these have been gleaned from the monitoring and evaluation literature and are also included in this chapter.

Page 35: Public Policy Evaluation

27

4.2 Deriving standards of performance from values

Before the values can be used to measure performance they need to be stated in measurable terms. Since the values are rich concepts they have many dimensions. Practically it is only possible to measure a few dimensions that say something important about whether the value is complied with. Compliance with the values can be measured by means of indicators. An indicator is either a measure of performance along a specified dimension of the value or a normative statement that expresses some aspect of the value that must be complied with.

Another way to explain the measurement of compliance with values is to say that several criteria can be applied to measure compliance with a value and for each criterion a specific standard needs to be defined. This process of deriving standards from values can be illustrated by the examples in Table 1. The examples are taken from the Public Service Commission’s Monitoring and Evaluation System40.

Table 1: Deriving indicators and standards from values

Value: Efficient, economic and effective use of resources must be promoted.

Criteria/Indicators:

1. Expenditure is according to budget.

2. Programme outputs are clearly defined and there is credible evidence that they have been achieved.

Standards:

1. Expenditure is as budgeted and material variances are explained.

2. More than half of each programme's service delivery indicators are measurable in terms of quantity, quality and time dimensions.

3. Outputs, service delivery indicators and targets are clearly linked with each other as they appear in the strategic plan, estimates of expenditure and the annual report for the year under review.

4. Programmes are implemented as planned or changes to implementation are reasonably explained.

5. A system to monitor and evaluate programmes/projects is operative.

40 Public Service Commission. 2006. Public Service Monitoring and Evaluation System: Research Guide, Assessment Questionnaire and Reporting Template. 10 July 2006.


Value: Public administration must be development-oriented.

Criteria/Indicators:

The department is effectively involved in programmes/projects that aim to promote development and reduce poverty.

Standards:

1. Beneficiaries play an active role in the governance, designing and monitoring of projects.

2. A standardised project plan format is used showing:

a) All relevant details including measurable objectives.

b) Time frames (targets).

c) Clear governance arrangements.

d) Detailed financial projections.

e) Review meetings.

f) Consideration of issues such as gender, the environment and HIV/AIDS.

3. Poverty reduction projects are aligned with local development plans.

4. Organisational learning takes place.

5. Projects are successfully initiated and/or implemented.
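One way to operationalise this derivation is to capture a value, its criteria/indicators and its standards as structured data against which an assessor records findings. The sketch below is illustrative only; the abbreviated wording and the simple scoring rule are assumptions, not part of the PSC's actual system:

```python
# Illustrative sketch: the value -> criteria -> standards hierarchy of
# Table 1 as data, with a simple share-of-standards-met score.

value_framework = {
    "Efficient, economic and effective use of resources": {
        "criteria": [
            "Expenditure is according to budget",
            "Programme outputs are clearly defined and achieved",
        ],
        "standards": [
            "Expenditure is as budgeted and material variances are explained",
            "More than half of service delivery indicators are measurable",
            "Outputs, indicators and targets are clearly linked",
            "Programmes are implemented as planned or changes explained",
            "A monitoring and evaluation system is operative",
        ],
    },
}

def compliance_score(met_standards, framework, value):
    """Share of a value's standards that an assessor marked as met."""
    standards = framework[value]["standards"]
    met = sum(1 for s in standards if s in met_standards)
    return met / len(standards)

met = {"Expenditure is as budgeted and material variances are explained"}
score = compliance_score(met, value_framework,
                         "Efficient, economic and effective use of resources")
print(f"Standards met: {score:.0%}")  # -> 20%
```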

4.3 The values and principles governing public administration

A high standard of professional ethics

On an outcome level, this principle relates to compliance with ethical principles as contained in the Code of Conduct for Public Servants, and on a process level, whether ethics management practices or an ethics infrastructure (such as codes of conduct, ethics evaluations, management of conflict of interest, assignment of responsibility for the ethics function, a mechanism for reporting unethical behaviour, ethics training, pre-employment screening, the inclusion of ethical behaviour in personnel performance assessments and risk assessment) have been established in departments.


Efficiency

The relationship between inputs and outputs, that is, delivering more output for the same amount of input or the same output for less input. See the examples of evaluative findings on the efficiency of Income Generating Projects, the Expanded Public Works Programme and the Land Reform Programme in Box 1.

Box 1. Examples of evaluative findings with regard to the efficiency of programmes

Income Generating Projects

• In terms of the cost per job, a study estimated a figure of about R16 000 for income generating projects, versus R18 000 for public works and R25 000 upwards for land redistribution.

Expanded Public Works Programme

• On a project level, there are complaints about a higher time, cost, and administrative (reporting and organising) burden placed on implementers. To this extent it would not appear that the EPWP is particularly efficient.

• The cost of the EPWP (that is, the implementation overheads carried by the national Department of Public Works) is R421 per job created. Compared to the average income per beneficiary of R3 446, this overhead appears modest.

Land Reform Programme

• The administrative cost of delivering land reform (the cost of the national office and the provincial offices) compared to the amount for land reform grants (the transfers to households) was just over 30% between 2003/04 and 2005/06, rises to over 60% for the next two financial years, and then falls to below 10% as the programme is accelerated. This is quite expensive and, bearing in mind that it generally covers only property acquisition and transfer, it is unclear what the actual costs are once post-settlement support is also taken into account.
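The findings in Box 1 rest on simple unit-cost arithmetic: implementation cost divided by the number of jobs produced, and overhead compared to the income delivered. A short sketch using the figures quoted above (rounded, for illustration only):

```python
# Back-of-envelope efficiency arithmetic from the Box 1 figures.

epwp_overhead_per_job = 421    # Rand, implementation overhead per job created
income_per_beneficiary = 3446  # Rand, average income per beneficiary

overhead_ratio = epwp_overhead_per_job / income_per_beneficiary
print(f"EPWP overhead per rand of beneficiary income: {overhead_ratio:.1%}")
# -> about 12%, which is why the overhead appears modest

cost_per_job = {"income generating projects": 16000,
                "public works": 18000,
                "land redistribution": 25000}  # lower-bound estimate
cheapest = min(cost_per_job, key=cost_per_job.get)
print(f"Lowest estimated cost per job: {cheapest}")
```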

Economy

Procuring inputs at the best price and using them without wastage.


Effectiveness

How well the output and outcome objectives of the department or programme are achieved and how well the outputs produce the desired outcomes. Effectiveness also has to do with alternative strategies to produce the same outcome, that is, which of the available strategies will work best and cost least. See the examples of evaluative findings on the effectiveness of Income Generating Projects, the Expanded Public Works Programme and the Land Reform Programme in Box 2.

Box 2. Examples of evaluative findings with regard to the effectiveness of programmes

Income Generating Projects

• In a study commissioned by ... it was noted that the projects lack the financial records needed to assess the income they generate. Income generating projects appear frequently to fall short of their intentions. A fair share of income generating projects are either perpetually dependent on external support, or fail altogether.

• The advantages enjoyed through participation in these projects are often quite modest, but nonetheless real. The question then becomes what one's definition of success should be.

Expanded Public Works Programme

• The programme created 435 300 jobs by 2006/07 against a target of 1 000 000. However, counting difficulties cast considerable doubt on these figures.

• There are cases where training is not provided at all, is not accredited, does not focus on hard skills, or does not match skills shortages.

• In many cases no additional jobs were created through the introduction of labour-intensive methods. In fact it would seem that the nature of the projects (given their small scale) was labour intensive in the first place and no additional jobs were really created.

• Service delivery remains at the heart of the EPWP in the sense that the programme is premised on the idea that necessary goods and services should be delivered, and to a good standard. On the whole, the projects did appear to deliver on this objective.


Box 2 (Continued)

Land Redistribution Programme

• The effectiveness of land redistribution in reducing poverty is not known with certainty, mainly because conclusive studies have not been done recently. While these beneficiaries are typically not reporting an 'income' or 'making a profit', many are clearly in the process of establishing new, sustainable livelihoods. These livelihoods may not seem 'successful' when judged by the standard of commercial agriculture, but they have important poverty reduction impacts. There is ample evidence that many beneficiaries do benefit, albeit not necessarily to the extent indicated by project business plans. The gains enjoyed by the projects may well be insufficient to raise the beneficiaries above a given poverty line, but they are nonetheless real. A pertinent issue is therefore the definition of success one applies. Certainly it appears that the projects are not successful when judged in commercial terms or against the expectations of the projects' own business plans. Despite some progress, an enormous gulf remains between the most successful of land reform farmers and the average commercial farmer, to the extent that there is little or no evidence of true integration into the commercial farming sector.

Development orientation

On an outcome level, this principle is interpreted to mean that public administration must take account of poverty and its causes and should seek to address them. In practice this means that the daily activities of public administration should seek to improve citizens’ quality of life, especially those who are disadvantaged and most vulnerable. On a process level, development orientation means the use of participatory, consultative approaches to development interventions; that such interventions should be community based, responsive and demand driven; that these interventions should be integrated with local development plans; that the efforts of various departments should be integrated and that community and societal resources should be harnessed; that high standards of project management are maintained; that such interventions are closely monitored and evaluated; and that mechanisms to facilitate organisational learning are consciously employed41.

41 Public Service Commission. State of the Public Service Report. 2004, 2005 and 2006.


Services must be provided impartially, fairly, equitably and without bias

Impartiality demands that factors such as race, ethnicity, political affiliation and family connections should play no part in the delivery of services. Fairness demands that account be taken of people’s context and their living situations. In the context of the application of the Administrative Justice Act fairness means that fair procedures be applied when decisions are taken, for example, that people affected by a decision should be given an opportunity to make representations and that reasons for the decision must be given. Equity relates to even-handedness and fair play and equal access to services. This might require that certain groups be given preferential access to redress historical inequalities, which creates a dynamic tension with the principle of impartiality. Operating without bias calls for conscientious consideration of all pertinent factors when making decisions and a preparedness to explain the basis on which decisions were made42.

Responsiveness

From the perspective of the organisation, this is the ability to anticipate and adapt to changed circumstances; from the perspective of citizens, it is whether people's (changing) needs are met. It means that people's unique circumstances in their communities are taken into account in the design of programmes. It implies a demand-driven approach to service delivery that really studies and responds to the needs of specific people, families and communities, who live in specific localities and circumstances, have unique needs, values, abilities and experiences, and have unique problems to contend with.

Participation in policy making

This principle requires that ordinary people be consulted and involved in all phases of government programmes, from design through to implementation and evaluation, so that their needs will be properly articulated and addressed43.

42 PSC. State of the Public Service Report. 2004.

43 Ibid.


Public administration must be accountable44.

Accountability involves taking responsibility for one's actions and is a corollary of being given a mandate by the electorate. It means that one has agreed to be held up to public scrutiny so that decisions and the processes used to reach them can be evaluated and reviewed. Accountability also has a more technical dimension relating to the ability to account for resources and their use in achieving the outcomes the money was intended for. Adherence to generally recognised accounting practice45 is one of the most useful tools in this regard.

Transparency must be fostered by providing the public with timely, accessible and accurate information

This principle requires that the public be provided with information they can use to assess government performance and reach their own conclusions. It also requires that the information be provided in an understandable and accessible format46.

Good Human Resource Management and career development practices, to maximise human potential, must be cultivated.

This principle can be subdivided into three aspects: The idea of best Human Resource Management Practice; creating a workplace in which staff members have a clear sense of being nurtured and supported; and the central concept of the maximisation of human potential47.

Public administration must be broadly representative of the South African people, with employment and personnel management practices based on ability, objectivity, fairness, and the need to redress the imbalances of the past to achieve broad representation.

This principle establishes the criterion of ability, or merit, or performance in employment and advancement in the public service and the objective and fair assessment of such ability, while balancing this with the need for redress of past imbalances and broad representation48.

44 Ibid.

45 PFMA, Chapter 11.

46 PSC. State of the Public Service Report. 2004.

47 Ibid.

48 Ibid.


4.4 The Eight Principles of Batho Pele49

These principles give more perspectives from which the Public Service or government service delivery programmes could be evaluated. Some of them overlap with the above constitutional principles.

Consultation: Citizens should be consulted about the level and quality of the public services they receive and, wherever possible, should be given a choice about the services that are offered.

Service Standards: Citizens should be told what level and quality of public service they will receive so that they are aware of what to expect.

Access: All citizens should have equal access to the services to which they are entitled.

Courtesy: Citizens should be treated with courtesy and consideration.

Information: Citizens should be given full, accurate information about the public services they are entitled to receive.

Openness and Transparency: Citizens should be told how national and provincial departments are run, how much they cost and who is in charge.

Redress: If the promised standard of service is not delivered, citizens should be offered an apology, a full explanation and a speedy and effective remedy, and when complaints are made, citizens should receive a sympathetic, positive response.

Value for Money: Public services should be provided economically and efficiently in order to give citizens the best possible value for money.

49 Department of Public Service and Administration. White Paper on Transforming Public Service Delivery, 1997 (Government Gazette No 18340 of 1 October 1997.)


4.5 Other concepts/principles expressing some dimension of public service performance

Relevance: The extent to which the outputs and outcomes address the real needs, problems and conditions faced by citizens and communities, or the extent to which services continue to be wanted by citizens.

Appropriateness: Whether the most sensible means and level of effort are employed to achieve the desired outcome. Like effectiveness, the concept relates to the relationship between outputs and outcomes and whether alternative means to achieve the same outcome have been considered. Also important is the service delivery model used to deliver a particular service or produce a desired outcome.

Sustainability: From an organisational perspective, the ability of the organisation to continue to produce quality outputs into the future; from the perspective of beneficiaries of services, whether the desired outcome will be maintained into the future, that is, that there will be no relapse to the previous bad state of affairs even though the development assistance might have been completed. See the examples of evaluative findings on the sustainability of Income Generating Projects and the Expanded Public Works Programme in Box 3.


Box 3. Examples of evaluative findings with regard to the sustainability of programmes

Income Generating Projects

• A common threat to sustainability is the challenge of finding markets, in particular breaking out beyond small marketing outlets in nearby communities. Key informants indicate that one of the key weaknesses of small enterprises in general is not identifying good market opportunities in the first place, but rather returning to the same familiar menu of enterprises, i.e. poultry and vegetables.

• Many of the projects find it difficult to service loans.

Expanded Public Works Programme

• Linking beneficiaries to job or enterprise opportunities beyond the programme is an essential objective of the EPWP but remains unmeasured. The EPWP sets a very modest 14% target for this, but there appears to be little evidence of the extent to which it is being achieved at all.

• Whether the important objective of broader acceptance of the EPWP principles of using labour-intensive methods in the delivery of goods and services is being achieved has not been established by any evaluation up to now.

Empowerment: The degree to which people are given the means to improve their own circumstances; also, the degree to which the beneficiaries of government programmes are given the opportunity to influence the design and implementation of the programme, and the degree to which they are given choices in government's service offering or in how the programme is implemented.

Acceptance: The degree to which citizens are satisfied with the service.

Scale of engagement: The magnitude of the initiative relative to the size of the target group or the problem that government is attempting to address. (Scale is a value question because it relates to the allocation of resources to competing demands and the priority government attaches to all the programmes that compete for the same resources.) See the examples of evaluative findings on the scale of engagement of Income Generating Projects and the Land Redistribution Programme in Box 4.


Box 4. Examples of evaluative findings with regard to the Scale of Engagement of programmes

Income Generating Projects

• The total number of income generating projects (as opposed to micro-enterprises) appears to be quite small, as is the total number of beneficiaries of these projects. The likelihood is that the total number of current IGP beneficiaries is no more than 100 000. This owes more to institutional constraints than to, say, fiscal constraints.

Land Reform Programme

• The report estimates that about 80 000 households have benefited from land redistribution since 1995. On the face of it, the scale of engagement of the land redistribution programme is small relative to land demand. Recent work conducted on behalf of government estimates that, to achieve the 30% target in each province, and taking into account the amount of land likely to be transferred through restitution (which in some provinces is sufficient on its own to meet the 30% target) as well as the likely contribution of municipal commonage projects, there will need to be about 24 000 additional LRAD projects. Assuming the typical profile of LRAD projects does not change, this implies about 210 000 additional beneficiary households. As for government's precise targets regarding redistribution, there is almost nothing beyond the 30% land target, that is, nothing in terms of how many people could and should benefit.

Targeting: The success with which the intervention is directed at those who need it most or for whom it is meant. See the examples of findings of an evaluation of the targeting of Income Generating Projects in Box 5.

Box 5. Examples of evaluative findings with regard to the targeting of programmes

Income Generating Projects

• To the extent that many IGPs are not initiatives of government departments or their agents at all, but rather are spontaneously organised and thereafter attract or seek government support, they are not actively targeted at all, nor is there much evidence of prioritising or purposive screening.

• It would appear that, despite the lack of much deliberate targeting of IGPs, the significant strength of this programme type is that it tends to reach the poor more consistently than non-project-based SMME support initiatives.

Chapter Five: Evaluating Programme Performance

This Chapter deals with the following:

• Programme evaluation

• Logic models

• Results-Based Management

• Theory-based evaluation

5.1 Programme evaluation

A programme is a set of government activities that deliver the products of government. These products are complex outputs and outcomes and include governance, justice, safety and security, development impetus, social change and services.

Since government services are delivered through programmes, service delivery performance can be viewed as a subcomponent of programme performance.

A programme evaluation can be defined as the evaluation of the success of a programme and how the design and implementation of the programme contributed to that success.

A programme evaluation can include an impact evaluation. An impact evaluation has been defined as:

“Impact evaluation is the systematic identification of the effects – positive or negative, intended or not – on individual households, institutions, and the environment caused by a given development activity such as a programme or project. Impact evaluation helps us better understand the extent to which activities reach the poor and the magnitude of their effects on people’s welfare. Impact evaluations can range from large scale sample surveys in which project populations and control groups are compared before and after, and possibly at several points during programme intervention; to small-scale rapid assessment and participatory appraisals where estimates of impact are obtained from combining group interviews, key informants, case studies and available secondary data”50.
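The "before and after" comparison between project populations and control groups that this definition mentions boils down to a difference-in-differences calculation. A minimal sketch, with all figures invented for illustration:

```python
# Difference-in-differences: the change in the project population minus
# the change in the control group. Invented figures for illustration.

treated_before, treated_after = 1200.0, 1550.0  # mean household income (R/month)
control_before, control_after = 1180.0, 1300.0

impact_estimate = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated programme impact: R{impact_estimate:.0f} per household per month")
# -> R230: the treated group's gain over and above the general trend
```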

Effective evaluation of government programmes requires careful analysis of the key factors that are relevant to the successful delivery of the programme, and of how these relate to each other. Key elements of government programmes are listed in Box 6. The list of elements can be used as an analytical checklist. A simple evaluation design will only ask whether the programme has been implemented as intended and whether the pre-set objectives have been achieved. Rigorous impact evaluations or policy analyses, on the other hand, will review all the policy issues or programme design elements from the perspective of what worked best to deliver the intended outcomes and value for money. So, for instance, will the programme deliver better outcomes if it is demand driven, giving beneficiaries greater choice with regard to the service they require, or if it is supply driven and government designs a service with many of the elements pre-decided? Or, what exactly will the role of different spheres of government be in the delivery of the programme and what institutional configuration will deliver the best outcomes? In this regard an evaluation can either be formative, that is, an evaluation conducted early on in the programme or while it is still in the design phase, to help make design decisions or improve the performance of the programme, or summative, conducted at the end of the programme to determine whether intended outcomes have been achieved and what the main determinants of success were51.

50 World Bank. Operations Evaluation Department. 2004. Monitoring and Evaluation. Some tools, methods and approaches.

Box 6. Programme evaluation

Programme evaluation elements

1. Success of the programme.

2. Needs of citizens.

3. The societal problem the programme is supposed to address (for example, poverty, crime, environmental degradation).

4. The environment or context in which the programme will be implemented (for example, the political, social, economic environment).

5. Programme design:

5.1 Objectives of the programme.

5.2 The target population the programme is intended to benefit.

5.3 The course of action government intends to take to address the identified needs or societal problems. Alternative courses of action can also be viewed as alternative strategies, means or instruments to achieve desired ends. Instruments that may be used include a service, a financial grant, regulation of an activity, funding or subsidisation of an activity, or the provision of infrastructure. Some theory explaining why it is expected that the chosen instrument will work, and under what conditions, may be used to justify the choice of instrument. The conditions determining success can also be called critical success factors. For instance, the success of regulation may depend on the capacity to enforce the regulation. The comparative cost and benefits of alternative courses of action may also be considered.

5.4 The risks associated with the course of action.

5.5 Legal enablement of the course of action (should a law be enacted or changed?).

5.6 Control over or governance of bodies empowered to take a course of action, especially if it affects the rights of citizens.

51 OECD. Glossary of Key Terms in Evaluation and Results Based Management. 2002.


5.7 The scale of the programme. Scale includes the proportion of the population that will benefit from the programme (its reach) and the level of service. The scale will depend on the level of funding that the programme can attract.

5.8 The institutional arrangements for delivery of the programme. This may include government departments, public entities, private institutions and institutions in the national, provincial, or local sphere of government.

5.9 Procedures for implementing the chosen course of action.

5.10 The human resource capacity available to implement the chosen course of action.

5.11 Effective leadership and management of the programme.

5.12 Government policy with regard to all of the above elements.

6. Implementation of the programme.

Factors determining programme success

A programme evaluation will assess –

• the success of the programme;

• the success of the programme in relation to the needs of citizens and the societal problem the programme is supposed to address;

• contextual factors that may have influenced the success of the programme;

• how the design of the programme determined its success; and

• how the implementation of the programme determined its success.

Programmes are complex and not all elements are pre-designed or are implemented as planned. The form that many of the elements take may emerge as the programme is implemented and adjustments are made based on experience. M&E provides the evidence for decisions on what adjustments to make.

Related concepts

Sometimes the concept of "service delivery model" is used to capture the elements of programme design. When designing a delivery programme several institutional models or delivery strategies are possible. For instance, to ensure food security, government can give income support, bake bread, buy and distribute bread, subsidise the price of bread, regulate the prices of flour or other inputs in the value chain, or promote the establishment of vegetable gardens. The service delivery model, or the programme design, entails the creative combination or mix of all the design options to deliver the best outcomes or value for money.


5.2 Logic models

A simplified way to conceptualise a programme is to use a logic model. Logic models are dealt with in this document because the framework is widely used, but as was explained in the foregoing section, many more elements than those of the logic model are important for evaluating the success of a programme.

Logic models help to explain the relationship between means and ends. A simplified logic model consists of the hierarchy of inputs, activities, outputs, outcomes and impacts – see Figure 4 below52.


A logic model is an analytical method to break down a programme into logical components to facilitate its evaluation. A logic model helps to answer questions like “Have the objectives of the programme been achieved?” and “Were the means to achieve those objectives appropriate and were they competently implemented?” Since efficiency can be defined as the ratio between inputs and outputs and effectiveness as the relationship between outputs and outcomes, logic models help to evaluate the efficiency and effectiveness of a programme. Logic models are used very widely as frameworks to design monitoring systems or structure evaluations.
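To make the two ratios concrete, here is a minimal sketch with invented figures for a hypothetical housing programme; the variable names and numbers are illustrative assumptions, not drawn from any actual programme:

```python
# The two ratios the logic model supports: efficiency (inputs relative
# to outputs, here expressed as unit cost) and effectiveness (how far
# outputs translated into the intended outcome). Invented figures.

budget = 5_000_000          # inputs, in Rand
houses_built = 250          # outputs
households_rehoused = 230   # outcome actually achieved
rehousing_target = 250      # intended outcome

unit_cost = budget / houses_built                    # efficiency measure
effectiveness = households_rehoused / rehousing_target

print(f"Unit cost per house: R{unit_cost:,.0f}")     # R20,000
print(f"Outcome achievement: {effectiveness:.0%}")   # 92%
```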

Figure 4: Components of the logic model

Inputs (what we use to do the work) → Activities (what we do) → Outputs (what we produce or deliver) → Outcomes (what we wish to achieve) → Impacts (what we aim to change). Two management processes span the chain: "Plan, Budget, Implement" and "Manage towards achieving results".

52 National Treasury. 2007. Framework for Managing Programme Performance Information. Page 6.

Page 51: Public Policy Evaluation

43

The logic model has also been described as:

“The logic model helps to clarify the objectives of any project, program, or policy. It aids in the identification of the expected causal links – the “program logic” – in the following results chain: inputs, process, outputs (including coverage or “reach” across beneficiary groups), outcomes, and impact. It leads to the identification of performance indicators at each stage in this chain, as well as risks which might impede the attainment of the objectives. The logic model is also a vehicle for engaging partners in clarifying objectives and designing activities. During implementation the logic model serves as a useful tool to review progress and take corrective action”53.

A logic model (of which there are different varieties) can be explained by the logic of a production process. In a production process resources like staff, equipment and materials are used to yield some product or service. The process itself consists of the tasks required to produce something (for example collecting information, verifying the information, analysing the information, drawing conclusions and writing a report), and applies knowledge and technologies that have been developed over time. The product or service that is produced may be valuable in itself but especially in the case of public functions some policy objective is also pursued. For example, the regulation of competition in the economy is not done for its own sake but to improve economic growth or ensure better prices for the consumer. (See Figure 5.) In this example, the public good that is pursued is not the regulation of competition itself (which may even be experienced negatively by those affected by it) but the economic effect of the regulation.

The definitions of the commonly used components of logic models are the following54:

• “Inputs. All the resources that contribute to production and delivery of outputs. Inputs are ‘what we use to do the work’. They include finances, personnel, equipment and buildings.

• Activities. The processes or actions that use a range of inputs to produce the desired outputs and ultimately outcomes. In essence, activities describe ‘what we do’.

• Outputs. The final products, or goods and services produced for delivery. Outputs may be defined as ‘what we produce or deliver’.

• Outcomes. The medium-term results for specific beneficiaries that are a logical consequence of achieving specific outputs. Outcomes should relate clearly to an institution’s strategic goals and objectives set out in its plans. Outcomes are ‘what we wish to achieve’.

• Impacts. The results of achieving specific outcomes, such as reducing poverty and creating jobs.” See Box 7 for “secondary impacts”55.

53 World Bank. Operations Evaluation Department. 2004. Monitoring and Evaluation. Some tools, methods and approaches.

54 National Treasury. 2007. Framework for Managing Programme Performance Information. Page 6.

55 Auditor-General. 1998. Audicom – Audit Information Manual. Best Practices, Volume 3. Information for good governance.


Box 7: Secondary impacts

The extent to which unintended impacts are occurring as a result of the department’s products or services, or ways of producing them, or doing business. These considerations extend to matters such as health, safety, environment and the impact on the communities in which the department operates.

5.3 Results-Based Management

The emphasis in programme evaluation is not only on whether government departments have undertaken their mandated activities, spent the associated budgets accountably and complied with all applicable laws, but also on the results achieved by those activities in relation to government objectives like safety, employment and development. Programme evaluation is therefore closely related to Results-Based Management, which has been defined as –

“a management strategy focusing on performance and achievement of outputs, outcomes and impacts”56.

5.4 Theory-based evaluation

Theory-based evaluation has been defined as:

“Theory-based evaluation has similarities to the logic model approach but allows a much more in-depth understanding of the workings of a program or activity – the “program theory” or “program logic.” In particular, it need not assume simple linear cause-and-effect relationships. For example, the success of a government program to improve literacy levels by increasing the number of teachers might depend on a large number of factors. These include, among others, availability of classrooms and textbooks, the likely reactions of parents, school principals and schoolchildren, the skills and morale of teachers, the districts in which the extra teachers are to be located, the reliability of government funding, and so on. By mapping out the determining or causal factors judged important for success, and how they might interact, it can be decided which steps should be monitored as the program develops, to see how well they are in fact borne out. This allows the critical success factors to be identified. And where the data show these factors have not been achieved, a reasonable conclusion is that the program is less likely to be successful in achieving its objectives”57.

56 Organisation for Economic Cooperation and Development (OECD). Glossary of Key Terms in Evaluation and Results Based Management. 2002.

57 World Bank. Operations Evaluation Department. 2004. Monitoring and Evaluation. Some tools, methods and approaches.

Page 53: Public Policy Evaluation

45

Outcome chains

In government programmes, outputs are designed to bring about outcomes and the outcomes in turn to bring about, or cause, further, higher level, outcomes, or impacts. These are referred to as outcome chains. Sometimes the concepts impact, aim, goal, purpose, effect or result are used for the higher order outcomes58. These concepts are sometimes further qualified by putting the word strategic in front of them, presumably to indicate the higher level outcomes. Indeed, in the Estimates of National Expenditure the concepts aim and purpose are used to refer to higher level outcomes. The DFID guidelines on logical frameworks use the terms purpose and goal for the higher level outcomes59. An example of an outcome chain is given in Figure 5.

Figure 5. An example of an outcome chain

The logic of outcome chains implies that the success of a programme depends on the robustness of the causal relationships in the chain. The theory underlying the causal relationship between outputs and outcomes and between outcomes on different levels in an outcome chain needs to be made explicit. Sometimes the theory about this causal relationship is tenuous or controversial and in other instances the theory might be generally accepted or even proven by science. See the example in Box 8.

Box 8. Example of a theory underlying the causal relationship between outputs and out-comes

Generally, if learner-teacher contact time is increased, it can be expected that educational outcomes will be improved. (The problem for education managers then becomes how to increase learner-teacher contact time.)

(Note: Though some studies have confirmed this relationship, this is not the only determinant of educational outcomes. The purpose of the example is not to start an education theory debate but to emphasise the point that any intervention to improve educational outcomes should be based on sound educational theory and that the theory underlying the intervention should be made explicit. In real life solutions will be much more complicated.)

58 See for instance Department for International Development (DFID). Tools for Development. Version 15 September 2002: Chapter 5: Logical Frameworks and Kusek, JZ and Rist, RC, 2004. Ten Steps to a Results-Based Monitoring and Evaluation System. The World Bank.

59 UK. DFID. Tools for Development, Version 15, September 2002, Chapter 5.

Figure 5: An example of an outcome chain

Regulation of competition → More efficient firms → International competitiveness → More exports → Economic growth

OR

Regulation of competition → Monopolistic conditions removed → More firms enter the market, competing on price → Better prices for the consumer
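To show how the theory behind each causal link can be made explicit, the second chain in Figure 5 can be written down as data with a stated theory per link. The theory statements below are illustrative assumptions, not evaluative findings:

```python
# The second chain in Figure 5 as data, with the causal theory behind
# each link made explicit, as the text recommends.

outcome_chain = [
    ("Regulation of competition", "Monopolistic conditions removed",
     "theory: enforcement removes barriers that protect incumbents"),
    ("Monopolistic conditions removed", "More firms enter the market",
     "theory: open markets attract entrants who compete on price"),
    ("More firms enter the market", "Better prices for the consumer",
     "theory: price competition pushes prices towards cost"),
]

for cause, effect, theory in outcome_chain:
    # A link without an explicit theory is a gap the evaluation must flag.
    assert theory.startswith("theory:"), f"No explicit theory: {cause} -> {effect}"
    print(f"{cause} -> {effect}  ({theory})")
```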


Attribution

A good societal effect or outcome, especially at the higher levels, may have several contributing factors, so it does not follow that a specific government programme caused that effect. For instance, economic growth or job creation may be attained despite programmes to promote SMMEs. In other words, even if a specific SMME may have benefited from a financing programme, it does not necessarily follow that the programme led to significant job creation in the broader economy, or, if such job creation did occur, that it is attributable to the specific programme and not to some other more significant factor. Further, several programmes may contribute to the same outcome. The difficulty of attributing outcomes to specific programmes is called the "attribution gap"60, which means that no direct relationship between the direct benefits to targeted beneficiaries of the programme and indirect benefits to broader society can easily be proven, as too many factors are involved to clearly isolate the effect of a single intervention. For instance, in a poverty reduction programme an Income Generating Project may benefit its members while the programme has no poverty reduction effect in the broader community. Caution should therefore be exercised when an outcome is attributed to the efforts of a specific programme.

60 GTZ. Results-based Monitoring Guidelines for Technical Cooperation Projects and Programmes. May 2004.

Chapter Six: Applying the Concepts

This Chapter deals with the following:

• Focusing monitoring and evaluation

• Designing monitoring frameworks

• Framing evaluation questions

• Examples of monitoring and evaluation from different perspectives

6.1 Focusing monitoring and evaluation

As emphasised in Chapter 3, evaluation perspectives point to the main focuses of an evaluation and help to give a balanced view of performance. Chapter 4 explained the value base of evaluation and how values are used to define criteria and standards of performance. It also pointed out that evaluation of the Public Service from a particular perspective employs analytical frameworks, models, theories and methodologies unique to that perspective. Some models for evaluating programme performance were discussed in Chapter 5.

In this chapter the application of the concepts introduced in the previous chapters is discussed. The concepts and their component elements can be viewed as different dimensions of performance and can be used as frameworks to analyse performance.

A very basic question asked when a monitoring system must be developed or when an evaluation is planned is: What to monitor or evaluate, that is, what should the focus of the monitoring or the evaluation be? The main clients of the monitoring system or the evaluation could be asked what they want to be monitored or evaluated, but it is invariably left to M&E professionals to answer this basic question.

This chapter explains how the concepts discussed in the previous chapters could be used to answer the what question, specifically, how the concepts can be used for –

(1) designing monitoring frameworks; and

(2) framing evaluation questions.

6.2 Designing monitoring frameworks

Different monitoring frameworks may be designed depending on the intended focus of the monitoring (or the perspective). Monitoring from a financial management perspective, for example, may include monitoring of expenditure against budget or monitoring whether financial prescripts and controls are adhered to. Frameworks, rules and conventions for financial monitoring are well-established in the Public Service. Despite this, shortcomings in financial monitoring remain a main reason for qualified audit outcomes.


Monitoring from the perspective of programme or service delivery performance involves the monitoring of performance against pre-set objectives, indicators and targets.

Key concepts related to performance monitoring include:

• Objective: A description of the aim or purpose of an activity.

• Indicator: “Identifies specific numerical measurements that track progress towards achieving a goal”61.

• Target: “Expresses a specific level of performance that the institution, programme or individual aims to achieve within a given period”62.

• Baseline: “The current performance levels that an institution aims to improve when setting performance targets”63. A baseline is also a measurement of the current societal conditions that a government programme of action aims to improve.

In practical terms the monitoring involves the routine collection of data on all the indicators in strategic and performance plans and preparation of reports to managers on different levels on the values of the indicators compared to a baseline or target. A simple example is monitoring whether the projects in a performance plan have been completed by the target dates set in the plan.
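A minimal sketch of such routine monitoring: collected indicator values are compared against baseline and target, and indicators that have not moved from the baseline towards the target are flagged for the management report. The indicator names and figures are hypothetical (loosely echoing the education indicators in Table 2 below):

```python
# Compare collected indicator values against baseline and target and
# flag indicators that are off track. Hypothetical names and figures.

indicators = [
    # name,                               baseline, target, latest value
    ("Schools with a water supply (%)",       62.0,   80.0,   71.0),
    ("Learner absenteeism (% days lost)",      8.0,    5.0,    9.5),  # lower is better
]

def on_track(baseline, target, value):
    """Has the value moved from the baseline towards the target?"""
    if target >= baseline:       # indicator should rise
        return value >= baseline
    return value <= baseline     # indicator should fall

for name, baseline, target, value in indicators:
    status = "on track" if on_track(baseline, target, value) else "OFF TRACK"
    print(f"{name}: {value} (baseline {baseline}, target {target}) -> {status}")
```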

Objectives, indicators and targets could be set for a selection of the concepts (where each concept represents a dimension of performance) explained in the previous chapters. For example:

• Perspectives: Financial objectives, service delivery objectives, human resource management objectives.

• Logic model: Impact objectives, outcome objectives, output objectives.

• Values: Ethics objectives, efficiency objectives and equity objectives.

An example of objectives and indicators according to the logic model is given in Table 2.

61 National Treasury. 2007. Framework for Managing Programme Performance Information. Page 21.

62 Ibid. Page 22.

63 Ibid. Page 21.


Table 2. Examples of objectives and indicators according to the logic model

Input

Measurable objective: To put the basic infrastructure for public ordinary schooling in place in accordance with policy.

Performance indicators:

• Percentage of public ordinary schools with a water supply.

• Percentage of public ordinary schools with electricity.

• Percentage of public ordinary schools with at least two functional toilets per classroom.

• Expenditure on maintenance as a percentage of the value of school infrastructure.

Measurable objective: To provide adequate human resourcing in public ordinary schools.

Performance indicator:

• Percentage of schools with less than 40 learners per class.

Measurable objective: To provide adequate learner and teacher support materials to public ordinary schools.

Performance indicator:

• Percentage of non-Section 21 schools with all LTSMs and other required materials delivered on day one of the school year.

Process

Measurable objective: To bring about effective and efficient self-managing public ordinary schools.

Performance indicator:

• Percentage of schools with Section 21 status.

Measurable objective: To foster a culture of effective learning and teaching in public ordinary schools.

Performance indicators:

• Percentage of working days lost due to educator absenteeism in public ordinary schools.

• Percentage of learner days lost due to learner absenteeism in public ordinary schools.

Output (the education output is hours of teaching, periods taught, lessons taught or learner-teacher contact hours)

Measurable objective: To increase the number of hours of actual teaching.

Performance indicator:

• Percentage of actual teaching hours.

Outcome, Level 1

Measurable objective: To ensure that an adequate proportion of the population attains Grade 12, in particular with mathematics and science passes.

Performance indicators:

• Pass ratio in Grade 12 examinations.

• Pass ratio in Grade 12 for mathematics and science.

Outcome, Level 2 (or impacts)

Measurable objectives: To attain the highest possible educational outcomes amongst learners in public primary schools, and to attain the highest possible educational outcomes amongst learners in public secondary schools.

Performance indicators:

• Percentage of learners in Grade 3 attaining acceptable outcomes in numeracy, literacy and life skills.

• Percentage of learners in Grade 6 attaining acceptable outcomes in numeracy, literacy and life skills.

• Percentage of learners in Grade 9 attaining acceptable educational outcomes.

An important point to remember when developing monitoring frameworks is that performance should only be monitored along a few important dimensions, to keep the framework manageable and simple.

Different monitoring frameworks should be designed for managers on different levels. Managers on the operational level are responsible for the efficiency of day to day operations, for example whether the available medical personnel can attend to the queue at the outpatients department of a district hospital on a specific day, whilst managers on higher levels are responsible for the performance of the system as a whole with regard to specified policy objectives.

Frameworks of objectives, indicators and targets have been standardised for some public service sectors, including agriculture, education and health64, because if everybody uses the same indicators it is possible to compare performance between departments and over time. These sectors would therefore use the standardised frameworks but could add their own objectives, indicators and targets to the standardised set.

6.3 Framing evaluation questions

“Framing an evaluation”65 means deciding what to evaluate (the subject of the evaluation), the purpose of the evaluation and what the main evaluation questions will be. All the concepts explained in the previous chapters could prompt specific evaluation questions.

64 See for example: National Treasury. 2003. Customised Framework and Format for Strategic Plans of the Provincial Departments for Agriculture; National Treasury. 2006. Format for Annual Performance Plans for provincial education departments; National Treasury. 2007. Format for annual performance plans of provincial health departments. Prepared for the technical committee of the National Health Council.

65 Royal Melbourne Institute of Technology: CIRCLE (Collaboration for Interdisciplinary Research, Consultation and Learning in Evaluation). The Evaluation Menu.


The concepts could therefore serve as a guide to the framing of an evaluation. In Table 3 a list of questions with regard to the housing programme is provided as illustration of this proposition. The example questions are paired with selected concepts from the framework to show how the concepts prompted the evaluation questions.

Table 3. Examples of evaluation questions: Housing Programme

Concept: Logic model (outputs, outcomes, impact)

• What were the objectives of the programme and how well were they achieved?

• Did a secondary housing market develop so that beneficiaries could realise the economic value of their asset?

• Did the housing programme create sustainable human settlements? (Included in this concept are informal settlement upgrade, promoting densification and integration, enhancing spatial planning, enhancing the location of new housing projects, supporting urban renewal and inner city regeneration, developing social and economic infrastructure, designing housing projects so that they support informal economic activity and enhancing the housing product.)

Concept: Programme design

• What are the design features of the programme with regard to:

o The service delivery model. (Will government build houses, finance housing or subsidise housing?)

o The financial contribution of government to each household.

o The access mechanism: Will people apply for houses (demand driven strategy) or will government undertake housing projects where the need has been identified by officials (supply driven strategy)?

o The size and quantity of houses.

o The location of housing projects.

o Town planning patterns.

o The types of units that are provided (family units or rental housing).

o The configuration of institutions through which the programme will be delivered. (In the case of housing the programme is delivered through government departments on national, provincial and local level plus financial institutions, housing institutions (landlords), consultants, developers and building contractors.)

o The housing process. (From identifying the need, planning, budgeting, packaging the project, project approval, building and inspection, to transfer of the houses to beneficiaries.)

• How did these design features contribute to the success of the programme?

• How flexible was the programme design so that creative solutions were possible on the project level?

• How well was the programme implemented?

Concept: Values (responsiveness to needs)

• How were housing needs identified?

• How quickly is government able to respond to dire housing needs?

• How were the beneficiaries consulted and how much choice did they have?

Concept: Values (targeting)

• Who were the targeted beneficiaries and how well were these targeted groups reached?

• What are the eligibility criteria and are really poor people not excluded by the way the programme is implemented?

• Was there any special dispensation for vulnerable groups like women, disabled people, children and youth?

Concept: Values (scale of engagement)

• What is the scale of the programme and how does it compare to housing needs?

The design of an evaluation (determining the methodology to be used) depends on the evaluation questions posed. The evaluation questions determine the scope of the evaluation, the type of evaluation and the methodologies used, and consequently the outcomes of the evaluation.

Evaluation from a particular perspective usually involves the application of a unique set of frameworks, concepts, methodologies, conventions and protocols that have emerged for that perspective over time. For instance, a set of financial statements is the main source of information for evaluating financial performance and the analysis and interpretation of financial statements (to gain evaluative insight) are done using accepted methodologies.


6.4 Examples of monitoring and evaluation from different perspectives

In the previous section it was emphasised that the various concepts discussed in this text may prompt different evaluation questions and that the evaluation question then determines the type of evaluation that may be undertaken. The table below lists types of evaluation paired with selected evaluation perspectives and the values that may receive relatively greater emphasis when such a type of evaluation is undertaken.

Different types of evaluation are appropriate for answering the many questions that may arise when complex programmes are implemented. There is no “one size fits all” evaluation template to put against the variety of questions. It is important for managers and other stakeholders to have an understanding of what they want to know from M&E. Likewise, it is important for the evaluators to understand what is needed by the manager or stakeholder.

A specific type of evaluation will only answer the questions the evaluation is designed for. To get as complete as possible a picture of the performance of a department or a delivery programme, and to assist with the improvement of processes and eventually performance, more than one evaluation will probably have to be undertaken. (For example, impact evaluations and process re-engineering will probably not be undertaken together.) The important issue is however not the specific type of evaluation because, as explained above, there is no set of standard evaluation templates. The evaluation will have to be designed to fit the concerns of the users of the M&E. Broadly, monitoring answers pre-set questions and consequently a monitoring system works with fairly fixed templates. Evaluation on the other hand, tries to answer more fundamental questions and is designed around the specific questions at the time.

The types of monitoring and types of evaluation listed in Table 4 illustrate the richness of the field of monitoring and evaluation. The types range from the strategic level, where fundamental questions about impact and alternative delivery strategies and models are asked, to evaluations where the policy issues are accepted as given and questions are asked about the efficiency of processes and public administration practices.


Table 4. Types of monitoring and evaluation in relation to evaluation perspectives and values

For each evaluation perspective, the types of monitoring or evaluation are listed together with the values that receive greater emphasis.

Financial perspective

Type of monitoring or evaluation: Monitoring through monthly and annual financial statements. Financial statements try to answer the following questions: Was money spent as appropriated? Has the income that accrued to government been collected? Were assets protected? Can the department meet its liabilities? Has the department adhered to sound financial controls?

Since financial accounting answers very basic questions, some departments are trying to introduce management accounting, with tasks of analysing and interpreting financial information, costing services, advising managers on the financial implications of strategic decisions, advising on choosing between alternative strategies, and directing attention to and helping managers to solve problems.

Values: Accountability; economy, efficiency and effectiveness.

Type of monitoring or evaluation: Value for Money evaluations: cost-benefit analysis, investment analysis, expenditure reviews and efficiency/productivity reviews.

Values: Value for Money.

Ethical perspective

Type of monitoring or evaluation: Evaluation of the Ethics Infrastructure of the department, including assigning of responsibility for the ethics function, reporting of ethical and legal violations and protection of employees who do such reporting, disclosure of conflicts of interest, ethics training, pre-employment screening and risk assessment, detection, investigation and prosecution of corruption, and discipline procedures. Evaluation of compliance with the Code of Conduct.

Values: A high standard of professional ethics; services must be provided impartially, fairly, equitably and without bias; transparency.


Human Resource Management perspective

Type of monitoring or evaluation: Human Resource Management best practice evaluations. Monitoring of Human Resource Management performance indicators, like the skills gap, representivity, staff turnover and vacancy rates.

Values: Good human resource management and career development practices, to maximise human potential, must be cultivated. Public administration must be broadly representative of the South African people, with employment and personnel management practices based on ability, objectivity, fairness, and the need to redress the imbalances of the past to achieve broad representation.

Programme performance perspective

Type of monitoring or evaluation: Monitoring pre-set performance indicators: routine collection of data on all the performance indicators in strategic and performance plans, and preparation of reports to managers on different levels on the values of the indicators compared to the baseline or to the target.

Values: Effectiveness; development orientation; service standards; appropriateness; sustainability; secondary impacts; responsiveness to needs.

Type of monitoring or evaluation: Programme evaluations, comprising clarification of and agreement on detailed programme objectives, preparation of a log frame analysis, desk review, and analysis of existing data. This type of evaluation will primarily evaluate how well a programme has been implemented, taking policy issues and the design parameters of the programme as given66.

66 Mackay, K. Institutionalisation of Monitoring and Evaluation Systems to Improve Public Sector Management. Evaluation Capacity Development Working Paper Series, No 15. World Bank, Independent Evaluation Group, 2006.


Type of monitoring or evaluation: Rigorous impact evaluations or policy analyses, answering the questions: Were the outcome objectives achieved, did the adopted strategies work and, if not, why not? These comprise primary data collection, and often the use of sophisticated methodologies to establish attribution of outcomes to programme outputs, or to prove causality between outputs and outcomes. This type of evaluation will also question alternative programme designs, programme strategies or policy options67.

Type of monitoring or evaluation: Review of the service delivery model. When designing a delivery programme, several institutional models or delivery strategies are possible. For instance, to ensure food security, government can give income support, bake bread, buy and distribute bread, subsidise the price of bread, regulate the price of flour or other inputs in the value chain, or promote the establishment of vegetable gardens. The service delivery model entails the creative combination or mix of all the design options to deliver the best outcomes or value for money.

Organisational performance perspective

Type of monitoring or evaluation: Organisational reviews covering structures, systems, management processes and operational processes. Included under this broad category are reviews of organisation structure, organisational performance reviews, management audits, organisational diagnosis, organisational development, quality assurance, best practice evaluations and process re-engineering.

Values: Efficiency; sustainability.

67 Mackay, K. Institutionalisation of Monitoring and Evaluation Systems to Improve Public Sector Management. Evaluation Capacity Development Working Paper Series, No 15. World Bank, Independent Evaluation Group, 2006.


Citizen perspective

Type of monitoring or evaluation: Consultative evaluations, including satisfaction surveys and Citizens’ Forums. Participatory evaluations.

Values: Responsiveness; participation in policy making; consultation; access; courtesy; information; redress; relevance; acceptance.

Examples of types of evaluation for the housing programme, typified by perspective and logic model level, are given in Box 9.

Box 9. Illustrations of types of evaluations: Housing Programme

One could, for example, undertake these types of evaluations:

1. An evaluation on the output level

The evaluation could be designed around the following evaluation questions:

• What is the current demand for housing, or what is the housing shortage? How is the demand growing every year? What factors determine the demand for housing? How is this demand spatially distributed?

• How many houses are the various housing departments delivering, and how well are they meeting the demand?

2. An evaluation on the activity (process) level

The evaluation could be designed around the following evaluation questions:

• Is the process of delivering houses designed efficiently, so that houses are delivered in a reasonable time after a need has been identified, and is the process delivering houses at a steady and acceptable rate?

• What are the main reasons why there are so many uncompleted or failed projects?

• Is the current institutional set-up conducive to the delivery of housing? Is there enough institutional capacity to successfully implement the programme? Are the roles of each institution in the housing value chain clear?


3. An evaluation on the outcome level

If an outcome objective for the housing programme is that, apart from receiving a house, the beneficiary should also be able to leverage her housing asset to improve her economic circumstances, the evaluation could be designed around the following evaluation questions:

• To what extent has a secondary housing market developed around government housing projects? How many people sold their houses? How many people rented out their houses? Did people improve their houses? Could people borrow money against their houses?

• To the extent that this did not happen, what conditions need to be created to promote a secondary housing market?

• To what extent is new economic activity associated with housing projects?

4. Evaluation from an ethical perspective

The evaluation could be designed around the following evaluation questions:

• What is the extent of corruption in housing departments and on the part of housing contractors?

• Is there any patronage, corruption, unfairness, bias or inequity in the processes of allocating housing, identifying and approving housing projects, identifying housing beneficiaries, or testing their eligibility?

• To what extent is there abuse of the system on the part of the public/housing beneficiaries?


Chapter Seven

Summary


This document set out to –

• clarify basic M&E concepts and ideas as they apply in the context of the SA Public Service;

• put the concepts in a framework showing the interrelationships between them;

• contribute to the development of a coherent and dynamic culture of monitoring and evaluation in the Public Service; and

• contribute to a better understanding and enriched debate about the different dimensions of public sector performance.

The document therefore started off by explaining the concepts of monitoring and evaluation.

The importance of monitoring and evaluation as a tool for improving the results achieved by government programmes was emphasised. Monitoring and evaluation presupposes an openness to continuously evaluate the success of what we are doing, diagnose the causes of problems and devise appropriate and creative solutions. Monitoring and evaluation can, however, only be influential if it provides quality analytical information and if decision-makers are willing to consider and act on that information.

Attention was also given to some contextual issues, namely that M&E in South Africa is practised in the context of the ideal of creating a developmental state, the relationship of monitoring and evaluation with policy making and planning, the emerging Government-wide M&E System, and important institutions with a role in M&E.

The main aim of this document was to clarify basic monitoring and evaluation concepts. Not all the concepts, however, have the same status. For example, some have the status of values, while some groups of concepts have the status of analytical frameworks.

A very basic question asked when a monitoring system must be developed or when an evaluation is planned is: What to monitor or evaluate, that is, what should the focus of the monitoring or the evaluation be? The main clients of the monitoring system or the evaluation could be asked what they want to be monitored or evaluated, but it is invariably left to M&E professionals to answer this basic question.

The subject of an evaluation may be a system, policy, programme, service, project, institution, process or practice. All these entities are intended to do something or to result in something. Thus, their performance can be evaluated. However, performance can be viewed from different perspectives and many concepts are used to describe performance.

Consequently, the idea of evaluation perspectives was introduced to emphasise different dimensions of performance. A number of values were then defined, and their usefulness in recognising success or excellence was explained. Specific frameworks to analyse programme performance were also introduced.


Evaluation perspectives point to the main focuses of an evaluation. Perspectives include, for example, a financial management perspective, a human resource management perspective, or a programme performance perspective focused on the results achieved by the programmes offered by a department. Evaluation from different perspectives usually implies the application of approaches and methodologies unique to that perspective. It follows that many more concepts and models, in addition to the few that were introduced in this document, are relevant to the practice of evaluation.

The central idea behind evaluating performance from different perspectives is to use a balanced set of perspectives for the evaluation.

Some frameworks to specifically evaluate programme performance were introduced. Programme evaluation is the evaluation of the success of a programme and how the design and implementation of the programme contributed to that success. It examines the relationship between the success of the programme, its design, and the meticulousness of the implementation of that design. Design includes government policies to address an identified societal problem and all the various elements that make up the course of action government has decided upon or the services it delivers. A simplified model to conceptualise programme performance is the logic model. Logic models help to define the relationship between means (inputs, activities and outputs) and ends (outcomes and impacts), and are therefore useful tools for evaluating programme performance.
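
As an illustration only, the logic model can be written down as a simple means-to-ends structure. The housing entries below are hypothetical and merely echo the housing example used in this document; a real logic model would be agreed with programme stakeholders.

```python
# Minimal sketch (illustrative entries): a logic model for a housing
# programme expressed as a simple data structure, read from means to ends.

logic_model = {
    "inputs":     ["housing subsidy budget", "officials", "land"],
    "activities": ["identify need", "approve projects", "build and inspect"],
    "outputs":    ["completed houses transferred to beneficiaries"],
    "outcomes":   ["beneficiaries adequately housed",
                   "housing assets leveraged to improve economic circumstances"],
    "impacts":    ["reduced poverty and improved quality of life"],
}

# Print the chain from means (inputs) to ends (impacts).
for level in ["inputs", "activities", "outputs", "outcomes", "impacts"]:
    print(f"{level}: {'; '.join(logic_model[level])}")
```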

Values are important to define the criteria and standards of performance. The concepts in this category have the status of values. This status has been assigned to them by the Constitution. Section 195 (1) of the South African Constitution states that “public administration must be governed by the democratic values and principles enshrined in the Constitution”. These include values like efficiency, effectiveness, responsiveness to needs and equity. The values and principles define what is regarded as good public administration or good performance.

The values provide additional perspectives from which public administration may be evaluated. For example, the principle of responsiveness to needs prompts one to evaluate performance from the perspective of the needs of clients, or the principle of development orientation requires that the fundamental nature of the Public Service as an instrument for development should be evaluated.

The document lastly proposed how the various concepts could be applied in the practice of monitoring and evaluation. It is suggested that they could be used for –

• designing monitoring frameworks; and

• framing evaluation questions.

Monitoring of programme performance involves the monitoring of performance against pre-set objectives, indicators and targets. In practical terms monitoring involves the routine collection of data on all the indicators in strategic and performance plans and preparation of reports to managers on different levels on the values of the indicators compared to a baseline or target.
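
The sketch below shows this mechanically for two hypothetical housing indicators; an actual monitoring system would draw indicators, baselines and targets from the strategic and performance plans rather than hard-coding them.

```python
# Minimal sketch (hypothetical data): reporting each indicator's latest
# value against its pre-set baseline and target.

indicators = [
    # (indicator, baseline, target, latest value)
    ("Houses completed per quarter", 4_000, 6_000, 5_200),
    ("Average months from project approval to transfer", 18, 12, 15),
]

for name, baseline, target, value in indicators:
    # Progress as the share of the baseline-to-target distance covered;
    # the formula works whether the target is above or below the baseline.
    progress = (value - baseline) / (target - baseline) * 100
    print(f"{name}: baseline {baseline}, target {target}, "
          f"latest {value} ({progress:.0f}% of the way to target)")
```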


The concepts discussed in this text could be viewed as different dimensions of performance and objectives, indicators and targets could be set for various dimensions. The logic that is followed is that objectives, indicators and targets could be defined for selected evaluation perspectives, as well as for selected values and principles. M&E of programme performance requires that objectives, indicators and targets be set for key dimensions of the programme design and programme success. For example: Financial objectives (perspective), outcome objectives (logic model), and equity objectives (value).

The design of an evaluation depends on the evaluation questions posed. The concepts discussed in this text could prompt specific evaluation questions. The framework could therefore serve as a guide to the framing of an evaluation. The evaluation questions in turn determine the scope of the evaluation, the type of evaluation and the methodologies used, and consequently the outcomes of the evaluation.

Different types of monitoring or evaluation are appropriate for answering the many questions that may arise when complex programmes are implemented. There is no “one size fits all” monitoring or evaluation template to put against the variety of questions. It is important for managers and other stakeholders to have an understanding of what they want to know from M&E. Likewise, it is important for the evaluators to understand what is needed by the manager or stakeholder.

The important issue is that the monitoring system or evaluation has to be designed to fit the concerns of the users of the M&E. Broadly, monitoring answers pre-set questions and consequently the monitoring system works with fairly fixed templates. Evaluation on the other hand, tries to answer more fundamental questions and is designed around the specific questions at the time.

It is hoped that this document will contribute to better understanding and enriched debate about the different dimensions of public sector performance, and to improved design of monitoring frameworks as well as framing of evaluations.


List of Sources Consulted


• Auditor-General. Audicom—Audit Information Manual. 2000.

• Canada, British Columbia. Auditor-General. Deputy Minister’s Council. Enhancing Accountability for Performance: A Framework and Implementation Plan. Second Joint Report. 1996.

• Cloete, F. Measuring good governance in South Africa. November 2005. (Unpublished article.)

• Department of Health. A District Hospital Service Package for South Africa: A set of norms and standards. 2002.

• Department of Health. Health Goals, Objectives and Indicators: 2001-2005.

• Department of Health. Format for Annual Performance Plans of Provincial Health Departments for financial years 2008/09 to 2010/11. Updated May 2007.

• Department for International Development (DFID). Tools for Development. Version 15, September 2002.

• Department of Public Service and Administration (DPSA). White Paper on Transforming Public Service Delivery, 1997 (Government Gazette No 18340 of 1 October 1997.)

• DPSA. The Machinery of Government: Structure and Functions of Government. May 2003.

• Deutsche Gesellschaft für Technische Zusammenarbeit (GTZ). Results-based Monitoring: Guidelines for Technical Cooperation Projects and Programmes. May 2004.

• Fox, W, Schwella, E and Wissink, H. Public Management. Juta, Kenwyn. 1991.

• Health Systems Trust. District Health Barometer 2005/06. December 2006.

• Human Rights Commission. 2006. 6th Economic and Social Rights Report.

• Kaplan, RS and Norton, DP. 1996. The Balanced Scorecard. Harvard Business School Press, Boston, Massachusetts.

• Kusek, J.Z and Rist, RC. 2004. Ten Steps to a Results-Based Monitoring and Evaluation System. Washington, DC: The World Bank.

• Mackay, K. Institutionalisation of Monitoring and Evaluation Systems to Improve Public Sector Management. Evaluation Capacity Development Working Paper Series, No 15. World Bank, Independent Evaluation Group, 2006.

• National Treasury. Budget Review 2007. 21 February 2007.

• National Treasury. Provincial Budgets and Expenditure Review 2003/04 – 2009/10. September 2007.


• National Treasury. Framework for Managing Programme Performance Information. May 2007.

• National Treasury. 2007. Format for annual performance plans of provincial health departments. Prepared for the technical committee of the National Health Council.

• National Treasury. Guide for the Preparation of Annual Reports for the year ended 31 March 2007.

• National Treasury. Local Government Budgets and Expenditure Review 2001/02 – 2007/08. October 2006.

• National Treasury. Guide for the preparation of annual reports for the year ending 31 March 2006. Annexures 1 to 7.

• National Treasury and Department of Education. PFMA Planning and Reporting Requirements for Provincial Education Departments: “The Manual”. 2005. This document has been updated by a prescribed Format for Annual Performance Plan, 23 February 2006.

• National Treasury. 2006. Format for Annual Performance Plans for provincial education departments.

• National Treasury. Medium Term Expenditure Framework Treasury Guidelines: Preparing budget proposals for the 2005 MTEF. 2004.

• National Treasury. Framework and Templates for provincial departments for the preparation of Strategic and Performance Plans for 2005-2010, and Annual Performance Plans for the 2005 financial year. 16 August 2004.

• National Treasury. 2003. Customised Framework and Format for Strategic Plans of the Provincial Departments for Agriculture.

• Organisation for Economic Cooperation and Development (OECD). Glossary of Key Terms in Evaluation and Results Based Management. 2002.

• The Presidency, Policy Coordination and Advisory Services (PCAS). Towards a Ten Year Review: Synthesis report on implementation of government programmes. October 2003.

• The Presidency. Proposal and Implementation Plan for a Government-Wide Monitoring and Evaluation System: A Publication for Programme Managers. September 2005.

• The Presidency. Development Indicators Mid-Term Review. 2007.

• Public Service Commission. Evaluation of departments’ annual reports as an accountability mechanism. 1999.

• Public Service Commission. Ethics Survey. 2001.

• Public Service Commission. State of the Public Service Report. 2004, 2005 and 2006.


• Public Service Commission. 2006. Public Service Monitoring and Evaluation System: Research Guide, Assessment Questionnaire and Reporting Template. 10 July 2006.

• Public Service Commission. Assessment of Professional Ethics in the Free State. March 2007.

• Public Service Commission. Report on the Audit of Reporting Requirements and Developmental Monitoring and Evaluation Systems within National and Provincial Government. June 2007.

• Royal Melbourne Institute of Technology: CIRCLE (Collaboration for Interdisciplinary Research, Consultation and Learning in Evaluation). The Evaluation Menu.

• RSA. Constitution of the Republic of South Africa, 1996.

• RSA. Public Finance Management Act, 1999 (Act No 1 of 1999).

• Vera Institute of Justice. Measuring progress toward Safety and Justice: A global guide to the design of performance indicators across the Justice Sector. 2003.

• World Bank. Operations Evaluation Department. 2004. Monitoring and Evaluation. Some tools, methods and approaches.


Index



Acceptance 29

Access 27

Accountability 4, 25

Activities 33

Advocacy 4

Appropriateness 28

Attribution 35

Auditor-General 13

Balanced Scorecard 16

Baseline 38

Best practice evaluations 42

Bias 25

Consultation 27

Consultative evaluations 44

Courtesy 27

Criteria 27

Criterion 27

Department of Provincial and Local Government 12

Department of Public Service and Administration 12

Derived system 9

Design of an evaluation 41

Development orientation 24

Developmental state 6

Dimension 15, 38, 39, 45, 46

Economy 23

Effectiveness 24

Efficiency 23

Empowerment 28

Equitably 25

Ethics perspective 19

Evaluation 2, 41

Evaluation perspective 15, 45, 46

Evaluation question 40, 47

Factors determining programme success 31

Fairly 25

Financial perspective 17

Financial statements 42

Framing an evaluation 40

Good Human Resource Management and

career development practices 26

Governance perspective 17

Government-wide M&E System 9

Human Resource Management (HRM) perspective 18

Human Rights Commission 14

Impact evaluation 30, 43

Impacts 33

Impartially 25

Indicator 22, 27, 38

Information 27

Input 33, 38

Logic model 32, 40, 46

Management accounting 17, 42

Management decision-making 3

Monitoring 1, 38, 41, 46

Monitoring frameworks 37

National Treasury 12

Objective 38

Openness and Transparency 27

Organisational reviews 43

Organisational learning 3

Outcome 33, 39

Outcome chains 34

Output 33, 39

Participation in policy making 25

Performance 15

Performance indicator 38

Planning process 8

Policy process 6

Presidency 11

Professional ethics 23

Programme 30

Programme design 31, 40


Programme evaluation 30, 43, 46

Programme performance perspective 16

Public Service Commission 13

Redress 27

Relevance 28

Representative 26

Responsiveness 25

Results-Based Management 34

Scale of engagement 29

Service delivery model 31, 43

Service delivery performance 30

Service Standards 27

South African Management Development Institute 13

Standard 21, 22, 27

Statistics South Africa (Stats SA) 12

Subject of an evaluation 15

Sustainability 28

Target 38

Targeting 29

Theory-based evaluation 34

Transparency 5, 26

Types of evaluations 41

Value for Money 27

Value for Money evaluations 42

Values 21, 46

