
GLOBAL BROADBAND AND INNOVATIONS PROGRAM
USAF CAPACITY BUILDING MODULE: MONITORING AND EVALUATION PART 1

JUNE 2013

This publication was produced for review by the United States Agency for International Development. It was prepared by Integra Government Services International, LLC.


DISCLAIMER

The authors' views expressed in this publication do not necessarily reflect the views of the United States Agency for International Development or the United States Government.



CONTENTS

1. Introduction
   1.1 Module Objectives, Contents
   1.2 Essential M&E Concepts

2. Monitoring
   2.1 What to Monitor?
   2.2 Who Should Monitor?
   2.3 Monitoring Mechanisms
   2.4 Institutional Arrangements for Monitoring
   2.5 Monitoring Framework

3. Evaluation
   3.1 Types of Evaluations
   3.2 Impact Evaluations

Annex 1: Template for Terms of Reference for Evaluation Firm
Annex 2: Sample Checklist of "Information requirements for Impact Assessment of Community Computer/Information Centers"
Annex 3: List of Core ICT Indicators


1. Introduction

This is Capacity Building Module #3 of the USAID/GBI program to support enhancement of Universal Service and Access Funds (USAFs) as a resource to promote ICT development. This module addresses USAF Monitoring and Evaluation, and is presented in two separate documents. The current document, Part 1, presents the background, definitions, and examples concerning Monitoring and Evaluation in relation to Universal Service Funds. Part 2 (forthcoming) will provide a more practical application of the topic.

Other modules in this series address the following topics:

• Module #1: USAF Strategic Planning
• Module #2: USAF Program Concepts
• Module #4: USAF Data Collection and Market Analysis
• Module #5: National Broadband Strategy Planning

Collectively, these modules offer a set of useful information resources and practical tools, based upon international experience and best practices, for the management of Universal Service and Access Funds. Combined with other capacity building resources, including direct technical assistance from GBI and others, these modules can help USAF administrations and staff enhance Fund operations and improve the effectiveness of ICT development financing on many levels.

1.1 Module Objectives, Contents

The main objective of this module is to provide USAF administrators and staff with information, advice, experience, and recommendations regarding the role of Monitoring and Evaluation (M&E) as a key component of a Fund's operations. It highlights the importance of M&E in a project procurement cycle, and describes the high-level contours of recommended M&E functions for USAFs. Part 2 of this module (forthcoming) will provide additional practical support to Universal Service Funds in relation to the establishment of an M&E function, whether in-house or through outsourcing.

Monitoring and Evaluation holds a pivotal position in efficient and effective project management for a Universal Service Fund. M&E can help the organization elicit required information from the activities being implemented. This information can be used for informed decision-making, in addition to informing project sponsors about various dimensions of project implementation, such as efficiency, effectiveness, relevance, and sustainability. Without effective monitoring and evaluation there is a heightened risk that projects may fail and, even worse, that failed projects might be replicated.

In addition, Annex 1 provides a sample set of Terms of Reference for impact evaluation, for use in cases where M&E is to be outsourced. Annex 2 provides information specifically required for evaluation of Community Computer and Information Centers. Annex 3 provides an indicative list of Core ICT Indicators.


1.2 Essential M&E Concepts

Planning is the process of setting goals, developing strategies, outlining the implementation arrangements, and allocating resources to achieve those goals. A Strategic Plan is a higher-level document which contains a description of the organizational Vision, Mission, Goals, Objectives, and Strategies. This provides a general framework and sets organizational direction.

Results-Based Management (RBM) is defined as "a broad management strategy aimed at achieving performance and demonstrable results within the organizational goals and objectives". An effective RBM system relies on constant feedback, learning, and improving: existing plans are regularly updated based on the lessons learned through monitoring and evaluation, and future plans are developed based on these lessons. RBM is focused on the results chain: Inputs are used to produce Outputs, Outputs lead to Outcomes, and Outcomes yield short-term and long-term Impact. This relationship is illustrated in Figure 1.

Monitoring is defined as the ongoing process by which stakeholders obtain regular feedback on the progress being made towards achieving their goals and objectives. Effective monitoring should be able to answer two questions: "Are we taking the actions we said we would take?" and "Are we making progress on achieving the results that we said we wanted to achieve?" Monitoring can be done at various levels, including inputs, activities, outputs, projects, programs, policies, and the organization. The scope of monitoring depends on the level at which it is done; e.g., for an organization such as a USAF, monitoring of project implementation is of primary importance, whereas for a Ministry or Regulator, progress towards policy-level objectives needs to be monitored.

Evaluation is a thorough and independent assessment of either completed or ongoing activities to determine the extent to which they are achieving stated objectives. Evaluation, like monitoring, can apply to many things, including outcomes, impacts, projects, programs, strategies, policies, and organizations.

A Performance Indicator is a metric for measurement of inputs, processes, outputs, outcomes, and impacts for development projects, programs, or strategies. A sample list of core indicators, which can be used as reference points for ICT projects, is given in Annex 3 (Source: List of Core Indicators, ITU).

Figure 1: Logical Order from Goals to Impact (the figure annotates the results chain with the focus of Monitoring and the focus of Evaluation)


Inputs are the main resources required to undertake the activities and produce the outputs. These include personnel, civil works, equipment, materials, training, operational funds, etc.

Outputs are the physical and/or tangible goods and/or services delivered by the project, which describe the scope of the project. These may include kilometers of cable laid, number of villages connected to broadband, number of Telecenters established, etc.

Outcome is the key anchor of the project design. It describes what the project intends to accomplish by the end of project implementation, and by doing so makes clear what development issues the project will address. Examples may include the number of mobile users per 100 persons, the number of persons/households with broadband access per 100, etc., as well as utilization metrics.

Impact, also termed the goal or long-term objective, refers to policy objectives or, in certain cases, international development objectives such as the Millennium Development Goals (MDGs). The impact is wide in scope, will accrue at a date in the future (medium to long term) following project completion, and is influenced by many factors other than the project itself; examples include increases in income levels, economic empowerment, and enhanced literacy levels.

A detailed account of what to cover in M&E, and how to implement M&E in the realm of project management, is provided in the following sections.
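As a concrete illustration of these four levels, the short Python sketch below models the results chain for a hypothetical rural connectivity project. The class, field names, and example entries are illustrative assumptions echoing the definitions above, not part of any standard or actual USAF program.

```python
from dataclasses import dataclass

@dataclass
class ResultsChain:
    """One project's results chain, using the four levels defined above."""
    inputs: list[str]    # resources consumed: funds, equipment, personnel
    outputs: list[str]   # tangible deliverables that define project scope
    outcomes: list[str]  # what the project accomplishes by completion
    impacts: list[str]   # long-term, policy-level changes (many external factors)

# Hypothetical rural-connectivity project, with entries echoing the examples above
project = ResultsChain(
    inputs=["subsidy funds", "fiber-optic cable", "installation crews", "training"],
    outputs=["500 km of cable laid", "40 villages connected to broadband"],
    outcomes=["households with broadband access per 100 rises from 2 to 15"],
    impacts=["increased income levels", "enhanced literacy levels"],
)

for level in ("inputs", "outputs", "outcomes", "impacts"):
    print(f"{level}: {', '.join(getattr(project, level))}")
```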


2. Monitoring

Monitoring is an ongoing process through which project implementation is tracked against original plans and expectations, by measuring relevant indicators on a regular basis. An Indicator can be defined as a unit or variable by which a measurement is made. The general purpose of monitoring is to capture information on project inputs, project activities, and project outputs periodically through a set of indicators. As a rule, consensus among key stakeholders needs to be developed on the type of indicators and the periodicity and form of reporting, typically at the outset of the project, and formally incorporated within the Terms of Reference.

2.1 What to Monitor?

Various dimensions of a project or program may need to be monitored. These can include scope, schedule, quality, cost, risk, contract, human resources, etc. A possible list of monitoring indicators which can provide meaningful insight at the appropriate level is given below:

• Program/Organizational level
  o Funds spent in each region, divided politically, geographically, or on other criteria
  o Number of contracts awarded
  o Subsidy amounts disbursed
  o Percentage of amounts released against each contract/project
  o Percentage of operational expenditures vis-à-vis contributions collected
  o Subscriber base of mobile telephones in the intervened area
  o Computer usage rate in the intervened area
  o Volume of local ICT content developed
  o Mobile subscribers per 100 persons (segregated geographically)
  o Internet availability per 100 persons (segregated by technology)

• Project level
  o Project implementation progress
  o Financial tracking
  o Deliverables monitoring
  o Schedule monitoring
  o Quality of Service provided under the project

The list above provides a non-exhaustive set of indicators, which can be measured with a frequency agreed by the key stakeholders. As a general convention, project-level indicators are measured more frequently, ranging from daily to monthly depending on the nature of the project and the usefulness of the collected information, while program-level indicators are measured less frequently.
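As a minimal sketch of how a program-level indicator from the list above might be computed, the Python snippet below derives mobile subscribers per 100 persons, segregated geographically, from operator returns and census figures. All data, region names, and column names are hypothetical.

```python
import pandas as pd

# Hypothetical subscriber returns and census data; real figures would come
# from operator reports and the national statistics bureau.
subs = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "operator": ["OpA", "OpB", "OpA", "OpB"],
    "mobile_subscribers": [120_000, 80_000, 45_000, 30_000],
})
population = pd.Series({"North": 400_000, "South": 500_000}, name="population")

# Program-level indicator: mobile subscribers per 100 persons,
# segregated geographically (one row per region).
per_region = subs.groupby("region")["mobile_subscribers"].sum()
indicator = (per_region / population * 100).round(1)
print(indicator)
# North    50.0
# South    15.0
```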


2.2 Who Should Monitor?

Ideally, a USAF should have in-house capacity for monitoring indicators which are to be measured frequently. For monitoring overall program indicators, specialized firms can be hired, or macro-level secondary information may be used, e.g., data from the ITU, UNDP, national statistics bureaus, and other credible sources.

2.3 Monitoring Mechanisms

Monitoring tools and required metrics should be defined clearly during each project’s planning stage. Examples of monitoring tools are listed below for illustration:

• Periodic reports: information is collected and reported with an agreed frequency
• Random field verifications
• Computer-based project management tools, such as Microsoft Project, Primavera, etc.
• For technical monitoring: network performance reports, e.g., network usage, billing information, Quality of Service (QoS) reports, call success ratios, etc.
• GIS maps illustrating growth of the network over time
• Technical parameter validation through drive tests and other technical metrics

2.4 Institutional Arrangements for Monitoring

Monitoring should be conducted by an independent unit reporting directly to management, separate from units directly involved in project implementation, to eliminate any possible conflict of interest. A monitoring department generally consists of a dedicated team leader with sufficient experience in project monitoring in technical or social domains, and at least one dedicated resource with sufficient experience in designing and managing monitoring systems for similar projects/organizations. For technical support, the number of staff required to assist the monitoring team is determined by the geographical outreach of the organization and the overall workload. Furthermore, information from finance, human resources, contracts, and other USAF departments should flow to the Monitoring Section of the organization as needed.

It may not be possible to allocate this scope of resources to the monitoring function within a USAF initially. Some tasks may thus be outsourced, or otherwise minimized. As monitoring capabilities and experience grow within the organization, the required allocation of personnel and responsibilities of staff should become more consistent and predictable.

2.5 Monitoring Framework

A structured Monitoring framework should address the following questions:

• What is to be measured, i.e., which indicators?


• How will it be measured and recorded?
• Who will measure the information?
• How frequently will the information be measured and reported?

One underlying factor that can make monitoring more effective is to build consensus on all of the above parameters among all the key stakeholders, including planners, implementers, management, and beneficiaries. This should be incorporated within standardized processes that are applied to all projects at the planning stage, and within project Terms of Reference. Any deviation from the standard agreements should be negotiated at the time of contract award, and based upon reasonable concerns.
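The four questions above lend themselves to a simple, machine-readable registry of agreed answers per indicator. The Python sketch below is one minimal way such a registry could be recorded; the class, field names, and entries are all illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class MonitoringItem:
    indicator: str    # What is to be measured?
    method: str       # How will it be measured and recorded?
    owner: str        # Who will measure it?
    frequency: str    # How frequently is it measured and reported?

# Hypothetical entries agreed with stakeholders at the planning stage
framework = [
    MonitoringItem("Subsidy amounts disbursed", "finance system report",
                   "USAF monitoring unit", "monthly"),
    MonitoringItem("Quality of Service under project", "QoS reports and drive tests",
                   "contracted technical auditor", "quarterly"),
    MonitoringItem("Mobile subscribers per 100 persons", "operator data plus census",
                   "USAF monitoring unit", "annually"),
]

for item in framework:
    print(f"{item.indicator}: {item.method}; by {item.owner}; {item.frequency}")
```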


3. Evaluation

"Evaluation," in the context of an M&E framework, represents the periodic, objective assessment of a planned, ongoing, or completed project, program, or policy. Evaluation is used to respond to specific questions related to design, implementation, and results. In contrast to continuous monitoring, evaluation studies are carried out at discrete points in time and often seek an outside perspective from experts to add credibility to the findings.

3.1 Types of Evaluations

Evaluation can address two broad types of queries:

• Descriptive queries: the evaluation seeks to determine what is taking place, and describes processes, conditions, organizational relationships, and stakeholder views.

• Cause-and-effect queries: the evaluation examines outcomes and tries to assess what difference the intervention has made to the outcome, and how much of the impact is attributable to the project.

Evaluations can also generally cover two dimensions:

• Program/Project Performance Evaluation addresses such issues as relevance of the project to the overall goal of the organization, efficiency in delivery of the desired outputs, effectiveness of approach, timeliness of interventions, and other questions pertinent to the project/program design, management and operational decision making.

• Outcome/Impact Evaluation investigates the nature of the relationship between planned inputs and the outcomes and impacts that result from the project.

Program and project performance evaluations are linked to monitoring responsibilities, as ongoing project monitoring can provide a foundation for evaluating programs' efficiency and proper execution. A more thorough evaluation of this nature is typically conducted via an audit of the organization's activities, spending, and implementation of mandates and plans, with comparison of performance against originally expected results. Thus, such an audit may evaluate the number of locations in which ICT services or facilities have been established under a Fund program, and whether such services are being actively utilized as anticipated, in comparison with the projections made at the launch of the project. Such audits are typically required in connection with project subsidy payments, and under-performance may result in withholding of payments or other sanctions against contractors. Other aspects of such performance evaluations may include review of project costs and efficiency relative to original budget plans.

The rest of this module addresses the second category above, outcome/impact evaluations. These involve more complex methods to design and implement, and their findings are ultimately of the most substantial importance to a USAF's mission.


3.2 Impact Evaluations

Impact Evaluations are a particular type of evaluation that seeks to determine the nature of causal relationships among certain variables involved with a project. Impact evaluations are structured around one particular type of question: What is the impact (causality) of a program/project on an outcome of interest? An impact evaluation looks for the changes in outcomes that are directly attributable to the program. This focus on causality and attribution is the cornerstone of impact evaluations and determines the methodologies that can be used for this purpose.

To estimate the causal effect or impact of a program, any method chosen must estimate the counterfactual: the situation that would have existed for program participants if they had not participated in the program. A typical impact evaluation analysis, for a USAF that is involved in extending ICT services to un-served areas, might address the relationship between the level of poverty and technology/broadband diffusion, for example. Figure 2 illustrates the possible impacts of ICTs.

Figure 2: Possible Impacts of ICTs

3.2.1 When is Impact Evaluation Required?

Impact evaluation is required when a USAF or related organizations/sponsors need to make decisions or obtain information on one or more of the following:

• To what extent and under what circumstances could a successful pilot or small-scale program be replicated on a larger scale or with different population groups?


• What has been the contribution of the intervention to achieving the overall goals of the USAF?

• What are the potential development contributions of an innovative new program?

Impact Evaluation may be justified when decisions have to be made about the continuation, expansion, or replication of a program, and when the benefits of the evaluation (for example, money saved by making a correct decision or avoiding an incorrect one) exceed the costs of conducting it. An expensive Impact Evaluation that produces important improvements in program performance can be highly cost-effective; even minor improvements in a major program may result in significant monetary savings to the organization, as well as benefits to the public.

Ideally, impact evaluation should be performed a minimum of two times during the life cycle of a program/project: ex-ante, before the implementation of the program (also known as a baseline evaluation), and ex-post, after implementation of most or all projects under the program, when sufficient time and experience have accumulated to allow meaningful evaluation. The question of 'when' should be embedded in the program's overall plan. If resources permit and the implementation period is long, with a sizeable financial commitment, a mid-term evaluation is also worthwhile.

3.2.2 How to Conduct Impact Evaluations

There is no one-size-fits-all methodology for impact evaluation; the best design depends on a number of factors:

• what is being evaluated (for example, a small project, a large program, or a nationwide policy);
• the purpose of the evaluation;
• budget, time, and data constraints; and
• the time horizon (medium- and long-term impacts, or initial estimates of potential future impacts).

Impact Evaluation designs can also be classified according to whether they are commissioned at the start of the project, during implementation, or after the project is completed, and according to their level of methodological rigor. A general framework that can be adopted for evaluation and subsequent capturing of impact is illustrated in Figure 3 below.


Figure 3: Evaluation Framework

Two approaches used for impact evaluation are given below.

Approach 1: The time series method studies the same population, against a uniform checklist of indicators, at two points in time. This is also known as the before-and-after method.

Approach 2: The cross-section method studies two different cohorts at the same time, against a uniform set of indicators: one cohort comprising project beneficiaries, and one where project activities have not been implemented. This is also known as the with-and-without method.

The focus of both approaches is to isolate the role of the program's intervention in the overall impact on the status of the target beneficiaries. The impacts are the result of multiple factors, which can be broadly divided into two categories: 1) impacts attributable to the project, and 2) impacts attributable to all other factors. This relationship is given in the equation below:

Impact = f(A, B), where A corresponds to project-related interventions, and B corresponds to all other external factors.

[Figure 3 schematic: a beneficiary cohort (P_with) and a non-beneficiary cohort (P_without) are each evaluated at project start (T0, baseline evaluation E0), project midpoint (TM, mid-term evaluation EM), and project closure (TC, post evaluation EC). Approach 1 computes Impact = EC - E0 within the beneficiary cohort; Approach 2 computes Impact = E_with - E_without across the two cohorts. Areas covered in each evaluation: demographic, socio-economic, technology, usage patterns, and other relevant indicators.]
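To make the two estimators from the figure concrete, the Python sketch below applies them to invented survey means. The difference-in-differences line at the end is a common refinement that combines both approaches; it is an addition for illustration, not something prescribed in the text above, and all figures are hypothetical.

```python
# Hypothetical mean outcome (e.g., broadband users per 100 persons) from
# baseline (E0) and post-project (EC) surveys of a beneficiary cohort,
# plus a parallel non-beneficiary cohort. All numbers are invented.
e0_with, ec_with = 4.0, 19.0        # beneficiary cohort at T0 and TC
e0_without, ec_without = 4.5, 9.5   # non-beneficiary cohort at T0 and TC

# Approach 1 (before and after): same population at two points in time
impact_time_series = ec_with - e0_with             # 15.0

# Approach 2 (with and without): two cohorts at the same point in time
impact_cross_section = ec_with - ec_without        # 9.5

# Difference-in-differences nets out change that would have happened anyway
impact_did = (ec_with - e0_with) - (ec_without - e0_without)  # 10.0

print(f"before/after:  {impact_time_series:+.1f}")
print(f"with/without:  {impact_cross_section:+.1f}")
print(f"diff-in-diffs: {impact_did:+.1f}")
```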


Evaluations can be carried out using a variety of tools or methods. These include the following:

• Beneficiary assessment
• Surveys
• Econometric analysis (regression; see the sketch after this list)
• Case studies
• Cost-benefit analysis
• Cost-effectiveness analysis
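As a sketch of the econometric (regression) approach from the list above, the snippet below fits an ordinary least squares model on synthetic data to estimate a hypothetical telecenter effect on household income. The variables, coefficients, and data are all invented for illustration; a real study would use field survey data and proper inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-section: household income (outcome) regressed on a
# telecenter-access dummy plus a control (years of schooling).
n = 200
telecenter = rng.integers(0, 2, n)      # 1 = household's village has a telecenter
schooling = rng.normal(8, 2, n)         # control variable
income = 100 + 12 * telecenter + 5 * schooling + rng.normal(0, 10, n)

# Ordinary least squares: income ~ const + telecenter + schooling
X = np.column_stack([np.ones(n), telecenter, schooling])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
print(f"estimated telecenter effect: {beta[1]:.1f}")  # close to the true 12
```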

3.2.3 Information Requirements and Data Sources

Impact Evaluation is based on information regarding the status of the beneficiaries of the development intervention. The information is collected from the field using a variety of data collection procedures, such as surveys, opinion polls, interviews, focus groups, observations, etc., as well as certain statistical or empirical observations. The prevailing question to ask at the outset is: "What are the essential information requirements for conducting this particular evaluation?"

A sample checklist is given in Annex 2, which can be used for the impact assessment of Community Computer/Information Centers. For the purpose of illustration, Box 1 indicates a set of research questions which can be addressed to capture the impact of Telecenters.

Box 1: Research questions for illustration

• What are the social, economic, and cultural benefits of the Telecenters?
• What improvements in the existing services (social, economic, technological, cultural, and environmental) were brought about by the establishment of Telecenters?
• How many community users benefit from the improved services?
• How were specific community organizations and institutions impacted by the Telecenters?
• How were benefits distributed across individuals, groups, and organizations in the community?
• Did the Telecenters lead to more local development initiatives?
• What types of Telecenter services and ICT applications were most successful in delivering the intended development impact?
• What are the critical success factors in the financial sustainability of Telecenters?


3.2.4 Institutional Arrangements for Conducting Evaluations

Ideally, a separate evaluation unit/department needs to be established within the USAF to be responsible for evaluation (this can also include the Monitoring functions). However, in most cases the actual conduct of the evaluation studies will be undertaken by external firms, so the Evaluation unit will be mainly responsible for engaging and overseeing such outside experts. In this respect, the key responsibilities of the evaluation department are to:

• develop the Scope of Work/Terms of Reference for evaluation studies;
• solicit bids from, and enter contracts with, qualified evaluation organizations/firms;
• facilitate and monitor implementation of the contract;
• review findings, and assess the validity and meaning of study results; and
• report results to higher management.

The line of authority and reporting for the Evaluation department should be linked directly with top USAF management, to remove any chance of conflict of interest, particularly with groups involved in project design and award, for example. The evaluation team should typically be headed by a senior-level evaluation expert with sufficient experience conducting evaluations of the relevant projects, or similar types of research and analytical studies. The team leader should be supported by one to three mid-level professionals, depending on the required level of effort and the frequency and scope of evaluations.


Annex 1: Template for Terms of Reference for Evaluation Firm

1- BACKGROUND AND CONTEXT

The background section makes clear what is being evaluated and identifies the critical social, economic, political, geographic, and demographic factors within which the agency operates, which have a direct bearing on the evaluation. This description should be focused and concise (a maximum of one page), highlighting only those issues most pertinent to the evaluation.

2- EVALUATION PURPOSE

This section explains clearly why the evaluation is being conducted, who will use or act on the evaluation results, and what will be done based on the results. A clear statement of purpose provides the foundation for a well-designed evaluation.

3- EVALUATION SCOPE AND OBJECTIVES

This section defines the parameters and focus of the evaluation. The section answers the following questions:

• What aspects of the intervention are to be covered by the evaluation? This can include the time frame, implementation phase, geographic area, and target groups to be considered, and, as applicable, which projects (outputs) are to be included.

• What are the primary issues of concern to users that the evaluation needs to address, or the objectives the evaluation must achieve?

4- EVALUATION QUESTIONS

Evaluation questions define the information that the evaluation will generate. This section proposes the questions that, when answered, will give intended users of the evaluation the information they seek in order to make decisions, take action, or add to knowledge. While the agency should initially define the questions it seeks to answer, responding firms may be encouraged to propose additional questions as well, based on their experience and insight from similar evaluations.

5- METHODOLOGY

The ToR may suggest an overall approach and method for conducting the evaluation, as well as data sources and tools that will likely yield the most reliable and valid answers to the evaluation questions within the limits of resources. However, responding firms should also be asked to propose a methodology that is consistent with their past experience and best practices.

6- EVALUATION PRODUCTS (DELIVERABLES)

This section describes the key evaluation products the evaluation team will be accountable for producing. These products may include:


• Inception report: An inception report should be prepared by the evaluators before going into the full-fledged data collection exercise. It should detail the evaluators' understanding of what is being evaluated and why, showing how each evaluation question will be answered by way of proposed methods, proposed sources of data, and data collection procedures. The inception report should include a proposed schedule of tasks, activities, and deliverables, designating a team member with lead responsibility for each task or product. The inception report provides the USAF and the evaluators with an opportunity to verify that they share the same understanding about the evaluation, and to clarify any misunderstandings at the outset.

• Draft evaluation report: The sponsors and key stakeholders in the evaluation should review the draft evaluation report to ensure that the evaluation meets the required quality criteria.

• Final evaluation report: The finalized version of the report, after incorporation of all comments and suggestions.

7- EVALUATION TEAM COMPOSITION & COMPETENCIES

This section details the specific skills, competencies, and characteristics needed in the evaluation team, as well as the expected structure and composition of the team, including the roles and responsibilities of team members. Generally speaking, a multidisciplinary team is required for a comprehensive evaluation, depending on the nature of the evaluation.

8- IMPLEMENTATION ARRANGEMENTS

This section describes the organization and management structure for the evaluation and defines the roles, key responsibilities, and lines of authority of all parties involved in the evaluation process. Implementation arrangements are intended to clarify expectations, eliminate ambiguities, and facilitate an efficient and effective evaluation process.

9- TIME FRAME FOR THE EVALUATION PROCESS

This section lists and describes all tasks and deliverables for which the evaluators or evaluation team will be responsible and accountable, along with respective timelines, e.g., desk review, interviews, data collection, inception report, and other key milestones.

10- COST

This section should indicate the resources available for the evaluation (consultant fees, travel, subsistence allowance, etc.). Exact budgets may be determined by competitive bidding.


Annex 2: Sample Checklist of “Information requirements for Impact Assessment of Community Computer/Information Centers”

• General information about the area
• Public and private sector facilities mapping (schools, hospitals, post offices, and other facilities)
• Demographic information on the area (number of persons in age groups, gender-segregated)
• Number of persons who can access computers (male, female)
• Information on cost per use of computer, PCO, and other services
• Information on persons using computers for educational, health, or other purposes
• Information on persons using computers for web-based earning
• Distance and time from the nearest computer center
• Information on other uses of computers and ICT
• Number of computers per 100 population
• Number of persons who can access the Internet
• Number of persons who know how to use computers
• Information on purposes for using computers
• Information on Internet and computer use for learning
• Information on the status of e-commerce
• Information on the status of e-health
• Information on the status of e-governance
• Information on the status of e-agriculture
• Information on use of computers for social networking
• Information on economic activities associated with Internet and computer use
• Information on the status of networking
• Information on presence of local ICT content and applications
• Information on the financial sustainability of the Center
• Information on community participation in management of the Center

Note: The list above is for illustration only; the true information requirements can be determined only after studying the objective of the evaluation and the context in which it is conducted. In general, more information is better, as long as it is reliable and not too expensive to obtain.


Annex 3: List of Core ICT Indicators

Core indicators on ICT infrastructure and access

A1 Fixed telephone lines per 100 inhabitants
A2 Mobile cellular subscribers per 100 inhabitants
A3 Computers per 100 inhabitants
A4 Internet subscribers per 100 inhabitants
A5 Broadband Internet subscribers per 100 inhabitants
A6 International Internet bandwidth per inhabitant
A7 Percentage of population covered by mobile cellular telephony
A8 Internet access tariffs (20 hours per month), in US$, and as a percentage of per capita income
A9 Mobile cellular tariffs (100 minutes of use per month), in US$, and as a percentage of per capita income
A10 Percentage of localities with public Internet access centers (PIACs) by number of inhabitants (rural/urban)

Core indicators on access to, and use of, ICT by households and individuals

HH1 Proportion of households with a radio
HH2 Proportion of households with a TV
HH3 Proportion of households with a fixed line telephone
HH4 Proportion of households with a mobile cellular telephone
HH5 Proportion of households with a computer
HH6 Proportion of individuals who used a computer (from any location) in the last 12 months
HH7 Proportion of households with Internet access at home
HH8 Proportion of individuals who used the Internet (from any location) in the last 12 months
HH9 Location of individual use of the Internet in the last 12 months
HH10 Internet activities undertaken by individuals in the last 12 months

Core indicators on use of ICT by businesses

B1 Proportion of businesses using computers
B2 Proportion of employees using computers
B3 Proportion of businesses using the Internet
B4 Proportion of employees using the Internet
B5 Proportion of businesses with a Web presence
B6 Proportion of businesses with an intranet
B7 Proportion of businesses receiving orders over the Internet
B8 Proportion of businesses placing orders over the Internet

Core indicators on the ICT sector and trade in ICT goods

ICT1 Proportion of total business sector workforce involved in the ICT sector


ICT2 Value added in the ICT sector (as a percentage of total business sector value added)
ICT3 ICT goods imports as a percentage of total imports
ICT4 ICT goods exports as a percentage of total exports

Source: Core ICT Indicators 2010, ITU
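Many of these core indicators are simple ratios that can be computed directly once the underlying counts are collected. The Python sketch below computes A2, A5, and A8 from hypothetical national figures; the numbers are invented, and expressing A8 against monthly per capita income is an assumption made here for illustration (the ITU definition specifies the exact basket and income basis).

```python
# Hypothetical national figures; real data would come from operators,
# the regulator, and the national statistics bureau.
population = 5_200_000
mobile_subscribers = 3_900_000
broadband_subscribers = 416_000
internet_tariff_20h_usd = 9.50   # A8 basket: 20 hours of access per month
per_capita_income_usd = 2_400    # annual per capita income (assumed)

a2 = mobile_subscribers / population * 100       # A2: per 100 inhabitants
a5 = broadband_subscribers / population * 100    # A5: per 100 inhabitants
# A8 expressed as a percentage of (assumed monthly) per capita income
a8_pct = internet_tariff_20h_usd / (per_capita_income_usd / 12) * 100

print(f"A2 mobile subscribers per 100: {a2:.1f}")           # 75.0
print(f"A5 broadband subscribers per 100: {a5:.1f}")        # 8.0
print(f"A8 tariff as % of monthly income: {a8_pct:.1f}%")   # 4.8%
```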