ADVANCING ANALYSIS IN THE EDUCATION SECTOR OF ETHIOPIA - JOINT REPORT

Federal Democratic Republic of Ethiopia
Ministry of Education

May 2020

Table of contents

Executive summary
Conclusion and recommendations
I. Introduction
II. Main Data Sources
   1. Education Management Information System (EMIS)
   2. General Education Inspection Directorate (GEID)
   3. National Educational Assessment and Examination Agency (NEAEA)
III. Main problem: data sub-utilization
   Reason 1: Limited focus on improving analysis, with most attention on data collection
   Reason 2: The data is not ready for analysis
   Reason 3: Little feedback to lower levels of administration
   Reason 4: Analysis is usually aggregated at the federal and regional levels
IV. Solutions to boost data analysis
   What is the data platform?
V. Joint Analysis
   1. Data Description and Integration
      A. EMIS Data
      B. Inspection Data
      C. Learning Outcomes Data
      D. Spatial Data
   2. Key Indicators
      A. EMIS Indicators
      B. Inspection Indicators
      C. Learning Indicators
   3. Exploring Relationships between EMIS, Inspection and Learning
      A. Distribution of Schools by Inspection Level
      B. Relationship between EMIS Indicators and Performance
      C. Relationship between Performance and Learning
      D. Joint Analysis with Additional Datasets
   4. Islands of opportunities: positive outliers
   5. Tools for analysis
      Woreda, Zone and Region Report Cards
      Dashboard to compare key EMIS, Inspection and Learning results by regions and woredas
      Dashboard to identify bottom and top performing schools, woredas and zones in key EMIS, Inspection and Learning results by regions
      Dashboard to relate EMIS, Inspection and Learning results
VI. Conclusion and recommendations
VII. Annex
   Information about inspection indices and standards
   Additional description of key variables
   Annex YY
   Annex XX

Tables Index

Table 1: Five key functions of the data platform along with examples
Table 2: Number of schools that have each type of data in the final data set
Table 3: Number of schools with Inspection and Learning data in the final data set
Table 4: Number of schools with inspection data in the final data set by round of inspection
Table 5: Schools' most recent inspection result
Table 6: Number of schools with learning data in the final data set
Table 7: Number of schools with spatial information in the final data set
Table 8: Availability of school GPS coordinates in the final data set
Table 9: School grant allocation matrix
Table 10: Top performing schools in bottom performing zones
Table 11: Inspection indices and standards

Figures Index

Figure 1: Five key functions of the data platform
Figure 2: Grade 4 Survival Rate versus Performance with trend line
Figure 3: Grade 10 exam results versus Performance with trend line
Figure 4: Data collection and processing flow in the education sector of Ethiopia
Figure 5: Linear information-impact cycle
Figure 6: Circular information-impact cycle
Figure 7: Ideal return of information flow from the federal level
Figure 8: Data flow and information flow from all levels of administration
Figure 9: Gender Parity Index in Grades 1-4 by Region, 2011 E.C.
Figure 10: Gender Parity Index in Grades 1-4 by Region and by Woreda, 2011 E.C.
Figure 11: Network of integrated education sector datasets
Figure 12: Number of students in the final data set
Figure 13: Distribution of schools by Region
Figure 14: Spatial distribution of schools colored by inspection level
Figure 15: Distribution of inspection level in the final data set
Figure 16: Woredas with sample-based learning results in the final data set
Figure 17: Woredas with national exams learning results in the final data set
Figure 18: Availability of school GPS coordinates in the final data set
Figure 19: Girls to Boys Ratio by Region and Woreda
Figure 20: Grade 2 to Grade 1 Ratio by Region and Woreda
Figure 21: Average Grade 4 Survival Rate by Region and Woreda
Figure 22: Number of students per teacher by region and by woreda
Figure 23: Average of schools' PTR (students per teacher) by region and by woreda
Figure 24: Mathematics textbook pupil ratio by region and by woreda
Figure 25: English textbook pupil ratio by region and by woreda
Figure 26: Average performance score by Region and Woreda
Figure 27: Spatial distribution of average performance inspection result by Zone
Figure 28: Average National Learning Assessment score by region and by school
Figure 29: Average Grade 10 exam by region and by school
Figure 30: Average Grade 12 exam by region and by school
Figure 31: Distribution of schools by inspection performance
Figure 32: Gender Parity Index by performance bins, 2011 E.C.
Figure 33: Pupil-Teacher Ratio by performance bins, 2011 E.C.
Figure 34: G2 to G1 Ratio by performance bins, 2011 E.C.
Figure 35: Grade 4 Survival Rate by performance bins, 2011 E.C.
Figure 36: Grade 4 Survival Rate versus Performance
Figure 37: Grade 4 Survival Rate versus Performance with trend line
Figure 38: Grade 10 exam results with and without inspection
Figure 39: Grade 10 exam score by performance bins, 2011 E.C.
Figure 40: Grade 10 exam results versus Performance
Figure 41: Grade 10 exam results versus Performance with trend line
Figure 42: Average school grant per student, 2011 E.C.
Figure 43: Average school grant per student by performance bins, 2011 E.C.
Figure 44: Total school grant per school by performance bins (average), 2011 E.C.
Figure 45: Average school size by performance bins, 2011 E.C.
Figure 46: Woredas with Phase 1 and Read II schools
Figure 47: Number of schools and students in Phase 1 and Read II schools
Figure 48: Frequency of Performance Score in the final data set, colored by project
Figure 49: Average Input score by region and by woreda
Figure 50: Average Process score by region and by woreda
Figure 51: Average Output score by region and by woreda
Figure 52: Average School resources index score by region and by woreda
Figure 53: Average School management index score by region and by woreda
Figure 54: Average Students engagement index score by region and by woreda
Figure 55: Average Teacher effectiveness index score by region and by woreda
Figure 56: Average Intermediate outcome index (a) score by region and by woreda
Figure 57: Average Intermediate outcome index (b) score by region and by woreda
Figure 58: Average deliverology score by region and by school
Figure 59: Scatter plots between EMIS indicators & performance
Figure 60: Average Grade 10 examinations score versus school size
Figure 61: Average Grade 12 examinations score versus school size
Figure 62: Average student score in Mathematics (NLA) versus school size
Figure 63: Average student score in English (NLA) versus school size

Abbreviations and Acronyms

EMIS Education Management Information System

ESDP Education Sector Development Programme

GEID General Education Inspection Directorate

GEQIP-E Ethiopia General Education Quality Improvement Program for Equity

GoE Government of Ethiopia

MoE Ministry of Education

MoF Ministry of Finance

PRMD Planning and Resource Mobilization Directorate

REBs Regional Education Bureaus

NEAEA National Educational Assessment and Examination Agency


Executive summary

The education sector of Ethiopia collects a large amount of data for two main purposes: monitoring and evaluating the progress of the sector, and making education policies built on evidence. There are several institutions that contribute to the collection of education data in the country, such as Government Offices, NGOs, Development Partners (DPs), civil societies, and others. However, the main sources of information that are critical for monitoring and planning originate from the Education Management Information System (EMIS), the General Education Inspection Directorate (GEID) and the National Educational Assessment and Examination Agency (NEAEA) of the Ministry of Education.

Despite its availability and the countless efforts undertaken to collect it, the data could be more widely utilized for education policy formulation in Ethiopia. We have identified four technical barriers that the sector needs to overcome in order to enhance the utilization of the data at hand:

1. There could be more focus on increasing and improving analysis, not only on the collection of data. The data may not be perfectly accurate, but it is already good enough to guide policy. Although the current efforts to improve the data collection process are widely acknowledged and encouraged, there should be more emphasis on increasing data analysis.

2. All the data that the sector produces should be located in one place and integrated. Up to now, the ministry does not have any system that collects, integrates and prepares all the available datasets produced by different Offices or Directorates: each institution has its own mechanisms to collect data, classify observations and process the information. Although this level of independence is valuable, it makes any attempt to answer key policy questions that require joint investigations very problematic. In the absence of such a system, there will always be little coordination between the relevant institutions that add data to the sector.

3. Lower levels of administration should receive more feedback on the data they submit. Providing useful feedback to schools, woredas, zones and regions, after all the effort they undertake to collect the data, is not only fair but perhaps the best approach to improving the system. Bringing data into action at all levels can be one of the most effective ways to help parents, teachers, principals, education experts, and all those involved in the education sector to advance learning.

4. Related to the previous reason, most of the time analysis is aggregated at the national and regional levels, ignoring the diversity and variations in performance at the lower levels. It is common to see reports that average the results at the regional level, ignoring the great level of variation inside each region. A slight change in the level of analysis has the potential to reveal more precise and actionable conclusions. Especially if the aim is to make analysis more useful for lower levels of administration, it is important to show disaggregated analysis that does not ignore the variation in contexts and that can be easily translated into action.

In this report, these problems are explored in detail, asking questions like why they arise, how they can be fixed, and what actions the Ministry of Education (MoE) has undertaken to solve them. Based on the experience and the findings of this report, the creation of the data platform stands as one of the key solutions. The data platform is a system that makes data ready and available for analysis. By setting the guidelines for collection, bringing all the data to the same place, and providing comprehensive information, this system will enhance the capacity to understand the sector and to convert data into impact (Figure 1).

Figure 1: Five key functions of the data platform: (1) system diagnosis, (2) monitor delivery, (3) evaluate national policy, (4) inform directorates, (5) drive research

In order to provide examples of the use of integrated data sets and the potential advantage of having a data platform, a joint analysis that integrates EMIS, inspection, and learning outcomes data was prepared collaboratively by EMIS, GEID and NEAEA. The analysis and the tools presented in this part of the report focus on trends and regional variations in student outcomes, including learning outcomes and internal efficiency outcomes; determinants of learning outcomes and gaps; lessons learned and good practices; and a set of recommendations.

The objective of the integrated analysis is twofold: first, to demonstrate the potential of combining the different sources of information, and second, to answer key policy questions by testing the accuracy of the data, identifying indicator gaps, finding specific places for action, and creating tools for analysis.

For the joint analysis we integrated three main categories of data sets (the 2011 E.C. EMIS data sets, three sets of inspection data, and learning outcomes data sets), along with additional data sets. To the best of our knowledge, this is the first time that any study analyzes these three data sets together. The integrated final data set has a total of 37,777 schools, of which 32,259 had at least one round of inspection data and 7,011 had at least one record of learning data. From each data set we selected key variables for analysis that cover all areas (access, quality, efficiency, equity, and learning). The selection of the indicators took into account the targets set in the Educational Sector Development Program V (ESDP V) and the General Education Quality Improvement Program for Equity (GEQIP-E), and the availability of data.

One purpose of the integration of data sets is to investigate the relationship and the complementarity among the key indicators. For that, we studied the relation between the inspection level of schools and their results in other key indicators. Our hypothesis is that schools with higher inspection levels should have better results in EMIS and learning indicators. Therefore, this report investigates (i) the relationship between inspection results and EMIS indicators, and (ii) the relationship between inspection results and learning indicators.


We find that there is a lot of variation in the data. Low performing schools in inspection have both low and high values in EMIS and learning results, and similarly, high performing schools in inspection have both low and high values in EMIS and learning results. When relating inspection and EMIS results, we found that the inspection performance of a school, even though it could not explain much of the variation in EMIS indicators, could significantly predict them. In other words, inspection performance is a significant but imprecise predictor of EMIS results (Figure 2).

Figure 2: Grade 4 Survival Rate versus Performance with trend line

On the contrary, when studying the relationship between inspection and learning outcomes, we find that inspection results were not correlated with and could not significantly predict learning results (Figure 41). These two findings indicate that the inspection process captures access, quality, efficiency, and equity indicators better than learning outcomes.

Figure 3: Grade 10 exam results versus Performance with trend line

Having an integrated data set allows a more comprehensive understanding of the sector because a single location can be analyzed from different points of view. To support this intention, we created tools that give users a comprehensive understanding of the results in their administration and help them identify where and on what they need to place more attention. Namely, we created one report card for each school, woreda, zone, and region in the country, and three interactive dashboards. The report cards give an overview of all the key indicators' results in each specific location, highlighting dimensions where each location is performing well and where it is lagging. The three dashboards permit the comparison of results among regions and woredas, and the easy identification of bottom and top performing schools, woredas, and zones by region. These tools are just a few examples of the various opportunities that the integrated data set offers.

i. For each school, woreda, zone and region of the country, we created a report card: one for each school (>37,000 school report cards), each woreda (>1,000 woreda report cards), each zone (>100 zone report cards), and each region (11 region report cards). These reports show basic information about each location, such as the number of students enrolled and the number of teachers, as well as its results in each of the key EMIS, inspection, and learning indicators. The main objective of the report card is to provide leaders with a comprehensive overview of their performance in each of the key indicators, and to help them identify where their administration is lagging behind. For regions, zones and woredas, we added additional pages that explore the result of each indicator in more detail.

ii. Dashboard to compare key EMIS, Inspection and Learning results by regions and woredas. This dashboard allows users to select the indicators they want to analyze and see the average result by region as well as the spread of results across the woredas inside each region. The tool is useful for seeing whether a region has woredas with extremely low scores where it can target its attention.

iii. Dashboard to identify bottom and top performing schools, woredas and zones in key EMIS, Inspection and Learning results by regions. This dashboard allows the user to choose a region and identify which schools, woredas, or zones are performing the best and the worst. It allows users to select any location in the country, choose the key indicators to analyze, and explore the top and bottom performers. The dashboard is flexible in terms of the type of location to display (school, woreda or zone) and the number of top and bottom items to be selected (from the bottom and top 5 to the bottom and top 100); a sketch of this selection logic follows this list.

iv. Dashboard to relate EMIS, Inspection and Learning results. This dashboard allows the user to choose the indicators he or she wants to compare and investigate their correlation. One can choose to compare EMIS indicators vs. Inspection indicators, Inspection indicators vs. learning indicators, and EMIS indicators vs. learning indicators.
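As a hedged illustration of the selection logic behind the third dashboard (a sketch only; the actual dashboards are interactive tools, and all column names here are hypothetical):

```python
import pandas as pd

def top_bottom(df: pd.DataFrame, indicator: str, n: int = 5,
               level: str = "school") -> pd.DataFrame:
    """Return the n best and n worst performers on one indicator.

    Assumes one row per school with hypothetical columns such as
    'school', 'woreda', 'zone', 'region' plus indicator columns.
    `n` mirrors the dashboard's range of 5 to 100 items.
    """
    agg = df.groupby(level)[indicator].mean().reset_index()
    top = agg.nlargest(n, indicator).assign(group="top")
    bottom = agg.nsmallest(n, indicator).assign(group="bottom")
    return pd.concat([top, bottom], ignore_index=True)

# Example: the 10 best and worst woredas on Grade 4 survival rate in one region
# top_bottom(data[data["region"] == "Oromia"], "grade4_survival_rate",
#            n=10, level="woreda")
```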

Conclusion and recommendations

To continue advancing analysis in the education sector of Ethiopia, based on the findings in this report, the team suggests the following recommendations:

- Continue the creation of the data platform: a system that sets the guidelines for collection; that reunites and integrates all the data produced in the sector; and that provides useful analysis to all levels of education. This system should be accessible to all relevant actors in the sector (e.g. directors, planning experts, etc.).

- To allow the integration of data sets, continue adopting EMIS school codes in all the data collected in the sector. There is an urgent need to include EMIS school codes in all the data produced by the NEAEA, especially on the grade 8, grade 10 and grade 12 national examinations. Moreover, this initiative should also include data sets that are not collected by Government institutions (e.g. the Young Lives study, school mapping, etc.).

- Improve the EMIS data entry software to avoid any duplication of EMIS school codes.

- Encourage more analysis of the data at hand, both inside the institutions that collect the data and outside them. Analysis should not only happen inside the directorates that collect the data sets; it should also occur in other directorates (e.g. PRMD, TDP), at other levels of administration (e.g. REBs, WEOs), and outside Government institutions (e.g. universities, researchers).

- Perform analysis of the data available to identify problems in the quality of data. EMIS enrollment data is of higher quality because it is widely used. However, other indicators, like WASH facilities, are of low quality or incomplete. Stressing the need for these variables for analysis and action will improve the quality of these indicators.

- Perform more analysis of the data available to identify those indicators that are not useful for action. This will improve the data collection process by, for instance, reducing the length of the questionnaires.

- Provide useful analysis to lower levels of education. This includes disaggregated analysis that allows an easier identification of the main problems in a specific location and that does not ignore variations (e.g. region averages ignore variations inside regions).

- Expand the collection of learning outcomes at primary levels. The sector produces EMIS and inspection data for all schools in the country, but learning outcome data only for all secondary schools. For primary schools, there are only sample-based studies that are not implemented every year. It is important to expand the understanding of learning at lower levels of education.

- The lack of correlation between inspection scores and learning outcomes suggests that the inspection process is not capturing the quality of schools well. Two potential hypotheses that can explain the lack of correlation are the subjectivity of some of the inspection standards and/or the lack of independence of the inspection process (the inspection directorate is not an independent body). However, the reason why this is the case should be further investigated.

- The report cards and the dashboards should be discussed, improved, and shared with relevant actors in the sector. Moreover, these are only a few examples of what can be built; more tools that help transform data into action should be created.


I. Introduction

The education sector of Ethiopia collects a large amount of data for two main purposes: monitoring and evaluating the progress of the sector, and making education policies built on evidence. There are several institutions that contribute to the collection of education data in the country, such as Government Offices, NGOs, Development Partners (DPs), civil societies, and others. However, the main sources of information that are critical for monitoring and planning originate from the Education Management Information System (EMIS), the General Education Inspection Directorate (GEID) and the National Educational Assessment and Examination Agency (NEAEA) of the Ministry of Education.

Despite its availability and the countless efforts undertaken to collect it, the data could be more widely utilized. We have identified four technical barriers that the sector needs to overcome in order to enhance the utilization of the data at hand. First, there could be more focus on increasing and improving analysis, not only on collection. Second, all the data that the sector produces should be located in one place and integrated. Third, lower levels of administration should receive more feedback on the data they submit. And fourth, related to the previous point, most of the time analysis is aggregated at the national and regional levels, ignoring the diversity and variations in performance at the lower levels.

In this report, we will explore these problems in detail, asking questions like why they arise, how they can be fixed, and what actions the Ministry of Education (MoE) has undertaken to solve them. We will propose the data platform as the key system to put in place to make data ready and available for analysis. By setting the guidelines for collection, bringing all the data to the same place, and providing comprehensive information, this system will enhance our capacity to understand the sector and to convert data into impact. Finally, we will provide examples of the use of integrated data sets in the joint analysis section, the core section of this report.

This joint analysis was prepared collaboratively by EMIS, GEID and NEAEA by analyzing the integrated EMIS, inspection, and learning outcomes data. The analysis and the tools presented here focus on trends and regional variations in student outcomes, including learning outcomes and internal efficiency outcomes; determinants of learning outcomes and gaps; lessons learned and good practices; and a set of recommendations.

The objective of the integrated analysis is twofold: first, to demonstrate the potential of combining the different sources of information, and second, to answer key policy questions by testing the accuracy of the data, identifying indicator gaps, finding specific places for action, and creating tools for analysis.


For the joint analysis we integrated three main categories of data sets (the 2011 E.C. EMIS data sets, three sets of inspection data, and learning outcomes data sets), along with additional data sets. To the best of our knowledge, this is the first time that any study analyzes these three data sets together. The integrated final data set has a total of 37,777 schools, of which 32,259 had at least one round of inspection data and 7,011 had at least one record of learning data. From each data set we selected key variables for analysis that cover all areas (access, quality, efficiency, equity, and learning). The selection of the indicators took into account the targets set in the Educational Sector Development Program V (ESDP V) and the General Education Quality Improvement Program for Equity (GEQIP-E), and the availability of data.

One purpose of the integration of data sets is to investigate the relationship and the complementarity among the key indicators. For that, we studied the relation between the inspection level of schools and their results in other key indicators. Our hypothesis is that higher level schools should have better results in EMIS and learning indicators. We, therefore, investigated (i) the relationship between inspection results and EMIS indicators, and (ii) the relationship between inspection results and learning indicators.

We find that there is a lot of variation in the data. Low performing schools in inspection have both low and high values in EMIS and learning results, and similarly, high performing schools in inspection have both low and high values in EMIS and learning results. When relating inspection and EMIS results, we found that the inspection performance of a school, even though it could not explain much of the variation in EMIS indicators, could significantly predict them. In other words, inspection performance is a significant but imprecise predictor of EMIS results. On the contrary, when studying the relationship between inspection and learning outcomes, we find that inspection results were not correlated with and could not significantly predict learning results. These two findings indicate that the inspection process captures access, quality, efficiency, and equity indicators better than learning outcomes.

Having an integrated data set allows a more comprehensive understanding of the sector because a single location can be analyzed from different points of view. To support this intention, we created tools that give users a comprehensive understanding of the results in their administration and help them identify where and on what they need to place more attention. Namely, we created one report card for each school, woreda, zone, and region in the country, and three interactive dashboards. The report cards give an overview of all the key indicators' results in each specific location, highlighting dimensions where each location is performing well and where it is lagging. The three dashboards permit the comparison of results among regions and woredas, and the easy identification of bottom and top performing schools, woredas, and zones by region. These tools are just a few examples of the various opportunities that the integrated data set offers.


This report is divided into six sections. Following the introduction, Section II provides context on the main bodies that contribute data to the sector. Section III discusses the main problems to overcome in order to improve the utilization of data. Section IV presents the data platform as the main solution to such problems. Section V, the core of the report, provides analysis that demonstrates the value of having an integrated data set; this section offers a series of lessons about the collected data as well as a set of tools that aim to help leaders and actors in the sector transform data into impact. Section VI concludes.

II. Main Data Sources

1. Education Management Information System (EMIS)

The Education Management Information System (EMIS) is a structure inside the MoE which collects, processes, and analyzes school administrative data annually from all the schools in the country. It has its main office in the central government and branches in all Regional Education Bureaus (REBs).

A printed questionnaire, which can be up to twenty pages long, is sent to all schools and is answered by the school principals. The questionnaire includes indicators focusing on access, quality, equity and efficiency. For example, in terms of access, the survey collects data on the number of students, number of teachers and staff, number of classrooms, school infrastructure, location, etc. Likewise, with regards to quality, it inquires about the level of education and the experience of teachers and staff, as well as the availability of textbooks. For equity, it disaggregates the questions by gender, special needs, age, and type of school. Finally, for efficiency, the questionnaire collects information about promotion, repetition, dropout and completion rates.

Once the questionnaires are filled in at the school level, their hard copies are sent to Woreda Education Offices (WEOs) to be verified. Then, WEOs send all the verified hard copies to zones, and zones send the same to regions. Finally, the data is combined and checked at REBs, and submitted to the MoE for integration at the national level.

EMIS data collection culminates in the production of the annual Education Statistics Annual Abstract (ESAA), the main document summarizing the status of the education sector in Ethiopia, which is widely used in the formulation of education policy. Not only is EMIS the oldest and most recurrent source of information that the sector has, it is also the structure which collects and administers the largest amount of education data in the country. For instance, the 2011 E.C. EMIS survey collected data on more than forty thousand primary and secondary schools.


Figure 4: Data collection and processing flow in the education sector of Ethiopia

2. General Education Inspection Directorate (GEID)

The second main source of data on Ethiopia's education sector is the MoE's General Education Inspection Directorate (GEID), which generates inspection and quality assurance data for all the schools. The Inspection Directorate works as an independent external evaluation body that assesses the quality and effectiveness of education in schools in terms of input, process and output standards. These domains are evaluated against 26 comprehensive standards during a school visit of two to three days. The standards have different focus areas, such as school infrastructure, human and financial resources, participatory school improvement planning, learning effectiveness, teaching effectiveness, parent and community engagement, and other aspects of the overall school development.

Once inspected, schools are classified into four levels based on the overall performance (input, process and output) score: Level 1, if a school scores below 50%; Level 2, if a school scores between 50% and 69.99%; Level 3, if a school scores between 70% and 89.99%; and Level 4, if a school scores between 90% and 100%. The MoE considers that schools meet and exceed the standards if they are at Level 3 and Level 4, respectively, while Level 1 and Level 2 schools are considered schools that need to be upgraded.
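The classification rule lends itself to a direct encoding. A minimal sketch (a hypothetical helper, not part of the MoE toolchain):

```python
def inspection_level(score: float) -> int:
    """Map an overall inspection score (0-100) to a level, following the
    thresholds above: <50 -> 1, 50-69.99 -> 2, 70-89.99 -> 3, 90-100 -> 4."""
    if score < 50:
        return 1
    if score < 70:
        return 2
    if score < 90:
        return 3
    return 4

def needs_upgrading(score: float) -> bool:
    """Level 1 and Level 2 schools are considered in need of upgrading."""
    return inspection_level(score) <= 2
```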

The inspection process is rigorous and requires a large investment of time: the MoE and REBs cover all the primary and secondary schools in the country over a span of three years. The GEID has conducted two rounds of inspection so far. The first round of school inspection was conducted between 2006-2008 E.C. (2013/14-2015/16), covering a total of 34,126 schools across nine regions and two City Administrations. According to the national school inspection guidelines, if schools score below the minimum standard (Level 1 and Level 2), a re-inspection is conducted after a year. Therefore, in 2009 E.C. (2016/17) and 2010 E.C. (2017/18), all schools categorized as Level 1 and Level 2 in the first round were re-inspected, leading to the re-inspection of 20,908 primary and 1,476 secondary schools. Finally, once the re-inspection was completed, the second round of national inspection started in the same year, 2010 E.C., and is still in process. To date, GEID has completed the inspection of 21,350 primary and 1,743 secondary schools (60% of the total), and it is going to cover the remaining 40% of the schools before the end of 2012 E.C. (2019/20).

Despite the short period of time since its establishment, GEID has turned out to be an essential part of the education sector. The inspection process has set the minimum standards that each school needs to achieve in order to be considered a good quality, good performance school. And, thanks to the wide set of standards that the directorate evaluates, the school inspection data is now a powerful tool to identify the aspects on which each school is lagging and where most action is needed.

3. National Educational Assessment and Examination Agency (NEAEA)

The third source of information that is critical for monitoring and planning in the education sector is the National Educational Assessment and Examination Agency (NEAEA). It is the government institution that assesses and monitors, arguably, the most important objective of the education system, i.e. student learning.

The agency has two core directorates, the National Learning Assessment Directorate and the National Examinations Directorate. The former focuses on the development and administration of sample-based learning assessments in general education. The main studies that this directorate produces are the National Learning Assessment (NLA), the Early Grade Reading Assessment (EGRA), and the Early Grade Mathematics Assessment (EGMA), each expected to be conducted every two years. The NLA, a regionally representative assessment, evaluates the level of learning in Mathematics and English. The EGRA, a language-representative study, assesses the level of reading comprehension of students in grades 2 and 3.

The second directorate of the NEAEA, the National Examinations Directorate, administers the final national examinations for all students in grades 10 and 12¹. Apart from evaluating the student scores on different subjects, and thus determining their level of learning, these examinations also determine who can progress to the next level of education, i.e. higher education.

These three bodies of the MoE stand out for the importance of the data they collect and its historical influence on educational monitoring and planning. However, the sector produces other data sets that are not recurrently collected or have not been largely used by the MoE, but that have added or will add very important value to the sector. Some examples are the deliverology assessment, the Young Lives study, education in emergencies data, school grants, teachers' qualifications and salaries, the education sector budget, school mapping, education sector development plan models, household surveys, and more.

¹ Grade 8 has regional examinations that are managed by regions. The NEAEA provides technical assistance to some emerging regions. However, data on student scores on grade 8 examinations do not reach the federal government.


In conclusion, a vast amount of data is produced by the education sector, coming especially from three main directorates (although not exclusively from them). The key sources of information critical for the design of interventions, policy dialogue, educational planning and decision-making include the Education Statistics Annual Abstract (ESAA) by EMIS, the school inspection report by GEID, and the Early Grade Reading Assessment (EGRA), National Learning Assessment (NLA), and national examination results by NEAEA. However, none of these three sources of information alone is enough to give a complete understanding of the education sector.

In the next section, we identify some of the main challenges faced by the education sector in Ethiopia even though the data exists. Further, we propose a solution to these issues, the data platform, which supports data integration across multiple data sets.

III. Main problem: data sub-utilization

Despite the large amount of data that the sector produces, the analysis and use of it for policy decision making in the education sector of Ethiopia could be enhanced. We identify four main interrelated reasons that can explain why there is sub-utilization of the data at hand. The objective of this section is to provide a wide understanding of the main challenges that need to be overcome in order to further promote the use of education data for evidence-based policy and decision making in Ethiopia.

Reason 1: Limited focus on improving analysis, with most attention on data collection

As explained in the introduction, the path that data follows from collection to impact is very complex: it requires a lot of capacity and coordination, and it is highly susceptible to data errors. The number of schools, the length of the questionnaires, the number of people involved in the collection, the lack of infrastructure and technology, the capacity to code, and the smoothness of the communication between offices are a few examples of the numerous challenges that call into question the reliability of the data collected. Because of this, it is common for higher officials to doubt the conclusions drawn from the data collected and, therefore, to underestimate the usefulness of doing more analysis for decision making. Wrongly, there seems to be no point in investing in extra analysis as long as the data at hand is not of good quality, and, therefore, most of the attention focuses on improving the data collection process.

This order of ideas suggests that before using data for policy decision making, the sector needs to improve the quality of the data, and that this is only possible if the data collection process is enhanced. Figure 5 illustrates this intuition by showing the information-impact cycle, from collection to impact, as a series of linear steps: data collection, analysis, informed decisions and impact. The illustration suggests that better data collection would lead to better analysis, which in turn would lead to better decision making, and that each step can only be improved if the step before it is improved. This rationale explains why most of the investments to improve the data system are frequently focused on improving the first step of this linear cycle: data collection.

Figure 5: Linear information-impact cycle (data collection → analysis → informed decision making → impact)

The problem is that the understanding of quality of data is usually too narrow. A conventional understanding of quality of data is accuracy, which translates into how close a measured value is to its real value. Then, data is considered of good quality if it provides an accurate measure of whatever it is measuring. For example, under this conception, enrolment data would be of good quality if it tells us the true number of students enrolled per year in each school. Although this is an intuitive and valid interpretation of what quality of data is, the definition of good quality goes beyond the boundaries of the accuracy definition. Instead, quality refers to the level of usefulness of the data collected: the measurement of a certain variable is considered of good quality only if it is useful. In this scenario, enrollment data would be of good quality if it is useful for action, for example, for the right allocation of school grants.

First, notice that the accuracy definition is contained in the usefulness definition. Certainly, if the data collected is not accurate, the data will not be useful at all (just the opposite), meaning that inaccurate measurements are also of bad quality. However, the expansion of the definition suggests that even if the data collected is accurate, it needs to be useful for policy decision making in order to be considered of good quality.

Indeed, improving the data collection process will enhance the quality of the data collected. However, more analysis also has the potential to improve the quality of data collection by assessing the level of usefulness of the data collected. Instead of a linear process, Figure 6 presents the information-impact cycle as a circular process that reinforces itself. Better data collection continues to lead to better analysis, which will lead to better decision making. However, and more importantly, the figure suggests that both analysis and informed decisions have the potential to improve data collection.


Contrary to the linear cycle, the circular cycle suggests that the quality of data will improve with more analysis. This will happen in three ways: (i) identifying where accurate data is harder to collect, (ii) testing the level of usefulness of each of the collected variables, and (iii) increasing the demand for good data from decision makers.

First, more analysis will help us understand where the accuracy of the data has most of its issues. For instance, which specific regions, woredas or schools are the most likely to misreport or not report information? Or which specific variables are harder to measure? Secondly, more analysis will be vital to test the usefulness of the data collected. It may be the case that the data collected is extremely accurate; suppose that the collection team successfully records the exact number and the type of walls in each school. However, after analyzing it, they realize that, despite the efforts undertaken to count and characterize each wall, the variable is of little use for policy decision making. More analysis, and more attempts to make decisions based on that analysis, can help us diagnose the places where the collection of data is more complicated to achieve, and it can challenge the level of usefulness of the variables we collect.

The final reason is that, even if sometimes data is not perfectly accurate, it can help implementers to be more effective in their interventions, and therefore it increases the value of having good data. A woreda officer who realizes that data can help her identify which schools are most in need in terms of dropouts (even if it cannot tell her the exact number) is likely to increase her interest in improving the data collection process.

These three main reasons illustrate how the data collection system will improve as a consequence of more analysis. Because the information-impact cycle is circular, investing in more analysis will improve not only the effectiveness of interventions but also the quality of data, both in terms of accuracy and usefulness. Analysis will help us understand the collected data more easily and more quickly; with a better understanding of the data collected, our capacity to make informed decisions will improve, and these informed decisions will increase the value of having data.

Figure 6: Circular information-impact cycle

Although the current efforts to improve the data collection process are widely acknowledged and encouraged, this series of arguments advocates for more emphasis on increasing data analysis. The available data has the capacity to enhance the sector, and this is demonstrated with concrete examples in the second part of this paper.

Reason 2: The data is not ready for analysis

Making more use of the available data sounds like an easy recommendation to follow, and it may be, as long as the appropriate system for analysis is in place. The problem is that even when big efforts are made to generate more use of the data at hand, joint analysis is hard to do: first, because the data rests in separate places, and second, because when it is brought together it is hard to integrate. Up to now, the ministry does not have any system that collects, integrates and prepares all the available datasets produced by different Offices or Directorates: each institution has its own mechanisms to collect data, classify observations and process the information. Although this level of independence is valuable, it makes any attempt to answer key policy questions that require joint investigations very problematic. In the absence of such a system, there will always be little coordination between the relevant institutions that add data to the sector.

Certainly, bringing together the data produced by different entities is a difficult exercise for everyone, including the MoE. The communication between directorates is not sufficiently smooth and, especially, the exchange of information is largely constrained by tedious bureaucracy. Secondly, even when the bureaucracy is overcome and the data is brought to the same place, due to the large diversity of the country and the specific way each organization processes its own data, it is extremely hard to integrate the information collected by different institutions. Without any coordination between data collectors, the only possible way to relate the information is by using the names of the schools or the names of locations. However, this mechanism is very hard to implement: Ethiopia has more than forty thousand schools, more than one thousand woredas, more than two hundred zones, and a total of eleven regions. Any attempt to match name variables always brings difficulties: the name records may differ in accents, pronunciation, blank spaces, etc. But in the case of Ethiopia, apart from the size of the education sector, an additional level of difficulty is added: the numerous languages. Different languages differ in accents, pronunciation and even alphabets. Names are transliterated into one alphabet by each entity in its own way, relying, probably, on the pronunciation customs of the data collector. Without a unique identifier for each school that is present in all the data sets produced in the sector, merging data has to rely on manual matching. Such a process relies on a significant amount of intuition, sometimes guessing, and it demands a large investment of time, all of which creates data entry errors and sometimes matching failures.

Fortunately, in order to solve the integration issues, unique school codes have been adopted across the whole nation over the past years. The aim of the unique school codes is to allow an easier integration and analysis of the data generated by different directorates and agencies. The school codes were generated by the EMIS directorate in 2008 E.C. (2015/16), and gradually they have been adopted by additional data contributors. To date, EMIS, GEID, and NEAEA are using common school codes for data collection. The GEID has already incorporated the EMIS codes in its inspection process, providing school codes to all the schools inspected. The NEAEA has adopted the school codes for the NLA and EGRA studies; however, the grade 8, 10 and 12 examinations still do not use EMIS codes. Apart from them, institutions like the British Council in the Deliverology study, the World Bank School Mapping and the USAID Read projects use these codes for their studies.
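To make the benefit of common codes concrete, here is a hedged sketch of the integration step (file and column names are hypothetical; the report does not specify the actual formats):

```python
import pandas as pd

# Each directorate ships its own file; the shared EMIS school code is the key.
emis = pd.read_csv("emis_2011ec.csv")              # one row per school
inspection = pd.read_csv("inspection_rounds.csv")  # GEID inspection results
learning = pd.read_csv("learning_outcomes.csv")    # NLA / examination results

merged = (
    emis.merge(inspection, on="school_code", how="left")
        .merge(learning, on="school_code", how="left")
)
# Left joins keep every EMIS school; schools without inspection or learning
# records simply carry missing values instead of being dropped or mismatched
# by fragile name-based matching.
```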

Key policy questions require linking data sets. Now that the Ministry is working hard to implement the use of school codes nationwide, the next step is to create a system that reunites all the data sets that the sector produces. This system should set the guidelines for any actor that wishes to collect and analyze data in the sector, making any attempt at integrated analysis easier. Such a system would bring the capacity for analysis to the next level.

Reason 3: Little feedback to lower levels of administration

Despite the laborious process that the data has to follow to reach the federal government, once it reaches the Ministry's offices, analysis almost never returns. The lack of feedback to lower levels of administration is one caveat to overcome that can largely help to improve the data system as well as the education sector as a whole.

As explained before, the arduous process for data to reach the federal level is full of challenges that not only require large investments in time and resources but also undermine the quality of the data. Yet, despite the substantial effort from people in lower levels of administration to submit it, useful information stops at the federal government, or is only returned indirectly through national documents that are not necessarily useful for action. When the data reaches the MoE, useful analysis for the regions, zones, woredas, and schools is almost never sent back. Paradoxically, thanks to the vast and wide amount of data that the sector raises, useful feedback for lower levels of administration could be one of the best ways in which the MoE can help leaders to improve the sector.

Figure 7 shows a hypothetical scenario for which we advocate in this study: once the data collection cycle is completed, the information flow should begin. The MoE should have the capacity and the willingness to help all levels of administration to convert the data into impact.


Figure 7: Ideal return of information flow from the federal level

One objection that may arise is: why should the MoE provide analysis back to the regions and the woredas? Or, if local offices or schools possess the data they collect, why do they not analyze and use the data themselves? Should the central Government in a decentralized nation interfere in other levels of administration? In part, these questions have a political context that should be addressed from the political side (and will not be discussed in this document). On the technical side, the main reason is that data has the potential to help all actors trying to enhance students' learning.

Technically, the first reason why analysis for better decision making needs the leadership of the MoE is that the central Government has the capacity to analyze the country as a whole, while local offices or schools can only analyze what is happening inside their respective administrations. An example is a woreda office that has the data to understand the dropout situation inside its administration, but that cannot analyze how its performance relates to other locations. Is it the best or the worst performing woreda in the zone? Is it an average performing woreda in the region? Is it receiving more or fewer resources than other woredas? Similar to regions, which have to rely on the Federal Government to learn their relative performance with respect to other regions in the country, lower levels of education cannot compare themselves with the rest of the education sector unless the analysis is done from above.

The second reason, and probably the most critical one, is that higher levels of administration, like the MoE and the regions, concentrate most of the human capital and financial resources required to produce useful analysis. Ultimately, analysis should flow back to lower levels of administration from all offices (Figure 8); this should be the end goal: woredas and even schools should have the capacity to analyze their own data and use it for school planning. But achieving this will take time, because it requires willingness, capacity, and infrastructure development that is not yet present in all places across the country. Currently, data has little impact at lower levels of administration; unless the MoE and the regions take the leadership and send useful feedback, the data will take too long to have impact where it is most needed.


Figure 8: Data flow and information flow from all levels of administration

This section provided a series of arguments explaining why providing useful feedback to lower levels of administration, after all the effort they undertake to collect the data, is not only fair but perhaps the best approach to improve the system. Bringing data into action at all levels can be one of the most effective ways to help parents, teachers, principals, education experts, and all those involved in the education sector to advance learning.

Reason 4: Analysis is usually aggregated at the federal and regional levels

Finally, another reason why the analysis done at the federal level is not sufficiently useful for leaders at lower levels of administration is that it is usually aggregated at the regional or national level. For instance, it is very common to see reports that compare the performance of the regions on a given indicator by averaging the results of all the schools in each region. Although this information can be useful to identify the regions most in need, it has two main caveats. On the one hand, it compares very different contexts, ignoring the significant diversity found inside each location. On the other, it hides actionable information or, worse, provokes the wrong conclusions.

A useful example that illustrates this problem is shown in the figures below, which emulate one of the graphs in the most recent national educational abstract. Figure 9 and Figure 10 summarize the gender parity index (GPI) in grades 1-4 in 2011 E.C. (2018/19) by region (due to the absence of school-age population data, we defined GPI as the ratio of girl students to boy students; a score of 1 signifies that there is one girl enrolled for each boy enrolled). Both figures present the same information, except that Figure 10 is more disaggregated.
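To make the two views concrete, the following sketch (in Python with pandas; column names such as girls and boys are illustrative, not the actual EMIS field names) computes the same ratio first aggregated by region, as in Figure 9, and then disaggregated by woreda, as in Figure 10:

    import pandas as pd

    # Hypothetical school-level enrolment records.
    schools = pd.DataFrame({
        "region": ["Oromia", "Oromia", "Oromia", "Harari"],
        "woreda": ["W1", "W1", "W2", "W3"],
        "girls":  [180, 90, 40, 110],
        "boys":   [200, 100, 80, 120],
    })

    # Regional GPI: one number per region (the gray line in the figures).
    by_region = schools.groupby("region")[["girls", "boys"]].sum()
    by_region["gpi"] = by_region["girls"] / by_region["boys"]

    # Woreda GPI: the within-region spread (the circles in Figure 10).
    by_woreda = schools.groupby(["region", "woreda"])[["girls", "boys"]].sum()
    by_woreda["gpi"] = by_woreda["girls"] / by_woreda["boys"]

    print(by_region["gpi"])
    print(by_woreda["gpi"])  # reveals low-GPI woredas that a regional mean hides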


Figure 9: Gender Parity Index in Grades 1-4 by Region, 2011 E.C.

Figure 10: Gender Parity Index in Grades 1-4 by Region and by Woreda, 2011 E.C.

One conclusion that could be drawn from the first graph, and that might guide policy to improve gender balance, is that Harari (ranked 10/11) is one of the regions where urgent action is needed, more urgent than, for instance, Oromia (ranked 7/11). The first issue with this deduction is that the graph compares places that are very different in context, specifically in size. These statistics summarize the situation of around 49 thousand students in Harari versus 8 million in Oromia, or of fewer than one hundred schools versus more than 15 thousand schools, respectively. Since Harari's score groups less information into one indicator, its value is more likely to truly represent reality than Oromia's statistic.

When more information is combined, it is more likely that the presented statistic hides important aspects of reality. In this case, because the indicator is aggregated, it is hard to know whether Oromia's score is hiding important information; it is therefore important to disaggregate the illustration in order to see the within-region variation before jumping to conclusions.

Apart from showing the aggregated GPI of each region (the gray line), just as the previous graph does, Figure 10 shows the GPI score of each woreda inside its region (the circles). Certainly, the graph still groups data at the woreda level, but this level of granularity allows us to draw more realistic conclusions. Notice that the gray line shows exactly the same values as the previous graph. Thanks to the level of detail in the illustration, it can be seen that the number of woredas in Oromia with low scores (in red) far exceeds that of Harari. Oromia is much bigger than Harari, and one woreda of Oromia can be as big as all of Harari. This time the conclusion would lead to policy campaigns that focus not only on Oromia as a whole but on specific woredas within Oromia.

More surprising is that, in the previous figure, SNNP stood as one of the top three regions in terms of gender balance, with a score of 0.91. But when the data is disaggregated to the woreda level, it shows that a large number of woredas in SNNP fall far below the region's combined score. Given the size of the region, this situation is perhaps worse, in terms of the number of students affected, than in Beneshengul-Gumuz, Dire Dawa, or Harari, which appeared to be worse off in the first figure.

Another advantage of a more disaggregated analysis is that it can provide useful information that drives action. In the second illustration, officials at the federal level or experts at the regional offices could easily spot the woredas that are lagging behind. For instance, the graph shows in red all the woredas scoring below 0.8, and in SNNP one example is highlighted: Goba woreda, with a score of 0.71, is one of the woredas where action is needed.

These examples show how a small change in the level of aggregation can lead to different conclusions. Analysis aggregated at the national and regional levels can provide an initial understanding of the situation. However, it is important that the aggregations presented provide an accurate representation of reality. This is hardly the case, especially in the very large regions of Ethiopia, where a lot of data is grouped into one single indicator. Moreover, when comparing different aggregations, it is necessary to first ask whether they are comparable with each other, which is probably not the case in such a diverse country.

But even if these issues are solved, aggregations excel at hiding islands of opportunity. The example above shows how a slight change in the level of analysis has the potential to reveal more precise and actionable conclusions. Especially if the aim is to make analysis more useful for lower levels of administration, it is important to present disaggregated analysis that does not ignore variation in contexts and that can be easily translated into action. The joint analysis in section V will provide specific examples of how this can be done.


IV. Solutions to boost data analysis

During the past years, the MoE began working on a series of solutions aimed at tackling the problems described above. The following section presents the set of actions undertaken that will help overcome these challenges and set the foundations for more useful analysis in the sector.

In this section, the first and pivotal step is to set the foundations that will make analysis possible. The creation of a data platform is the solution. Here, we will explore what the data platform is, why it is the right solution, and what steps we have undertaken to start its creation. Once the foundations for analysis are in place, the joint analysis in the next section will show how integrated investigations, which take into account the importance of providing useful feedback to lower levels of administration and do not ignore variation in the data, can positively impact decision making and improve education.

What is the data platform?

Gradually, the education sector of Ethiopia has been adopting the unique EMIS school identifiers in order to smooth the integration of data sets. With this in place, the sector now needs a comprehensive system that brings together and coordinates all efforts to transform data into impact.

For that purpose, during the past months, we started the creation of the data platform. The data platform is

a system that:

(i) establishes the guidelines for collection of data in the sector,

(ii) brings and integrates all the data collected into one place, and

(iii) provides useful tools for analysis and action.

Key policy questions require connected data sets, and linking data will only be possible if there is coordination among the institutions involved in the collection process. That is why setting guidelines for collection is important. For instance, all data collectors should include the EMIS school codes in their work and, when available, unique national teacher and student codes should be adopted too. Moreover, a unique system of woreda, zone, and region codes should be in place in order to allow the integration of data sets produced by other government institutions, such as the Central Statistical Agency (CSA) or the Ministry of Finance (MoF).

Once the appropriate framework for integration is in place, the data platform should bring together all the data produced in the sector and provide access to easy-to-integrate data sets for those involved in analysis (education experts, policy makers, directorates, researchers, etc.). But the data platform should not only be a place to store data: it should also be a platform where useful tools for analysis and action are shared.


Having the data platform will enhance our capacity to transform data into impact. We identified five main areas in which such a system will help: (i) realizing system diagnosis, (ii) monitoring service delivery, (iii) performing national policy evaluation, (iv) coordinating directorates' actions, and (v) driving research. Examples showing how this can be possible are developed in Table 1.

Table 1: Five key functions of the data platform along with examples

1. System Diagnosis:
• Starts with reliable measurement of national trends
• Identify teachers, schools, woredas that produce high (value-added) learning
• Particularly relevant in a decentralized system
• Integrates admin data to examine equity in resource allocation, including key dimensions such as teacher quality
• Provides huge insights for reform (ESDP VI, GEQIP-E, Roadmap implementation)

2. Monitor service delivery:
• Use of real-time data to monitor service delivery
• Strengthen accountability and motivation for learning outcomes
• Track performance effectively by prioritising results
• Strengthen evaluations by using data
• Improve assessment processes and capabilities across the system

3. National policy evaluation:
• Large-scale policy evaluation requires national data: national representativeness matters
• Evaluate large changes to curriculum, standards, licensing, exams, staffing, etc.
• Review national or sub-national reforms by integrating data sources
• Provide evidence for huge investments at the policy/system level

4. Directorate reforms:
• Platform accessed by directorates for basic analysis relevant to mandate
• Provide, for the first time, national, regional, woreda, and school-level data on resource allocation (e.g. where teachers are and how this is changing)
• Provide targeted analysis on major initiatives such as inspection and SIP, to improve allocation of resources
• Platform improves prioritisation, focuses attention on targets and drives results

5. Driving research:
• Shifts the balance so that the MoE becomes the lead for education data queries
• Crowds in investment in the national platform, rather than private surveys
• Drives the agenda through commissioning analysis with government data
• Promote collaboration and better analysis through access to public universities and research centres in Ethiopia and elsewhere


V. Joint Analysis

This joint analysis was prepared collaboratively by EMIS, GEID, and NEAEA by analyzing the integrated EMIS, inspection, and learning outcomes data. The analysis and the tools presented here focus on trends and regional variations in student outcomes (including learning outcomes and internal efficiency outcomes), determinants of learning outcomes and gaps, lessons learned and good practices, and a set of recommendations.

The objective of the integrated analysis is twofold: first, to demonstrate the potential of combining the different sources of information and, second, to address key policy questions by testing the accuracy of the data, identifying indicator gaps, finding specific places for action, and creating tools for analysis. To the best of our knowledge, this is the first study to analyze the EMIS, Inspection, and Learning data sets together.

To investigate the relationships and complementarity among the key indicators, we selected key indicators in each data set and studied their relation to the inspection level. Our hypothesis is that schools with a higher inspection level should have better results on EMIS and learning indicators. We therefore investigated the relationship between inspection results and EMIS indicators, and between inspection results and learning indicators.

The integrated data set also allows a more comprehensive understanding of the sector because a single location can be analyzed from different points of view. For that, we created tools that give the user a comprehensive overview of the key indicators in his or her administration and make it easy to identify where, and on what, to place more attention. Namely, we created one report card for each school, woreda, zone, and region in the country, and three interactive dashboards.

Part 1 of this section describes the final data set we use. Part 2 presents descriptive analysis of the key indicators. Part 3 discusses the main findings. Part 4 shows how the data can help us identify positive outliers in low performing locations. And Part 5 explains the tools we have created for analysis.

1. Data Description and Integration

To prepare the joint report, a final merged data set at the school level was created. This data set integrates

the EMIS, three rounds of Inspection, Deliverology study, EGRA, NLA, Grade 10 and Grade 12

examinations, school GPS coordinates, and Woreda, Zone, and Region shape files. The objective of this

section is to provide a comprehensive overview of the final data set that is used for the joint report and

analysis.


Figure 11: Network of integrated education sector datasets

The final data set has more than 37,000 primary and secondary schools, of which more than 32,000 have at

least one round of inspection data and more than 7,000 have at least one learning outcome data set (Table

2).

Table 2: Number of schools that have each type of data in the final data set

Since EMIS provides administrative data on all the schools in Ethiopia (including the assignment of unique school codes), we used the EMIS data as the base for preparing the joint data set. This means that when matching on either school codes or school names, we excluded observations not found in the EMIS data. In other words, we kept schools from the Inspection and learning data sets only if they had a corresponding record in the EMIS.
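A minimal sketch of this integration step, assuming hypothetical column names (the real EMIS and Inspection field names may differ):

    import pandas as pd

    # Hypothetical extracts from EMIS and the inspection data set.
    emis = pd.DataFrame({
        "school_code": ["ET001", "ET002", "ET003"],
        "region": ["Tigray", "Oromia", "SNNP"],
        "enrolment": [420, 815, 260],
    })
    inspection = pd.DataFrame({
        "school_code": ["ET001", "ET003", "ET999"],  # ET999 has no EMIS record
        "performance": [0.62, 0.48, 0.55],
    })

    # EMIS is the base: a left merge keeps every EMIS school and drops
    # any inspected school (here ET999) without a corresponding record.
    joint = emis.merge(inspection, on="school_code", how="left")
    print(joint)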



Consequently, all schools that have Inspection and/or NEAEA data also have EMIS data. However, not all schools that have Inspection data have learning data, and vice versa. Table 3 shows this relationship: 6,062 of the 7,011 schools (85%) that have learning data also have Inspection data.

Table 3: Number of schools with Inspection and Learning data in the final data set

A. EMIS Data

The final data set has 37,777 schools, and all of them have EMIS information for the year 2011 E.C. (2018/19). As shown in Figure 12, these schools enroll almost 21.5 million students, of which 9.9 million are girls and 11.4 million are boys.

Figure 12: Number of students in the final data set

The following map (Figure 13) shows the distribution of schools in the final data set across regions.


Figure 13: Distribution of schools by Region

B. Inspection Data

The final data set has information from three different Inspection surveys. Table 4 shows the number of

schools that were inspected in each round. After merging each of these three data sets with EMIS data

separately, we find that 32,259 schools had at least one type of inspection. Of these 32,259 schools, 84%

have results from the first round of inspection, 61% from the second round and 54% from Re-inspection.

Table 4: Number of schools with inspection data in the final data set by round of inspection

Table 5: Schools’ most recent inspection result

Additionally, if we take the most recent inspection result available for any school, 61% of the schools would use results from the most recent round of inspection (R2), 21% from the re-inspection, and 17.5% from the first round of inspection (Table 5).
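The "most recent result" rule can be expressed as a simple coalesce across rounds, preferring Round 2, then the re-inspection, then the first round. A sketch with hypothetical per-round columns:

    import pandas as pd

    # Hypothetical per-round performance scores; NaN means the school
    # was not covered in that round.
    df = pd.DataFrame({
        "school_code": ["ET001", "ET002", "ET003", "ET004"],
        "perf_round2":       [0.66, None, None, 0.71],
        "perf_reinspection": [0.60, 0.52, None, None],
        "perf_round1":       [0.55, 0.49, 0.44, 0.58],
    })

    # Take Round 2 where available, then fill gaps from the re-inspection,
    # then from the first round.
    df["perf_latest"] = (df["perf_round2"]
                         .combine_first(df["perf_reinspection"])
                         .combine_first(df["perf_round1"]))
    print(df[["school_code", "perf_latest"]])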


Figure 14 shows the spatial distribution of schools by their level of inspection. It can be observed that Level 2 schools are the most common in the data set, followed by Level 1 schools. A non-negligible proportion of schools are Level 3, and almost none of the schools are Level 4.²

Figure 14: Spatial distribution of schools colored by inspection level

This deduction is confirmed by Figure 15. The graph shows the distribution of levels across all schools with inspection data in the final data set. The performance score sets the cut-off points for the inspection levels at 50%, 70%, and 90%, and it can be observed that most schools are either Level 1 or Level 2.

Figure 15: Distribution of inspection level in the final data set

C. Learning Outcomes Data

The learning outcomes data is the least available type of information in the final data set. In general, we were able to include at least one record of learning information for 7,011 schools (only 18.5% of the total number of schools in the data). Moreover, this information comes from different data sets that are not necessarily comparable. It includes EGRA, Deliverology, NLA, and the Grade 10 and Grade 12 examinations. Table 6 lists the number of schools corresponding to each of these data sets.³

² However, it is important to keep in mind that the GPS coordinates are not available for four regions.

³ The total number of schools with at least one type of learning outcomes data is included in the last row of the table for reference.


Table 6: Number of schools with learning data in the final data set

It can be observed that the number of schools in each of these data sets varies a lot. EGRA, Deliverology, and NLA are expected to cover only a small number of schools, since they are sample-based studies; the number of schools with this information depends on the size of the sample and on our capacity to match each school with the EMIS data. In the case of the Grade 10 and Grade 12 examinations, their presence in the final data set depends on the number of schools offering these grades and on our capacity to manually match school names.

Figure 16: Woredas with sample-based learning results in the final data set


Figure 17: Woredas with national exams learning results in the final data set

The analysis of learning outcomes data should be performed with care because the studies differ in type (census vs. sample) and in the grade examined (EGRA evaluates students in grades 2-3, Deliverology in grade 4, and NLA in grade 8). Sample-based studies are representative at the regional level, whereas national examinations should be representative at all levels. The graphs above (Figure 16 and Figure 17) provide insights into the difference in coverage of each data set across the country in the final data set.

D. Spatial Data

Apart from EMIS, Inspection, and NEAEA, other data sets were included in the final data set to expand the scope of the analysis. In particular, geographical data was added, which allows us to understand the spatial distribution of schools and their relationship with the main indicators.

Table 7: Number of schools with spatial information in the final data set

The spatial or geographical data comes in two types of files: the shape files of the regions (Regshape), zones (Zonshape), and woredas (Worshape), and the exact GPS coordinates of each school's location (Mapwb). Table 7 shows the number of schools that have information for each of these variables. As a reference, the table includes the total number of schools in the first row.

First, the slight decrease in the number of schools with spatial shape information is due to challenges in matching the data sets; the names of locations vary largely between data sets, making the integration of information harder. Second, there are only 30,834 schools with exact geocoordinates, as the mapping exercise is still ongoing. So far, around 80% of the schools in the country have been mapped.

Figure 18: Availability of school GPS coordinates in the final data set

Figure 18 shows the spatial distribution of schools that have GPS coordinates by region. One can see that the remaining 20% of non-mapped schools are concentrated in specific regions: Afar, Gambella, Somali, and SNNP.

In fact, Table 8 confirms this point by comparing the total number of schools and the number of schools with GPS coordinates in each region. It implies that any spatial analysis at the national level should be interpreted with care, as it would have regional biases. Nevertheless, the GPS points provide numerous opportunities to analyze how the EMIS, Inspection, or learning results relate to the geography of the country inside well-documented regions.

Table 8: Availability of school GPS coordinates in the final data set

This section provided a detailed account of the final data set used for the joint analysis, as well as a few examples of how the data can be used. The integration of several data sets allows an investigation of their relationships with each other. Thanks to the inclusion of not only the main data sets (EMIS, Inspection, and Learning) but also additional data sets, the possibilities of the joint analysis have been enlarged. The following section examines the main indicators selected for the joint analysis.


2. Key Indicators

We argued before that none of the three main sources of data alone is comprehensive enough to give a complete understanding of the education sector. But, when analyzed together, they have the potential to provide a more complete picture. To demonstrate this, we identified key indicators from all the sources of information: EMIS, Inspection, and Learning.

This part of the report presents descriptive analysis of the main indicators. Apart from showing aggregated results at the regional level, the analysis offers the opportunity to investigate the variation within each region. The objective is to set the foundation for the joint analysis and to provide the rationale for the analytical tools that we created to help leaders convert data into impact.

We have selected a combination of indicators that evaluate the sector in terms of access, equity, efficiency,

quality, and learning outcomes.

A. EMIS Indicators

i. Girls to Boys’ Ratio

Due to the absence of reliable population data, we use the girls-to-boys ratio (instead of the traditional Gender Parity Index) to measure the relative access of females and males to education. This indicator has been used in projects like GEQIP-E to monitor progress towards greater inclusion of girls in education.

In the final data set, the national girls-to-boys ratio is 0.88, meaning that overall more boys than girls are enrolled in the country. However, the measure varies largely among locations. While there are regional differences, the indicator varies most among woredas within regions: the data show that the girls-to-boys ratio among woredas ranges from a minimum of 0.35 to a maximum of 1.35.

The gray line and label in Figure 19 indicate the girls-to-boys ratio for each region, and the blue dots represent the ratio for each woreda inside the region. Therefore, along with the aggregated indicator result for every region, the chart shows the extent of woreda-level variation.


Figure 19: Girls to Boys Ratio by Region and Woreda

On average, four regions are above the national result: Addis Ababa, Tigray, Amhara, and SNNP. Addis Ababa's result, being larger than 1, shows a large gender imbalance in favor of girls, while Tigray and Amhara are the closest to achieving gender balance. The graph also highlights that some regions have a lot of variation among woredas. For example, many woredas in SNNP and Oromia score below 0.8, and in terms of population size and the number of students enrolled, these low-scoring woredas can be significantly large.

ii. Grade 2 to Grade 1 Ratio

The Grade 2 to Grade 1 ratio measures the proportion of students who were enrolled in grade 1 and continue studying in grade 2 the following year. In our final data set, the average Grade 2 to Grade 1 ratio in 2011 E.C. (2018/19) is 0.82. This means that, on average, 82% of the students who started grade 1 in 2010 E.C. (2017/18) were enrolled and studying in grade 2 in 2011 E.C. (2018/19).
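As an illustration, the ratio can be computed from enrolment records by comparing each school's grade 2 enrolment with its grade 1 enrolment one year earlier. A sketch with hypothetical columns (the real EMIS tables may be organized differently):

    import pandas as pd

    # Hypothetical enrolment by school, grade, and year (E.C.).
    enrol = pd.DataFrame({
        "school_code": ["ET001", "ET001", "ET002", "ET002"],
        "year":   [2010, 2011, 2010, 2011],
        "grade1": [100, 95, 60, 55],
        "grade2": [80, 85, 50, 48],
    })

    # G2/G1 ratio: grade 2 enrolment this year over grade 1 enrolment
    # in the previous year, per school.
    enrol = enrol.sort_values(["school_code", "year"])
    enrol["grade1_prev"] = enrol.groupby("school_code")["grade1"].shift(1)
    enrol["g2_g1_ratio"] = enrol["grade2"] / enrol["grade1_prev"]
    print(enrol.loc[enrol["year"] == 2011, ["school_code", "g2_g1_ratio"]])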

The data show that the average indicator result varies among regions, ranging from a minimum of 0.68 to a maximum of 0.97, as illustrated by the gray lines and labels in Figure 20. While regional averages are useful for getting the big picture, the graph also provides an opportunity to analyze the details. The exaggerated variation among woredas in Somali region undermines the reliability of the data there. Likewise, Oromia has many woredas with very low results. Nevertheless, showing the woreda-level variation is also useful for identifying positive outliers: one can notice woredas where the G2/G1 ratio is close to 1 or above, even in regions with relatively low averages. This suggests that even in the worst performing regions there are places showing good performance that might have the potential to become role models for others.

Figure 20: Grade 2 to Grade 1 Ratio by Region and Woreda

iii. Survival Rate to Grade 4

In our data set, the national average survival rate to grade 4 is 0.60, meaning that, on average, only 60% of the students who start school complete the first cycle of primary. Figure 21 shows the indicator's average across all schools inside each region, and the blue dots represent the indicator's average across all schools inside each woreda. In contrast with the Grade 2 to Grade 1 ratio, apart from the lower achievement, it can be seen that the variation inside regions is wider. Three regions score above 0.70: Addis Ababa, Tigray, and Amhara. A middle group composed of Harari, Beneshengul-Gumuz, Gambella, Dire Dawa, and Oromia scores between 0.50 and 0.60. Finally, the regions at the bottom, scoring below 0.50, are Afar and Somali. Again, the chart offers the opportunity to analyze woreda variation within each region. It can be seen that there are woredas inside almost all regions with very low scores. For instance, in the best performing regions, Tigray and Amhara, a non-negligible number of woredas have scores below 0.60.


Figure 21: Average Grade 4 Survival Rate by Region and Woreda

iv. Students per Teacher

The number of students per teacher, or pupil-teacher ratio (PTR), measures the number of students who attend a school divided by the number of teachers in the institution. Usually, this indicator is calculated at the school level as an approximation of class size.

Nationally, according to our data set, there are 36.86 students per teacher; we calculate this statistic by dividing the total number of students by the total number of teachers. Since teachers are not evenly assigned across the country, this measure varies a lot depending on the location of the schools. To get a better sense of what class sizes look like on average, we also calculated the average PTR of all schools in the country. According to our results, the schools' national average pupil-teacher ratio is 42.49. The difference between this calculation and the previous one can be interpreted as an indicator of the discrepancy between the availability of teachers and their efficient allocation across schools.
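The gap between the two figures is the classic difference between a ratio of totals and an average of school-level ratios. A short sketch with made-up numbers:

    import pandas as pd

    # Hypothetical school-level student and teacher counts.
    df = pd.DataFrame({
        "students": [1200, 300, 450, 900],
        "teachers": [20, 12, 10, 30],
    })

    # National PTR: total students over total teachers (36.86 in the report).
    national_ptr = df["students"].sum() / df["teachers"].sum()

    # Average of school-level PTRs (42.49 in the report): closer to what a
    # typical class looks like. The gap between the two measures signals
    # uneven allocation of teachers across schools.
    school_avg_ptr = (df["students"] / df["teachers"]).mean()

    print(round(national_ptr, 2), round(school_avg_ptr, 2))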

To illustrate these two indicators and their variation among locations, Figure 22 shows the number of students per teacher by region and by woreda, and Figure 23 shows the schools' average number of students per teacher (PTR) by region and by woreda. The gray line and label show the indicator result of each region, and the blue dots represent the indicator results inside each woreda.


Figure 22: Number of students per teacher by region and by woreda

Figure 23: Average of schools’ PTR (students per teacher) by region and by woreda

In both calculations, the aggregated pupil-teacher ratio is arguably low in most of the regions; only Somali and Oromia show values larger than 40. However, in terms of disparity inside the regions, there seem to be opportunities for improvement. First, the large amount of variation inside Somali calls attention to an urgent need to reduce class sizes in the region's schools, undermines the credibility of the data, or both. Second, although we acknowledge the challenges involved in the assignment of teachers, the fact that regions have woredas with both low and high pupil-teacher ratios suggests that there is scope for improving the efficiency of teacher allocation in all regions.

v. English and Mathematics Books per Student

In order to know the number of books available per student, the EMIS team collects the number of textbooks per subject and per grade in every school. Textbooks are a crucial resource for successful student learning in the Ethiopian education system. Ideally, each student should have access to one textbook for each of the subjects that the student is undertaking.

To analyze the availability of textbooks across locations, we chose the reported availability of English and Mathematics textbooks for two main reasons: first, these two subjects are the foundations of learning, and second, they are the only two subjects present in all grades of the curriculum in all regions. Overall, in Ethiopia, a total of 0.82 Mathematics and 0.79 English books per enrolled student were reported. However, the analysis of the regional situation suggests an inefficient allocation of resources: while some regions have more than one book per student, others have less than one.

Figure 24: Mathematics textbook pupil ratio by region and by woreda

Figure 24 and Figure 25 show the availability of Mathematics and English textbooks by region and woreda, respectively. The figures show that four regions, Beneshengul-Gumuz, SNNP, Oromia, and Somali, on average need special attention in terms of textbook availability, because the aggregated numbers of Mathematics and English textbooks in those regions are lower than the numbers of students. On the contrary, the charts show that other regions have more books than students, signifying inefficient allocation of textbooks between regions. Moreover, regions like Amhara, which despite having more than one book per student overall also have woredas with a low number of textbooks per student, highlight problems of efficient allocation of these resources within regions.


Figure 25: English textbook pupil ratio by region and by woreda

B. Inspection Indicators

Performance

On average, schools in Ethiopia have an inspection performance score of 57%. Figure 26 shows the distribution of the average performance score across regions and woredas. The gray line and label show the average result of each region, and the colored dots represent the average result of each woreda. The region with the best aggregated score is Addis Ababa, followed by Tigray. In descending order, Harari, Amhara, Dire Dawa, SNNP, Oromia, Beneshengul-Gumuz, and Gambella have scores between 0.50 and 0.60. Finally, Somali and Afar lie at the bottom of the graph, with average performance scores below 0.5.

The chart offers the opportunity to analyze the variation of each region's woredas in terms of their average inspection score. The red dots, which highlight values below 0.50 (the cutoff for Level 1), show that the regions in the middle group have woredas that are lagging behind.


Figure 26: Average performance score by Region and Woreda

Figure 27 draws the spatial distribution of average performance in each zone of the country. Here it is more evident that the zones with the lowest performance scores are concentrated in Somali and Afar. However, it is essential to highlight that other regions also have zones with low performing schools.

Figure 27: Spatial distribution of average performance inspection result by Zone


C. Learning Indicators

i. National Learning Assessment

The National Learning Assessment evaluates a regionally representative sample of students in Grade 8. The assessment randomly selects 40 students in each sampled school and evaluates them on Mathematics and English questions. The average NLA score of schools by region is represented by the gray lines in Figure 28, and the average score of each school is represented by the green dots. On average, some regions did better than others. The highest scorer was Addis Ababa, with a mark of 39.09, and the lowest average score was in Beneshengul-Gumuz, with 27.35, showing significant variation in performance between regions. However, the fact that the scores of the sampled schools are widely spread around the regional averages supports one of the main arguments of this report: there is also large variation within regions.

Figure 28: Average National Learning Assessment score by region and by school

ii. Grade 10 and Grade 12 Examinations Results

At the end of the academic year, students in grade 10 and grade 12 take a national examination that determines whether they can graduate from the first and second cycles of secondary education, respectively, and continue to the next level of education. The main advantage of using national examination results is that the exam is undertaken by all schools in the country, eliminating any source of selection bias in the analysis. The main caveat, however, is that the national examinations are only available for secondary schools, which comprise a low proportion of total schools compared to primary.

Out of 100, the average grade 10 examination score of schools in our data was 40.80. Figure 29 shows that, on average, four of the 11 regions scored above the national average. Similarly, the average grade 12 examination score of schools in our data was 48.17 out of 100. Figure 30 shows that, on average, only two of the 11 regions scored above the national average.

Figure 29: Average Grade 10 exam by region and by school

Figure 30: Average Grade 12 exam by region and by school


3. Exploring Relationships between EMIS, Inspection and learning

A. Distribution of Schools by Inspection Level

One of the key indicators in the inspection data is performance. It is calculated as a weighted average of the input (0.25), process (0.35), and output (0.40) indices, and therefore provides a score for the school's overall performance. Using the most recent inspection data⁴ available for any given school, the histogram below shows the frequency distribution of schools in Ethiopia.

Figure 31: Distribution of schools by inspection performance

More than 90% of the schools fall within Inspection Levels 1 and 2. Additionally, the distribution highlights a big jump in the number of schools on the right side of the thresholds, i.e., from Level 1 to Level 2 at the 50% performance score and from Level 2 to Level 3 at 70%.

To further explore these inspection results, we first divide schools into bins of 10% performance each; in other words, we group schools based on their performance index scores. This mainly allows comparison of schools within an inspection level but still at an aggregated bin level. Second, we use EMIS indicators as measures of comparison at the bin level.
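A sketch of this procedure, using the weights given above (0.25, 0.35, 0.40); column names and values are illustrative:

    import numpy as np
    import pandas as pd

    # Hypothetical index scores (0-1) from the inspection instrument,
    # plus one EMIS indicator for comparison.
    df = pd.DataFrame({
        "input_idx":   [0.40, 0.55, 0.70, 0.85],
        "process_idx": [0.45, 0.60, 0.75, 0.90],
        "output_idx":  [0.35, 0.50, 0.80, 0.95],
        "g4_survival": [0.50, 0.58, 0.66, 0.74],
    })

    # Weighted performance score: 25% inputs, 35% process, 40% outputs.
    df["performance"] = (0.25 * df["input_idx"]
                         + 0.35 * df["process_idx"]
                         + 0.40 * df["output_idx"])

    # Group schools into 10% performance bins and compare the average
    # EMIS indicator across bins.
    df["perf_bin"] = pd.cut(df["performance"], bins=np.arange(0, 1.1, 0.1))
    print(df.groupby("perf_bin", observed=True)["g4_survival"].mean())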

B. Relationship between EMIS indicators and Performance

The EMIS data enables us to measure several school-level indicators on equity, efficiency, and quality, such as the Gender Parity Index, the Grade 2 to Grade 1 ratio, the Grade 4 survival rate, and the pupil-teacher ratio. Relating these EMIS indicators to Inspection is useful (i) to understand the direction of the relationship between the two datasets and (ii) to establish correlations, if any. This exercise is especially interesting since both Inspection (via external assessment) and EMIS (via self-reporting) generate data on school performance.

⁴ The latest round of inspection data is Round 2, but it covered only 60% of the schools by 2011 E.C. Therefore, for all the schools in the EMIS that were not yet inspected in Round 2 but had data from either the Re-inspection or the Baseline, we use the Re-inspection or Baseline values.

All the charts below have Performance Index (as bins of 10% each) on the x-axis and different EMIS indicators

on the y-axis. Bar labels display the average value of the EMIS indicator across all schools which fall within

the same bin.

Due to the lack of reliable school-age population estimates at the school level, the Gender Parity Index (GPI) is calculated here as the ratio of total female enrolment to total male enrolment. In this case, GPI indicates the difference in access to education for girls versus boys. The graph (Figure 32) shows that GPI seems to improve with performance, although the change is small. Except for the schools with performance higher than 90%, GPI shows disparity in favor of boys.

Figure 33 shows that the number of students per teacher falls as the performance index increases. Therefore, consistent with the literature on small class sizes for quality education, pupil-teacher ratio (PTR) and performance are negatively related. The difference in PTR, however, appears to be quite small for schools that fall between the 40% and 90% performance scores.

Figure 32: Gender Parity Index by performance bins, 2011 E.C.

Figure 33: Pupil-Teacher Ratio by performance bins, 2011 E.C.


Apart from equity and quality, schools' efficiency in utilizing limited resources is crucial, and we expect performance to be positively related to efficiency indicators. The Ethiopian education system is challenged by high dropout rates even in the early grades. As one of the efficiency indicators, the Grade 2 to Grade 1 ratio measures the retention rate for Grade 1, which is usually the entry point into the schooling system. As we can see, there is a slight upward trend in the G2/G1 ratio, but schools close to the cut-off points show no discernible difference in their averages, especially those in the Level 1 and 2 categories.

Figure 34: G2 to G1 Ratio by performance bins, 2011 EC

Figure 35: Grade 4 Survival Rate by performance bins, 2011 E.C.


The other indicator of efficiency is the survival rate. We use the Grade 4 survival rate, which indicates the percentage of students from a cohort enrolled in Grade 1 who finish Grade 4, i.e., the first cycle of primary education. The graph above indicates a positive correlation between the survival rate and performance. Moreover, the survival rate seems relatively more correlated with performance than the other EMIS indicators. Therefore, in the sub-section below, we discuss this relationship in greater detail and draw some insights.

Analysis I: How do performance and Grade 4 survival rate relate?

Figure 36: Grade 4 Survival Rate versus Performance

The scatter plot above shows all the schools by their performance score and G4 survival rate. The scatter plot is dense between 0.50 and 0.70, as most schools lie within this range. Moreover, since there is a lot of variation in the data, the two measures have only a weak positive correlation of 0.2.

Nevertheless, a linear regression model of Grade 4 survival rate on performance is statistically significant. In simpler words, if the performance score of a school improves by 0.1, or 10 percentage points, the G4 survival rate improves by about 5.45 percentage points. Because of the simplicity of the model, this result best serves to indicate a positive relationship between the two, not a causal effect of performance on survival rate. Moreover, the low correlation means that performance explains little of the variation in survival rates, i.e., estimates predicted using this model are unlikely to be precise.

P-value: < 0.0001

Equation: Survival Rate 2011 G4 = 0.544697 * Performance + 0.289568

Coefficients
Term         Value     StdErr     t-value  p-value
Performance  0.544697  0.0156917  34.7123  < 0.0001
Intercept    0.289568  0.0090055  32.1547  < 0.0001
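The same regression is straightforward to reproduce with standard tools. A sketch using simulated data (the actual analysis runs on the school-level file; the numbers below only mimic the weak relationship reported above):

    import numpy as np
    import statsmodels.api as sm

    # Simulated stand-in for the school-level file: a weak positive link
    # between performance and G4 survival, plus noise.
    rng = np.random.default_rng(0)
    performance = rng.uniform(0.2, 0.9, size=2000)
    survival = 0.29 + 0.54 * performance + rng.normal(0, 0.25, size=2000)

    # OLS of survival rate on performance, as in the analysis above;
    # the fitted params should land near the simulated intercept and slope.
    model = sm.OLS(survival, sm.add_constant(performance)).fit()
    print(model.params)    # intercept, slope
    print(model.rsquared)  # low R-squared: little variation explained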

Figure 37: Grade 4 Survival Rate versus Performance with trend line

We can draw two useful insights from this analysis. First, performance is defined as a cumulative measure of a school's performance covering infrastructure, management, community participation, and teaching and learning. But it seems the performance index is unable to capture the full picture, as in practice much of the variation in survival rate is left unexplained. Second, there is a lack of even moderate correlation between EMIS and inspection indicators across the board⁵, not just for the Grade 4 survival rate. This should be further investigated in order to understand why the relationship is so weak and whether the data collection and inspection processes need to be improved so that they are consistent with each other.

C. Relationship between Performance and Learning

Thanks to the integrated data set we can continue investigating questions that relate different sources of

information. In this case, we will explore the relationship between the inspection and the learning results.

⁵ See Annex YY.


This section presents descriptive and statistical analysis of the level of inspection of a school and the average

result of its students in exams.

This analysis is relevant for two main reasons. First, it allows us to investigate whether the characteristics of a school that the inspection team evaluates are related to the level of learning that happens in each school. Naturally, we would expect schools with superior input, process, and output results to have better learning outcomes. The main question to study is: are inspection results related to learning? Second, if we find that this is the case, we could investigate which characteristics evaluated in the inspection process are most relevant for learning. Are infrastructure, resources, planning, students' engagement, and other school characteristics equally important for learning to happen?

In the previous part we found that, even though it cannot explain much of the variation, the inspection level of a school was a significant predictor of its EMIS results. However, we will see in this section that performance results cannot explain the variation in learning results, nor are they a significant predictor of them.

To investigate this relationship, we use schools that had inspection data available and that undertook the national grade 10 examination.⁶ Of the 2,604 schools in our data set that had Grade 10 examinations, 2,034 (78%) also had inspection results. The histogram in Figure 38 shows the distribution of grade 10 scores of all the schools that took the exam, colored by availability of inspection data. The fact that the majority of the schools that took the exam have inspection information, and that inspection results are available across the whole score distribution, reduces the probability of bias in our analysis.

⁶ The Grade 10 national examination is the most complete data set in terms of learning. Moreover, we came to similar conclusions when we performed the same analysis with the other learning results (see annex).


Figure 38: Grade 10 exam results with and without inspection

Now, exploring the relationship, Figure 39 graphs the Performance Index (as bins of 10% each) on the x-axis and the average exam score on the y-axis. Bar labels display the average grade 10 exam score across all schools within the same bin. Contrary to the finding between EMIS and inspection, the graph indicates an absence of correlation between the learning score and the performance results. There seems to be no difference in learning results among the bins in the middle. More surprisingly, the worst performing schools, on average, show the best results in learning. To validate this deduction, in the sub-section below, we discuss this relationship in greater detail and draw more conclusive insights.

Figure 39: Grade 10 exam score by performance bins, 2011 E.C.


Analysis II: How do performance and Grade 10 exam results relate?

Figure 40: Grade 10 exam results versus Performance

The scatter plot in Figure 40 shows all the schools by performance and Grade 10 exam score. The graph shows a great amount of variation along both the x and y axes: low performing schools had both low and high results in the exam, and similarly, high performing schools had low and high results. To the naked eye, it is already possible to see the absence of correlation between the two variables.

To study this further, we calculated a linear regression of Grade 10 exam results on performance, and we found that the model and its coefficients are statistically insignificant. The model, described below and illustrated in Figure 41, corroborates our prediction in the previous sub-section. The p-value and R-squared show that the model is insignificant and that the correlation between these two variables is practically nonexistent.

P-value: 0.68866
R-squared: 0.0000855

Equation: Grade 10 exam result = 0.776669 * Performance + 40.4361

Coefficients
Term         Value     StdErr   t-value   p-value
Performance  0.776669  1.93811  0.400736  0.68866
Intercept    40.4361   1.15792  34.9213   < 0.0001


Figure 41: Grade 10 exam results versus Performance with trend line

In conclusion, we found, first, that the inspection performance results are not a significant predictor of scores in the grade 10 exam and, second, that they are unable to explain any of the variation in the learning results.

D. Joint Analysis with Additional Datasets

The possibilities of joint analysis go beyond a single report. That is why we have focused our efforts on creating the foundations that will allow any actor in the sector to perform integrated analysis. Previous sections have shown some of the possible analyses using the main data sets: EMIS, Inspection, and Learning. In this section, we show simple but powerful examples of integrating additional data sets. Apart from the spatial data used across the report, we will use the school grant allocations and the schools selected for Phase 1 and Read II to answer questions such as: is the allocation of school grants equitable? Are the schools selected for specific projects representative of all the schools in the country?

Analysis III: How do performance and the allocation of school grants relate?

All government schools in Ethiopia receive an annual school grant from the MoE. Table 9 explains the grant calculation and allocation process.


Table 9: School grant allocation matrix

Level                      ETB per student
O-class                    60
ABE                        50
Grade 1-4                  50
Grade 5-8                  55
Grade 9-10                 60
Grade 11-12                70

Small schools (<200 students)  ETB (lump sum)
ABE                            10,000
Primary                        10,000
Secondary                      12,000

Top-ups
Pastoralist woreda top-up  5%
Emerging region top-up     5%
IERC top-up                10,000 ETB
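Read as a formula, the matrix can be sketched as below. This is a simplified illustration: in particular, we assume the lump sum acts as a minimum for schools with fewer than 200 students, and the exact rules applied by the planning directorate may differ.

    # A sketch of the allocation matrix in Table 9; names and rules are
    # simplified for illustration.
    PER_STUDENT = {"oclass": 60, "abe": 50, "g1_4": 50,
                   "g5_8": 55, "g9_10": 60, "g11_12": 70}

    def school_grant(enrol_by_band, school_type="primary",
                     pastoralist=False, emerging=False, ierc=False):
        grant = sum(PER_STUDENT[band] * n for band, n in enrol_by_band.items())
        # Assumed: the lump sum is a floor for small schools (<200 students).
        if sum(enrol_by_band.values()) < 200:
            grant = max(grant, 12000 if school_type == "secondary" else 10000)
        # Percentage top-ups for pastoralist woredas and emerging regions.
        if pastoralist:
            grant *= 1.05
        if emerging:
            grant *= 1.05
        # Flat top-up for Inclusive Education Resource Centres.
        if ierc:
            grant += 10000
        return grant

    print(school_grant({"g1_4": 120, "g5_8": 60}))  # small primary: 10,000 ETB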

Using the school grant data from the planning directorate and enrolment figures from EMIS, we calculate the grant amount per student for all schools. The scatter plot below has performance scores on the x-axis and grant per student on the y-axis.⁷ We can clearly see that for a large majority of schools, the grant amount per student ranges from 50 to 70 ETB, irrespective of the performance score. This indicates an important degree of equity in the disbursement of school grants at the national level.

⁷ This visualization excludes schools where the per-student amount was more than 250 ETB, as these appear to be outliers.


Figure 42: Average school grant per student, 2011 E.C.

Further, we explore this relationship in greater detail using performance index bins. We find that the grant amount per student is, on average, higher for the schools in the lowest performance bins.

Figure 43: Average school grant per student by performance bins, 2011 E.C.


However, from the point of view of the school administration, it is arguably the total amount received that really matters. School management, along with teachers and PTSAs, drafts school improvement plans subject to budget constraints. Therefore, the graph below shows the average total school grant received by a school within each performance bin.

Figure 44: Total school grant per school by performance bins (average), 2011 E.C.

Here, each bar height is calculated by taking the sum of the school grants of all the schools in that performance bin and dividing it by the number of schools.

It is interesting to note that the total value of the school grant increases with performance. In other words, a school with a performance score of 20-30% receives 15,000 ETB on average, whereas the total grant sums to more than 50,000 ETB for a school with a performance score higher than 70%. Schools with greater sums of money at hand are more likely to be able to execute bigger plans, like the construction of classrooms or the provision of WASH facilities, and thereby contribute to student learning.


Figure 45: Average school size by performance bins, 2011 E.C.

To better understand this pattern, we look at the relationship between school size (i.e., total enrolment) and performance. When we graph the average total number of students with respect to performance bins⁸, we can see that this distribution is consistent with that of the average total school grant above. Whether the size of the school contributes to the school's performance (and by how much) requires further investigation.⁹ Nevertheless, this report suggests careful consideration of the school grant calculation so that schools with lower performance scores can be provided with more funds.

Analysis IV: Phase 1 and Read II schools

We can examine the distribution of targeted schools in specific projects, such as the Phase 1 schools from GEQIP-E and the schools from Read II, and relate this to EMIS, inspection, or learning results. The objective of this sub-section is only to give descriptive examples of the type of analysis that can be done for specific projects. The following map shows the spatial distribution of Phase 1 and Read II projects in the country.

⁸ The colours in this graph correspond to the percentage of total student enrolment. For instance, the darkest bar, i.e. the 50-60% performance bin, includes the highest number of schools, and in total these schools enrol 43% of all students.
⁹ As a start, Annex XX presents the relationship between school size and learning outcomes.


Figure 46: Woredas with Phase 1 and Read II schools

Similarly, thanks to the integrated data set, we can relate these variables with EMIS, Inspection, or Learning results. For instance, we can analyze the number of schools and the number of students that these two projects cover (Figure 47), or we can investigate whether these projects cover a representative sample of schools in terms of inspection performance results (Figure 48).

Figure 47: Number of schools and students in Phase 1 and Read II schools

Figure 48: Frequency of Performance Score in the final data set, colored by project


4. Islands of opportunities: positive outliers

The analysis in previous sections shows that opportunities in education vary largely among regions and that this variation is related to the geographic location of schools. We have seen that some regions are lagging behind on some of the indicators, and also that regions that perform well on average have zones, woredas, and schools with low scores.

For some of the analysis, we focused on the woredas that are below the average of their region, because we believe that identifying them is the first step toward creating positive change. However, we also believe that every location contains islands of opportunity: positive outliers that demonstrate that better education is possible in every context. For example, identifying the best performing schools in the worst performing locations can teach us how they are overcoming contextual challenges.

To provide one example of how the data can help us identify positive outliers, the following table selects the bottom 15 zones in the country in terms of inspection performance. Although these zones lag behind other zones in the quality and performance of their schools, they also contain schools that perform relatively well in terms of learning. Table 10 selects, within each of these zones, the 3 schools with the highest scores in the Grade 10 national exam. Policy makers in such zones could target these schools to learn what they are doing differently to achieve better learning results.
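As a sketch of how Table 10 can be produced from the integrated data, assuming it has been exported to a file with hypothetical names such as integrated_schools.csv, zone, performance and grade10_score:

```python
# Minimal sketch of Table 10: take the 15 zones with the lowest average
# inspection performance, then the 3 schools with the highest Grade 10
# exam score within each. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("integrated_schools.csv")

# Zones with the lowest mean inspection performance
bottom_zones = df.groupby("zone")["performance"].mean().nsmallest(15).index

top_schools = (df[df["zone"].isin(bottom_zones)]
               .sort_values("grade10_score", ascending=False)
               .groupby("zone")
               .head(3)                       # top 3 schools per zone
               .sort_values(["zone", "grade10_score"],
                            ascending=[True, False]))
print(top_schools[["zone", "school_name", "grade10_score", "performance"]])
```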

In the same manner, we realized that it can be useful for policy makers and leaders to identify the schools, woredas or zones at the top and bottom of any indicator's distribution. For that reason, we created a tool that allows anyone to identify bottom and top performers in any specific location. We explain this and the other analysis tools we have created in the following section.


Table 10: Top performing schools in bottom performing zones

5. Tools for analysis

We have argued that analysis should be helpful to all levels of administration. We believe that the data platform and the integrated data sets have the potential not only to provide a comprehensive understanding of the sector as a whole, but also to help regions, zones, woredas and schools improve their educational outcomes.

In this section, we present four tools we have created to help leaders monitor, and make better policy for, the sector and the school(s) they administer. These tools are only a few examples of the many that could be created with the data at hand and with the establishment of the data platform.


First, in order to provide a useful and actionable overview of the educational situation in each school, woreda, zone and region of the country, we created more than 37,000 report cards: one for each school (>37,000 school report cards), each woreda (>1,000 woreda report cards), each zone (>100 zone report cards), and each region (11 region report cards). These reports show basic information about each location, such as the number of students enrolled and the number of teachers, as well as its results on each of the key EMIS, Inspection and learning indicators. The main objective of the report card is to give leaders a comprehensive overview of their performance on each key indicator and to help them identify where their administration is lagging behind. For regions, zones and woredas, we added additional pages that explore the results of each indicator in more detail. For example, the overview of a specific woreda could highlight that it has a low GPI; the extra information would then allow that woreda to identify in which grades the GPI drops the most and which schools have the lowest scores. The subsection below shows one example of the report cards.
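Generating report cards at this scale is mechanical once the integrated data set exists: group by location and write one summary per group. A minimal sketch for woreda-level cards, with hypothetical file and column names (the real report cards are richer, multi-page documents):

```python
# Minimal sketch: one plain-text report card per woreda. The hypothetical
# columns (students, gpi, performance) stand in for the full set of key
# EMIS, Inspection and learning indicators.
import pandas as pd

df = pd.read_csv("integrated_schools.csv")

for woreda, grp in df.groupby("woreda"):
    card = (f"Report card: {woreda}\n"
            f"  Schools:            {len(grp)}\n"
            f"  Students:           {grp['students'].sum()}\n"
            f"  Average GPI:        {grp['gpi'].mean():.2f}\n"
            f"  Average inspection: {grp['performance'].mean():.1f}\n")
    with open(f"report_card_{woreda}.txt", "w") as out:
        out.write(card)
```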

The remaining three tools are dashboards for interactive analysis. We created three dashboards that allow the study of key EMIS, Inspection and learning indicators for any location. These are (i) the Dashboard to compare key EMIS, Inspection and Learning results by regions and woredas, (ii) the Dashboard to identify bottom and top performing schools, woredas and zones in key EMIS, Inspection and Learning results by region, and (iii) the Dashboard to relate EMIS, Inspection and Learning results.

The first dashboard produces the graphs we used in the section describing the key variables. It allows users to select the indicators they want to analyze and to see both the average result by region and the spread of results across the woredas within each region.
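The computation behind this dashboard amounts to a two-level aggregation. A minimal sketch, with gpi as an illustrative indicator and hypothetical file and column names:

```python
# Minimal sketch: regional mean of an indicator plus the spread of
# woreda-level means within each region. Names are illustrative.
import pandas as pd

df = pd.read_csv("integrated_schools.csv")

woreda_means = df.groupby(["region", "woreda"])["gpi"].mean()
by_region = woreda_means.groupby(level="region").agg(["mean", "min", "max", "std"])
print(by_region)  # regions whose woredas vary widely stand out immediately
```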

The previous tool is useful for seeing whether a region has woredas with very low scores on which it can target its attention. To complement it, we created a second dashboard that allows the user to choose a region and identify which schools, woredas or zones are performing best and worst. The Dashboard to identify bottom and top performing schools, woredas and zones in key EMIS, Inspection and Learning results by region allows users to select any location in the country, choose the key indicators to analyze, and explore the top and bottom performers. The dashboard is flexible in the type of location displayed (school, woreda or zone) and in the number of items shown, from the top and bottom 5 to the top and bottom 100.
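The core query of this dashboard can be sketched as a single function, with location level, indicator and number of items as parameters; the file, column and region names below are all hypothetical:

```python
# Minimal sketch: bottom and top n locations for a chosen indicator
# within a chosen region. Names are illustrative.
import pandas as pd

def top_and_bottom(df, region, indicator, level="woreda", n=5):
    scores = (df[df["region"] == region]
              .groupby(level)[indicator].mean()
              .sort_values())
    return scores.head(n), scores.tail(n)  # bottom n, top n

df = pd.read_csv("integrated_schools.csv")
bottom, top = top_and_bottom(df, region="Amhara", indicator="performance", n=5)
print("Bottom 5 woredas:", bottom, sep="\n")
print("Top 5 woredas:", top, sep="\n")
```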

The final dashboard is the Dashboard to relate EMIS, Inspection and Learning results. It allows the user to choose the indicators to compare and investigate their correlation: EMIS indicators vs. Inspection indicators, Inspection indicators vs. learning indicators, or EMIS indicators vs. learning indicators.
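Underneath, the dashboard computes pairwise correlations between the chosen indicators. A minimal sketch with illustrative column names:

```python
# Minimal sketch: pairwise Pearson correlations between an EMIS, an
# Inspection and a learning indicator. Column names are illustrative.
import pandas as pd

df = pd.read_csv("integrated_schools.csv")

pairs = [("pupil_teacher_ratio", "performance"),    # EMIS vs Inspection
         ("performance", "grade10_score"),          # Inspection vs learning
         ("pupil_teacher_ratio", "grade10_score")]  # EMIS vs learning
for x, y in pairs:
    print(f"corr({x}, {y}) = {df[x].corr(df[y]):.2f}")  # NaNs are ignored
```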


The subsections below provide images that show how these tools look, as well as more information about how to use them.

Woreda, Zone and Region Report Cards

The images below show examples of the four pages of the report cards. The first page shows the overview of the location as well as all its results on the key EMIS, Inspection and Learning indicators. The second page shows the location's EMIS indicator results in more detail. The third page details its Inspection indicator results. Finally, the fourth page details its results on the national exams.

[Example report card pages]

Dashboard to compare key EMIS, Inspection and Learning results by regions and woredas


Dashboard to identify bottom and top performing schools, woredas and zones in key EMIS, Inspection and Learning results by region


Dashboard to relate EMIS, Inspection and Learning results

VI. Conclusion and recommendations

To convert data into impact, the sector needs to expand its use of the available data. In this report we have explored the main technical barriers that limit the use of data, and we propose the creation of the data platform as the key action for making more analysis possible. Although we were able to merge the data sets successfully, making the merging of information easier is a necessity that will only be achieved if the entities collecting data coordinate their efforts. Integrating EMIS codes into all data sets is one of the main activities the whole sector should continue to adopt.


The joint analysis and the tools created during the production of this report are concrete examples of the possibilities that integrated data sets offer. Indeed, our report provides important lessons that can help improve the usefulness of the variables collected. For instance, we conclude that there is an urgent need to expand the availability of learning outcome studies. For EMIS and Inspection indicators, the sector produces data for all schools in the country. For learning, however, this is only true for secondary schools, because at lower levels of education the studies are sample-based. This limits the capacity to monitor and understand the gaps in learning, the most important indicator. Additionally, we offer important lessons for improving the collection of EMIS, inspection and learning data. By relating Inspection to EMIS, and Inspection to learning, we discovered that a school's inspection level was a better predictor of EMIS results than of learning outcomes. Finally, we created several example tools that can help convert data into action. Tools like the report cards or the interactive dashboards are easy ways to identify where, and on what, a specific location needs to improve. Sharing useful analysis with lower levels of education, like schools and woredas, can be one of the most effective ways to help teachers, principals, supervisors, parents and education experts improve education.

All of this exemplifies the uses and benefits of a system that allows easy integration and analysis of the data collected. To continue advancing analysis in the education sector of Ethiopia, and based on the findings in this report, the team suggests the following recommendations:

Continue the creation of the data platform: a system that sets the guidelines for data collection, that brings together and integrates all the data produced in the sector, and that provides useful analysis to all levels of education. This system should be accessible to all relevant actors in the sector (e.g. directors, planning experts, etc.).

To allow the integration of data sets, continue adopting EMIS school codes in all the data collected in the sector. There is an urgent need to include EMIS school codes in all the data produced by the NEAEA, especially the grade 8, grade 10 and grade 12 national examinations. Moreover, this initiative should also cover data sets not collected by Government institutions (e.g. the Young Lives study, school mapping, etc.).

Improve the EMIS data entry software to prevent any duplication of EMIS school codes (a minimal sketch of a duplicate check appears after this list).

Encourage more analysis of the data at hand, both inside the institutions that collect the data and outside them. Analysis should not happen only inside the directorates collecting the data sets; it should also occur in other directorates (e.g. PRMD, TDP), at other levels of administration (e.g. REBs, WEBs), and outside Government institutions (e.g. universities, researchers).


Perform analysis of the available data to identify problems in data quality. EMIS enrolment data is of higher quality because it is widely used; other indicators, like WASH facilities, are of low quality or incomplete. Stressing the need for these variables in analysis and action will improve their quality.

Perform more analysis of the available data to identify indicators that are not useful for action. This will improve the data collection process by, for instance, shortening the questionnaires.

Provide useful analysis to lower levels of education. This includes disaggregated analysis that allows easier identification of the main problems in a specific location and that does not mask variation (e.g. regional averages hide variation within regions).

Expand the collection of learning outcomes at the primary level. The sector produces EMIS and inspection data for all schools in the country, but learning outcome data only for secondary schools. For primary schools, there are only sample-based studies that are not conducted every year. It is important to expand our understanding of learning at lower levels of education.

The lack of correlation between inspection scores and learning outcomes suggests that the inspection process is not capturing school quality well. Two hypotheses that could explain this lack of correlation are the subjectivity of some of the inspection standards and the lack of independence of the inspection process (the inspection directorate is not an independent body). The underlying reason should be further investigated.

The report cards and the dashboards should be discussed, improved, and shared with relevant actors in the sector. Moreover, these are only a few examples of what can be built; more tools that help transform data into action should be created.
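As promised above, here is a minimal sketch of the duplicate-code check behind the third recommendation; the file and column names are hypothetical:

```python
# Minimal sketch: flag EMIS school codes that appear more than once
# before any merge is attempted. Names are illustrative.
import pandas as pd

emis = pd.read_csv("emis_schools.csv")

dupes = emis[emis.duplicated("emis_code", keep=False)]
if dupes.empty:
    print("No duplicated EMIS school codes.")
else:
    print(f"{dupes['emis_code'].nunique()} codes appear more than once:")
    print(dupes.sort_values("emis_code"))
```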

The integration of the data, the creation of the data platform, the joint analysis and the tools we have created set the foundation for continuing to boost analysis and, thereby, for converting data into impact in the education sector of Ethiopia. One main caveat of our study is that the diagnosis and the solutions presented here are purely technical. Even if we manage to create a system that integrates data and makes it ready for analysis, we may be missing problems that lie in the political sphere. Future studies should ask what the main political barriers to the use of data are and how they could be removed. We hope this report offers supporting arguments for such a discussion.


VII. ANNEX

Information about inspection indices and standards

Table 11: Inspection indices and standards

School resources index
  Standard 1 (4%): The school has fulfilled and is in line with the set standards for classrooms and other buildings, facilities, pedagogical resources and implementing documents.
  Standard 2 (4%): The school has fulfilled the financial resources needed to improve the teaching-learning process and execute its priority areas.
  Standard 3 (4%): The school has sufficient, suitably qualified directors, teachers and other staff.
  Standard 4 (4%): The school has created a conducive teaching-learning environment that is safe and secure for the school community.

School management index
  Standard 5 (3%): The school has created a well-organized Education Development Army.
  Standard 6 (3%): The school has a shared vision, mission and values.
  Standard 7 (3%): The school has prepared a participatory school improvement plan.
  Standard 14 (2%): The school keeps records of data regarding female students and students with special needs and provides special support.
  Standard 16 (3%): The school's leaders, teachers, students and support staff work as a team organized in the Development Army.
  Standard 19 (2%): The school's leadership and the responsible bodies of its various arrangements monitor whether the plans are implemented with the required time, quality and quantity.
  Standard 20 (2%): The school has established and implemented a system for the proper utilization of human, financial and material resources.
  Standard 21 (2%): The school has an effective partnership with parents and the local community.

Student engagement index
  Standard 8 (3%): Students' learning and participation have increased.
  Standard 9 (3%): Students have made progress in their learning.
  Standard 10 (2%): Students show positive attitudes towards their school.

Teacher effectiveness index
  Standard 11 (3%): Teaching is well planned, supported by the use of suitable resources, and aimed at high educational results.
  Standard 12 (3%): Teachers have adequate knowledge of the subject they teach.
  Standard 13 (3%): The school leadership and teachers use appropriate and modern teaching methods that help increase the participation of all students.
  Standard 15 (2%): Teachers, directors and supervisors have undertaken the continuous professional development (CPD) programme.
  Standard 17 (2%): Teachers evaluate, give feedback on, and improve the curriculum based on whether it is meaningful, participatory, and meets the developmental level and needs of students.
  Standard 18 (3%): The assessment of students' performance is accurate and students are given appropriate feedback.

Intermediate outcomes index (a)
  Standard 24 (10%): Students demonstrate responsible behaviour, ethical values, cultural understanding and protection of their environment.
  Standard 25 (6%): There is good communication and interaction among the school's teachers, leaders and support staff, as well as a sense of accountability and of fighting rent-seeking practices.
  Standard 26 (6%): The school has secured support thanks to the strong relationships it has created with parents, the local community and partner organizations.

Intermediate outcomes index (b)
  Standard 22 (10%): The school has successfully met the national goals for education access, internal efficiency and the education sector development program.
  Standard 23 (8%): Students' classroom, regional and national examination results have improved relative to regional and national expectations.

Additional description of key variables

Figure 49: Average Input score by region and by woreda


Figure 50: Average Process score by region and by woreda

Figure 51: Average Output score by region and by woreda


Figure 52: Average School resources index score by region and by woreda

Figure 53: Average School management index score by region and by woreda


Figure 54: Average Student engagement index score by region and by woreda

Figure 55: Average Teacher effectiveness index score by region and by woreda


Figure 56: Average Intermediate outcomes index (a) score by region and by woreda

Figure 57: Average Intermediate outcomes index (b) score by region and by woreda


Figure 58: Average deliverology score by region and by school


Annex YY

Figure 59: Scatter plots between EMIS indicators & performance


Annex XX

Figure 60: Average Grade 10 examinations score versus school size

Figure 61: Average Grade 12 examinations score versus school size


Figure 62: Average student score in Mathematics (NLA) versus school size

Figure 63: Average student score in English (NLA) versus school size