Measuring Value Added in Higher Education: A Proposal

This article was downloaded by: [University of Arizona]
On: 29 October 2014, At: 17:46
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Education Economics
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/cede20

Measuring Value Added in Higher Education: A Proposal
Tony Mallier a & Timothy Rodgers a
a Coventry Business School, Coventry University, Priory Street, Coventry, CV1 5FB, UK
Published online: 28 Jul 2006.

To cite this article: Tony Mallier & Timothy Rodgers (1995) Measuring Value Added in Higher Education: A Proposal, Education Economics, 3:2, 119-132, DOI: 10.1080/09645299500000012

To link to this article: http://dx.doi.org/10.1080/09645299500000012

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions



Education Economics, Vol. 3, No. 2, 1995

Measuring Value Added in Higher Education: A Proposal

TONY MALLIER & TIMOTHY RODGERS

ABSTRACT This paper proposes a measure of value added in higher education based on the earnings differentials between graduates and non-graduates. A monetary measurement of value added is calculated for each different class of degree. This enables us, firstly, to estimate the social rate of return for different degree classes and, secondly, to propose a monetary-based performance indicator which could be used in the process of allocating resources in higher education.

Introduction

After the prolonged period of higher education growth, public policy-makers are questioning the financial consequences of this growth and whether society receives a satisfactory return for the expenditure made. In order to measure performance, it is necessary to obtain some measurement of educational output. Fincher (1985), having noted that economists measured the value added in production processes by comparing input and output values, considered that such an approach could be applied to education. In a similar way, Taylor (1985) drew attention to the adoption of the value added approach as a basis for taxation, and expressed the opinion that if value added could be used to determine taxes the principle could also be applied to education. Fincher, Taylor and others have sought to explain and adapt the value added concept from essentially non-educational environments, where meaningful quantifiable variables exist, to education, where the onus has been upon qualitative change.

The principle of the value added approach as applied to education is simple: students enter with one set of academic qualifications and exit with an alternative and more valued set. Most of the attempts to measure value added in the UK have attempted to measure the distance the graduate has travelled academically, and have not sought to place a monetary value on the education. This has meant they have had to use some form of arbitrary weighting to measure performance which

T. Mallier and T. Rodgers, Coventry Business School, Coventry University, Priory Street, Coventry CV1 5FB, UK.

0964-5292/95/020119-14 © 1995 Journals Oxford Ltd


120 T. Mallier & T. Rodgers

we feel is unsatisfactory. Within the field of economics, human capital theory has suggested that a monetary value can be placed on education outcomes (for example, Blaug, 1965), and we see no reason why similar principles cannot be extended to the measurement of value added. We believe that it is necessary to avoid the use of arbitrary weights in the measurement process, as was, for example, used in the Polytechnics and Colleges Funding Council/Council for National Academic Awards (PCFC/CNAA) pilot study (McGeevor et al., 1990), and we further believe that it should be possible to quantify in monetary terms the value added to each graduate.

The Story So Far

The criticism is often made that the public does not, in general, get 'value for money' from the public sector. This is because it is not subject to the market mechanism, and consequently there are no competitive forces to ensure that public goods and services are produced efficiently. The public sector as a whole has been subject to experiments aimed at increasing efficiency, which range from techniques such as the use of internal markets to the use of performance indicators in the resource allocation process. The higher education sector has been no exception to this process, with the government making an abortive attempt to introduce an internal market into the university sector in 1990 (Witzel, 1991). Universities were asked to bid competitively for student places and were given a 'guide price' to base bids around. However, rather than competing, the universities all tendered at the guide price and so the experiment was withdrawn. The major alternative to using the internal market to guide the resource allocation process has been seen as the use of performance indicators. This method itself, however, is not without problems, with the key issue being the measurement of output or performance itself. Given that education is a service rather than something that produces tangible goods, and given that the output has no market price, it is difficult to put a monetary valuation on what the higher education institutes produce. This problem is compounded by the fact that education output is multi-dimensional in nature, so any indicators used in the resource allocation mechanism must take account of all the relevant variables or the result can be resource mis-allocation. In this paper, we consider firstly the issues involved in the identification of relevant performance indicators and then go on to investigate the measurement problems associated with the use of one such indicator, i.e. value added.

The multi-dimensional nature of educational output has led to the parallel development of different types of performance indicators, and this has led to a debate as to which are more valid to use in the resource allocation process. These indicators may be split into two main categories: firstly, those used to determine whether higher education is of satisfactory 'quality' and, secondly, those used to determine whether it represents value for money. This then poses the question of which of these two types of performance indicators we use in the resource allocation process. The answer, presumably, lies in the agenda of the decision- makers. Within the academic community, the emphasis has been on the quality of the higher education institution. Differing approaches to determine the quality exist; for example, Halpern (1987) suggested:



We recognize superior universities by their research reputations and by the size and cost of their physical plants. By these criteria, only the large research institutions will be judged as excellent ... smaller colleges and universities ... are doomed to mediocre standing.

In the new, questioning environment, will the evaluation of higher education using the above criteria alone be acceptable for the allocation of resources? Will an approach which fails to recognize undergraduate teaching as a key activity be acceptable? Presumably, what our political policy-makers, and, one assumes, potential students and employers, wish to know is whether the investment of time and money results in graduates with a satisfactory and relevant education which is valued outside the academic community.

Policy-makers, and other interest groups, wish to see performance indicators which will allow the effectiveness and quality of the teaching process to be evaluated. This coincides with the approach taken by Astin and Ayala (1987), who suggest that "true educational excellence is in the capacity to develop the talents of students".

This is the approach which we favour, and which was the approach taken in a pilot study undertaken by the PCFC in conjunction with the CNAA. This study attempted to identify the value added to students in the non-university sector of British higher education, with its results being reported by McGeevor et al. (1990). Two alternative approaches for estimating the value added to students who successfully completed higher education were examined. These were the use of indices and comparative value added (CVA). McGeevor et al. concluded that none of the results obtained from the use of indices were wholly satisfactory. This approach calculated a summary score for academic inputs and outputs, and, from these, calculated a measure of value added. The problem with this method was seen to be the lack of an 'objective' criterion for the measurement of inputs and outputs, and this, it was suggested, made the results rather arbitrary in nature. The CVA approach sought to overcome the problem of allocating arbitrary weights to inputs and outputs. It was claimed that this provided a 'level playing field' with which to make comparisons between institutions, courses and/or other subgroups.

CVA compares aggregated 'parent' data on student achievement, subdivided by entry qualification, with a subgroup. For example, it could compare the national results for economics degrees with the results from an individual institution. From this, it is then possible to compare the performance of the institution relative to the performance of the parent population. A fundamental criticism we have of this approach is that it makes no attempt to measure directly the value added of the individual student, and therefore gives us no benchmark to measure value for money against. Secondly, as pointed out by Hadley and Winn (1992), the CVA approach is not as objective as is claimed. It still requires arbitrary weightings to be assigned to degree classifications. The assumption is made, for example, that the value added to an individual who is expected to obtain a third class degree, but, in fact, obtains a lower second class, has the same value as that added to an individual who obtains a first class, rather than an expected upper second, classification. Experience suggests, however, that such a comparison cannot be made. As the degree class rises, most academics would accept that obtaining a higher grade becomes progressively harder.



An Alternative Approach

Our attempt to measure value added is based on the premise found in neo-classical economics that employees will be paid according to their marginal product. Therefore, individuals educated to graduate level will be paid more than those educated to A-level standard by virtue of their greater productivity resulting from the value added in higher education. This approach is not without its critics. There is a body of empirical evidence suggesting that factors other than relative productivity often explain pay differentials. For example, Frank (1984) suggests that the failure of employers to match marginal productivity and wages arises because it is too complex and expensive to measure individual contributions to production. Wages are, therefore, based on simpler and less expensive criteria, and individuals accept the consequences of such administrative mechanisms. Frank believes that this is because employees are concerned primarily with their position within a wages hierarchy rather than the actual wages received. In spite of this criticism, we note that earnings data show a strong relationship between education and earnings levels, and so we believe that the basic principle is still applicable. There are also a number of other issues which we need to consider when using income data to construct a measurement of value added in higher education. Firstly, it should be noted that in using income as our unit of measurement, we may underestimate the value of the employee to the employer, and hence the true value added via the higher education process. This is because, except for the marginal employee, the value of the marginal product will be above the wage rate.

The second key issue which requires to be addressed is that of distinguishing between the consequences of value added due to education and the effects of subsequent on-the-job training. Human capital theory implies that pay differentials received by workers reflect the different levels of exposure to on-the-job training. This can be used to explain why the older the graduates and the A-level-educated employees get, the more they are rewarded as their productivity rises with experience. This has been challenged, however: for example, Medoff and Abraham (1981) find that performance plays a relatively small role in explaining pay levels. As both graduates and A-level employees get older, they will have received similar on-the-job training designed to increase their productivity and salary levels. If, therefore, the differential between these two groups is fairly constant over the individual's career, the difference may be assumed to reflect the initial difference arising from the original educational input. This is based on the fairly strong assumption that these two different groups will have similar, but not necessarily the same, training opportunities within their employment, although in practice this may not be the case, as graduates may have access to in-house training schemes from which non-graduates are barred.

If we wish to distinguish between value added due to on-the-job training and that associated with education, it will be necessary to eliminate the on-the-job element from the differential. The differential tends to increase over time, which we believe reflects the greater training opportunities available to graduates. We considered measuring value added in terms of two potential approaches. The first approach was to base the comparison between the graduate and A-level-educated employee when both groups have around 5 years of work experience. The income levels of those with this level of experience are likely to reflect the individual's productivity, rather than initial starting salaries. Starting salaries, however, will be a less accurate guide to productivity, given that these relate to what the employer



believes is the potential productivity rather than the actual level. The second method, which was carried out by way of comparison, was to compare the salary levels of the graduate and A-level employee upon entry to the labour market for the first time, although this suffers from the above-mentioned problem.

The Model

The model constructed takes account of the key points outlined above. In what follows, we give an outline of the model (explanations of which are given in the appendix) and discuss the main factors taken into account in its construction. We measure value added using two different data sets. These relate to where the employee (graduate and non-graduate) has in the region of 5 years of work experience and where there is no work experience at all.

The model is split into two parts. The first part identifies the average earnings a graduate achieving a particular class of degree will attain in relation to average graduate earnings. This enables us to measure the relative value of each graduate. The second part of the model is concerned with measuring the relative earnings of those who achieve graduate level or equivalent education and those who achieve an A-level or equivalent standard of education. The two elements are combined to calculate the value added for the individual graduate on an annual basis.

The value added for the individual graduate is calculated as follows:

Value added per year = (Average income of graduate with a given degree class) − (Average income of employee educated to A-level standard)   (1)

Average income of graduate with given class of degree = (Average income of graduate with given degree class as a proportion of average graduate income) × (Average graduate income)   (2)

The value added for the individual who participates in higher education is calculated in terms of the difference between the graduate's expected income and the expected income of someone educated to A-level standard. This is taken to represent the influence of higher education on the individual's salary level. The individual graduate's expected income depends on the degree class, with this being estimated as a proportion of the average graduate income.
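In outline, the two parts of the model combine as in the short sketch below. The degree-class multipliers are those reported later in Table 2; the two average income figures passed in are hypothetical placeholders, not figures from the paper:

```python
# Value added per year, following equations (1) and (2): the graduate's
# expected income (class multiplier x average graduate income) minus the
# expected income of an employee educated to A-level standard.

# Expected graduate income by degree class relative to the graduate
# average (the proportions reported in Table 2).
class_multiplier = {
    "first": 1.20, "upper_second": 1.03, "lower_second": 0.97,
    "third": 0.92, "pass_other": 0.83,
}

def value_added_per_year(degree_class, avg_graduate_income, avg_a_level_income):
    """Equation (1), with the class-specific income from equation (2)."""
    graduate_income = class_multiplier[degree_class] * avg_graduate_income  # eq. (2)
    return graduate_income - avg_a_level_income                             # eq. (1)

# Hypothetical 1986-price incomes, for illustration only:
print(value_added_per_year("first", 12000, 9000))
print(value_added_per_year("lower_second", 12000, 9000))
```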

Our measurement of the relationship between degree class and income estimates the proportion of average graduate income which an individual obtaining a particular class of degree could expect to receive. This measurement is based on the Department of Employment study of 1980 graduates (Dolton et al., 1990), which reveals a significant difference in earnings for different degree classes. This evidence is supported by the survey by Gordon (1983), which showed a clear preference by employers for graduates with a high degree class. Fundamental to the value added approach is the assumption that those who achieve higher degree grades will have had greater value added to them, as they will be more productive. Marginal productivity theory suggests this will be reflected in the higher earnings for graduates who achieved higher classes. This assumption appears to be borne out by the evidence.

The Department of Employment survey data relate to the incomes of the 1980



Table 1. Earnings distribution and degree class by gender and institution

Columns (earnings at 1986 prices): First, Upper second, Lower second, Third, Other.
For each of four panels (males graduating from universities; females graduating from universities; males graduating from polytechnics; females graduating from polytechnics), the table reports the proportion earning less than £10 000, £10 000 but less than £20 000, and £20 000 and over, together with the mean for each degree class and the sample size for each column. (The numeric entries are not recoverable from this copy.)

cohort of graduates in 1986. These graduates had work experience ranging between < 1 and > 6 years, with the average level of experience being in the region of 5 years. This suggests to us that the earnings differences based on degree class are not just a reflection of initial starting salaries, but are rather a reflection of differentials in productivity levels which will be maintained throughout the graduate's working life. We use the Dolton et al. data to calculate the salary level achieved by graduates from each class of degree as a proportion of the graduate average. The sample used in this respect consisted of 4541 graduates, and it revealed that, for a given class of degree, there were wide variations in earnings based on gender and higher educational institution. This is shown in Table 1, which is adapted from Dolton et al. (1990, p. 21).

In our calculations of equation (3) below, we derive an average figure which takes account of gender and educational institution using Dolton's 1986 data and also data on the average number of graduates in each degree class by gender and higher educational institution for the period 1981-1988 (data from Tarsh, 1990).

This has enabled us to calculate the average income of each degree class relative to the overall average graduate income. The calculation of the average in each



degree class is weighted by the relative income level of each subcategory in the 1986 data. This is calculated as follows:

Average income of graduate with given degree class as a proportion of average graduate income = (Average graduate income for class of degree) ÷ (Average graduate income for 1986)   (3)

Average graduate income for 1986 = Average 1986 income of the 1980 cohort graduates   (4)

Average graduate income for class of degree = Σ over graduate types [(Number of 'graduate type' with given class of degree ÷ Total graduates with given class of degree) × (Salary of this 'graduate type' for this class of degree)]   (5)

Graduate type = male university, female university, male polytechnic and female polytechnic.
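The weighted sum over graduate types amounts to a share-weighted average salary for each class, which equation (3) then expresses as a proportion of the overall graduate average. A sketch with hypothetical counts and salaries (the published study draws the real figures from Dolton et al. and Tarsh):

```python
# Average income for one degree class as a weighted average over the four
# 'graduate types', then expressed as a proportion of the overall 1986
# graduate average (the ratio defined in equation (3)).
# All counts and salaries below are hypothetical, for illustration only.

# (number in this degree class, 1986 salary) for each graduate type:
first_class = {
    "male_university":    (400, 13000),
    "female_university":  (300, 11500),
    "male_polytechnic":   (200, 12000),
    "female_polytechnic": (100, 10500),
}

def class_average_income(by_type):
    """Salary of each graduate type weighted by its share of the class."""
    total = sum(n for n, _ in by_type.values())
    return sum((n / total) * salary for n, salary in by_type.values())

def relative_income(by_type, avg_graduate_income_1986):
    """Equation (3): class average as a proportion of the graduate average."""
    return class_average_income(by_type) / avg_graduate_income_1986

avg_1986 = 10400  # hypothetical average 1986 income of the 1980 cohort
print(round(relative_income(first_class, avg_1986), 3))
```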

From equation (3) the expected income achieved by each class of graduate as a proportion of average graduate income was derived (Table 2). These are the results we would anticipate, i.e. the higher the grade, the higher the relative income level. The premium associated with first class degrees is probably understated, due to the numbers of individuals attaining them who stay in education with the consequent lower salary levels.

The second part of the model considers the income of the graduate or equivalent relative to the income of an employee who achieved an A-level or equivalent education. The principal issue we need to address here is the problem of distinguishing between value added in education and value added on the job. We take two approaches for comparative purposes. The first approach, the results of which are shown in Table 3, is to consider the graduate and the A-level employee with a similar number of years of experience, on the assumption that they have then received equivalent levels of on-the-job training (this was discussed above). The second approach is to consider the salary levels of those entering the labour market for the first time, who therefore have no value added relating to job experience (i.e. the new graduate aged 21-22 years and the 18-year-old A-level employee).

The approach we take is to use data on employees with an assumed average of 3 years of experience. The General Household Survey (GHS; OPCS, 1986-1990)

Table 2. Expected graduate income by degree class relative to average graduate income

Degree class     Expected income
First class      1.20
Upper second     1.03
Lower second     0.97
Third            0.92
Pass/other       0.83



Table 3. Estimated value added to graduates by degree class, 1986-1991

Degree class    1991   1990   1989   1988   1987   1986   Average 1986-1991   Standard deviation 1986-1991
First           4430   4536   4367   4261   3946   3773   4219                271
Upper second    3808   3899   3753   3662   3389   3243   3625                234
Lower second    3588   3674   3537   3451   3193   3056   3415                220
Third           3405   3487   3356   3275   3030   2900   3242                209
Pass/other      3112   3178   3068   2993   2770   2651   2962                189
Average         3669   3755   3616   3528   3266   3125   3494                225

Monetary value of the pay differential of the graduate employee over the A-level-educated employee, in 1986 constant prices, calculated from equations (1)-(7).

provides information on earnings according to highest educational achievement, subdivided by age and gender. Graduates in the age group 20-29 will have a maximum of 8 years of experience. The average will, however, be lower than the middle value, due to the numbers of these individuals who do not immediately enter employment and those who undertake postgraduate study. Employees educated up to A-level standard who are in the age range 20-24 will have up to 6 years of experience, with the average being around 3 years. We make the assumption, therefore, that both groups will have approximately the same number of years of employment experience. We calculate average income figures as follows:

Average graduate income = (Average male earnings) × (Male graduate income as a % of average earnings) × [(% of graduates who are male) + (Female graduate earnings as a % of male earnings) × (% of graduates who are female)]   (6)

Average A-level employee income = (Average male earnings) × (A-level income as a % of average earnings) × [(% of A-level employees who are male) + (Female A-level earnings as a % of male earnings) × (% of A-level employees who are female)]   (7)
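The two weighted-average income expressions above share the same shape and can be written as one function of four inputs. All percentages and earnings figures in the example are hypothetical stand-ins, not values from the GHS:

```python
# Average income for an education group (graduates, or A-level-educated
# employees) as a male/female weighted average anchored on average male
# earnings. All inputs in the example below are hypothetical.

def group_average_income(avg_male_earnings, male_income_pct_of_avg,
                         female_earnings_pct_of_male, share_male):
    """Weighted average of male and female incomes for one education group."""
    male_income = avg_male_earnings * male_income_pct_of_avg
    female_income = male_income * female_earnings_pct_of_male
    return share_male * male_income + (1 - share_male) * female_income

# Hypothetical example: male graduates earn 130% of average male earnings,
# female graduates 85% of that, and 58% of graduates are male; A-level
# employees earn 100% of average male earnings, with an 80% female ratio
# and a 55% male share.
graduate_income = group_average_income(10000, 1.30, 0.85, 0.58)
a_level_income = group_average_income(10000, 1.00, 0.80, 0.55)
print(round(graduate_income - a_level_income, 2))  # annual pay differential
```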

The income calculations that are used for both the average graduate and average A-level workforce have been derived from the GHS. These data are in the form of median levels of earnings, as opposed to the mean values in the Department of Employment data. The data show significant differences between male and female earnings, and also between age groups. We consider that there is no justifiable reason why males and females with similar qualifications should be treated differently (the proportion of male and female graduates with each given class of degree is approximately the same for both groups). Consequently, it was decided to take the arithmetic average of the two groups, weighted by their relative numbers, to represent the average income levels for both groups. There was a small



fluctuation over time in both the relative proportions of male to female graduates and also female earnings as a proportion of male earnings. Therefore, the figures we have used are the arithmetic average over the period 1986-1991.

The treatment of different age groups was rather more problematical. It would be possible to compare the 'all ages' group for graduate and A-level employees. This, however, would ignore the existence of on-the-job training and would attribute all the value added to the education process. This is clearly not the case. To eliminate the on-the-job element of the value added, we have used data for employees with approximately 3 years of work experience (the graduates being, on average, 27-28 years of age and the A-level-educated employees approximately 22 years of age). We have assumed that the differential in income between the graduate and the A-level-educated employee is maintained over the employee's working life, but that only part of this difference is due to the value added to the graduate by higher education. The GHS shows that the differentials increase with age, and we put this down to the fact that the average graduate will have greater opportunities for on-the-job training than the average A-level-educated employee. To account for this, we take the differential for an age group with a limited amount of work experience and take this as our estimate of the differential throughout the graduate's working life. This means that not only can we produce an estimate of value added by the single year, but we can also extrapolate this over the graduate's working life.
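Holding the early-career differential constant, the extrapolation over the working life is a simple sum, and a discount rate can be added if a present value is wanted. The annual differential and career length in the example are hypothetical inputs, not figures from the paper:

```python
# Extrapolating the annual value added over the graduate's working life,
# holding the early-career differential constant as assumed in the text.
# The differential and working-life length are hypothetical inputs.

def lifetime_value_added(annual_differential, working_years, discount_rate=0.0):
    """Constant annual differential summed over the working life,
    optionally discounted back to the start of the career."""
    if discount_rate == 0.0:
        return annual_differential * working_years
    return sum(annual_differential / (1 + discount_rate) ** t
               for t in range(1, working_years + 1))

print(lifetime_value_added(3500, 40))                  # undiscounted total
print(round(lifetime_value_added(3500, 40, 0.05), 2))  # present value at 5%
```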

We sought to estimate the value added by higher education, using firstly the data set as outlined above. Secondly, for comparative purposes, we estimated value added when there was no prior work experience. We concluded, however, that the data used for the second estimate were unreliable, which we believe was mainly due to inadequacies in the data used for A-level-educated employees without work experience. Consequently, we do not report the results here, although we would hope to produce reliable estimates at a later date.

The Results

Table 3 shows the estimated value added to individuals who graduated and have an average of 3 years of employment experience, and where the average A-level- educated employee is also considered to have 3 years of experience. The series are calculated for periods from 1986 to 1991 and are shown in 1986 prices. The increase over time is due mainly to increases in the male average earnings over this time period.

Analysis of Results

In practice, there is a significant variation around the results shown in Table 3. The annual variation within each group is due principally to variations in the level of real average earnings over the period in question. The standard deviation across each class for the period 1986-1991 is, however, relatively small, with a coefficient of variation of approximately 6% for each class. Given that there appears to be little annual variation, we therefore believe it appropriate to use the 5-year average.
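The stability argument above rests on a coefficient-of-variation calculation, which can be sketched as follows; the series is a hypothetical stand-in for one Table 3 class, used only to show the arithmetic.

```python
import statistics

# Hypothetical annual value-added series for one degree class, 1986-1991
# (illustrative stand-in; the actual figures are those of Table 3).
series = [3100, 3000, 3250, 3300, 3150, 3400]

mean = statistics.mean(series)
cv = statistics.pstdev(series) / mean * 100  # coefficient of variation, %
print(round(mean), round(cv, 1))  # 3200 4.1
```

A small coefficient of variation, as here, is what justifies collapsing the annual series into a single period average.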

While the model that was developed, and the results obtained, reflect the aggregate situation, value added exercises are often presented for specific sub-groups. When this occurs, there are other considerations which have not been
addressed in our model, for example gender, academic discipline studied and the nature of the institution where the study was undertaken. In numerous instances, these considerations are interrelated: the choice of academic discipline is not random but is often gender-related, and certain 'vocational' courses are more likely to be found in the new rather than the old universities, or vice versa. The available survey figures (Dolton et al., 1990) reveal average income figures for each class of degree, and these mask major differences in salaries in terms of gender, academic discipline and type of institution.

With reference to gender, it was found that female graduates with qualifications equivalent to their male colleagues received significantly lower salaries. For example, in 1986, male graduates from a university with a first class degree could anticipate an average pay differential over the A-level employee of £3960, which was £365 higher than the average salary paid to a female first class graduate. Did this arise because the females had received less value added or were less productive? We believe that the answer is no. The difference in salaries is a reflection of both the academic disciplines followed by females and the nature of the employment they sought after graduation. In examining the issue of gender, it has been observed that a more than proportionate number of female graduates are found in the discipline areas of education and the arts and languages. While the overall proportion of male to female graduates is about 58% to 42%, in education and languages the ratios of female to male graduates are approximately 2.5:1 and 2:1 respectively (Tarsh, 1990). The disciplines studied by females lead to 'crowding' in certain occupations, with a corresponding depression of salaries, which is often reinforced by employment in the public sector, for example school-teaching. The result is that average female graduate salaries are consistently lower than those received by their male counterparts.

There are significant differences in the number of first class degrees awarded between different discipline areas (Tarsh, 1988), and this will be reflected in the different salary levels received by graduates. For example, where an academic discipline has an above average number of first class graduates, the average graduate starting salaries in that discipline area will be above the average. There are major differences between academic disciplines in terms of salary levels and the distribution of salaries. However, these salary differences may be only partially attributed to the differing distribution of awards. When vocational degrees, such as medicine and education, are compared, there are wide disparities in the salaries the two groups receive, disparities which cannot be explained in terms of the cost and length of study. Similarly, graduates of commerce-type degrees had mean salaries one-third higher than those received by arts and language graduates, and there is a much wider spread in the distribution as represented by the standard deviation. Likewise, graduates from the former polytechnic sector (both male and female) with a given class of degree consistently received significantly lower salaries than their university counterparts. A male polytechnic graduate with a first class honours degree awarded in 1986 could expect to receive a salary 10% below that of his university counterpart.

These differences related to gender, academic discipline and type of institution lead to further questions when considering how to 'value' the value added. Is there less value added to a female receiving a first class degree than to a male student who enters the same institution with the same entry qualification and also receives a first class degree in the same discipline? We suspect not. The value added will be the same, but the labour market places different values on a given level of value


Table 4. SIRR of graduates by degree class

Degree class    SIRR for graduates (%)
First           8.5
Upper second    7.5
Lower second    7.0
Third           6.5
Pass/other      6.0

added to different categories of graduate. We believe that it is incorrect in the context of this paper to discriminate between graduates on the basis of gender, academic discipline or institution, as these differences are more the result of the vagaries of the labour market than of productivity. Account is taken of these differences in our measurement of the 'average' graduate income for each degree class. The differences between male/female and polytechnic/university are taken into account by using the survey data for these individual categories, and then creating a weighted average based on the numbers of graduates produced at the national level in each category (this is based on a 1981-1989 average). The survey data are based on a sample across all subject areas, which is representative of national figures in terms of the numbers of graduates from each subject area. This should, therefore, be adequate for our measurement in terms of the income of the average graduate within each classification of degree. We would have liked to compare our estimate with others based on graduates with different levels of experience, and this is a potential area where the analysis could be extended.
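A minimal sketch of that weighting, assuming hypothetical incomes and graduate counts for the four male/female by polytechnic/university cells (none of these figures come from the paper or its sources):

```python
# Survey mean incomes by category are combined into a single 'average
# graduate' figure, weighted by national graduate numbers per category
# (the paper uses 1981-1989 averages). All numbers below are hypothetical.

incomes = {
    ("male", "university"): 14500,
    ("male", "polytechnic"): 13000,
    ("female", "university"): 13200,
    ("female", "polytechnic"): 12000,
}
grad_numbers = {
    ("male", "university"): 40000,
    ("male", "polytechnic"): 25000,
    ("female", "university"): 30000,
    ("female", "polytechnic"): 20000,
}

total = sum(grad_numbers.values())
weighted_income = sum(incomes[c] * grad_numbers[c] for c in incomes) / total
print(round(weighted_income))  # 13400
```

Weighting by national graduate numbers, rather than averaging the four cells equally, is what keeps the 'average graduate' representative of the actual output mix of the system.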

Value for Money?

The Social Internal Rate of Return (SIRR) is a measurement of return on investment, and can be used for measuring whether the taxpayer is getting 'value for money'. Psacharopoulos (1973) suggested that higher education produced a SIRR of 8% in the 1960s, and, in a 1985 paper, suggested a long run rate of around 7%. At the time, this was seen as exceeding the then Treasury Test Discount Rate of 5%. The SIRR value obtained will clearly depend upon the period over which education is deemed to affect skills and earning capacity. In calculating the SIRR, we make the assumption that the differential between graduate earnings and A-level employee earnings will be constant over the individual's working life. We further assume that the differential in earnings of graduates with different classes of degree will be constant over their working lives, and that the individual will retire at the age of 60. The average value added over the 5 years 1986-1991 (see Table 3) is used in our calculations.

The cost of education is taken as being the average government expenditure per student per year multiplied by the average number of years spent in education, plus the estimated income of the student made up of grant, parental loan or government loan (see the appendix for details). The results in Table 4 are shown in constant prices.
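Under the stated assumptions (costs incurred during the study years, a constant annual value added thereafter, retirement at 60), the SIRR is the internal rate of return of the resulting cash-flow stream. The sketch below uses hypothetical cost and value-added figures and a simple bisection root-finder; it illustrates the calculation only, not the Table 4 results.

```python
# SIRR sketch: three study years of costs (public funding per place plus
# student income/forgone earnings), then a constant annual value added
# from graduation until retirement at 60. All figures are hypothetical.

def npv(rate, cashflows):
    """Net present value of a yearly cash-flow list starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (NPV must change sign on [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

annual_cost = 9000         # hypothetical total cost per study year
annual_value_added = 3000  # hypothetical constant graduate differential
# Ages 18-20 in study, then value added each year from 21 up to 60.
cashflows = [-annual_cost] * 3 + [annual_value_added] * 39
print(f"SIRR = {irr(cashflows):.1%}")
```

Because the benefit stream is long and flat, the resulting rate is far more sensitive to the assumed annual differential than to the exact retirement age, which is why the paper anchors the differential on the 5-year Table 3 average.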

These results can be compared with those of Psacharopoulos (1973, 1985),
and would seem to concur with the view of Johnes (1993) that, despite the large increase in graduate numbers in the intervening years, the rate of return after eliminating cyclical variations appears remarkably stable over time. It should be noted that the rate of return does fluctuate with the economic cycle. However, the data used in our analysis relate to a period towards the middle of the cycle, and should, therefore, be fairly neutral in this respect.

Conclusions

The increasing cost of higher education within the total of public spending has made it inevitable that public policy-makers would seek assurances concerning the benefits arising from this expenditure. The value added approach is regarded as an appropriate performance indicator to meet the policy-makers' desires. We do not question the potential the value added approach offers: our concern is that the 'values' used in estimating value added should be realistic and based on the available evidence. The PCFC/CNAA study (McGeevor et al., 1990, p. 1) initially proposes the following scores for use in their index model:

Degree results are scored:

Pass = 2
Third = 4
Lower second = 6
Upper second = 8
First = 10

Subsequently, they suggested an alternative set of weights when developing the CVA approach (1990, p. 11), namely:

The scoring system developed in this project produces a value added score of +1 for a course where every graduate achieved one degree class higher than expected; conversely, a score of -1 would indicate a course where every graduate achieved one degree class less than expected.

This approach leads to a constant value for movements between classes; a student gaining a lower second degree rather than the anticipated third represents +1, and a student who fails to achieve the expected first but receives an upper second degree represents -1. We believe, however, that our approach provides a realistic reply to the criticisms of McGeevor et al.'s index models, given that the values we derive are determined in relation to income levels, and therefore to the differing levels of graduate productivity resulting from their exposure to higher education, rather than being arbitrarily set. We believe the results presented in Table 3 provide a realistic basis on which to determine relevant weights. From the monetary values so derived, we suggest the following weights attributable to different degree classes (see the appendix for details):

Pass = 2.00
Third = 2.20
Lower second = 2.30
Upper second = 2.45
First = 2.85

The relationship in the 'values', based on graduate salaries attributed to differing degree classes, is of a different order of magnitude to that used in the
PCFC/CNAA paper. The implications of these differing values for the subsequent estimates of value added will be considerable.

Further, the weights derived from Table 3 indicate that the positive value, based on graduate salaries, of moving from a third to a lower second is of the order of +0.05, while the value added loss incurred by a student failing to obtain an expected first would be -0.10. By reference to graduate salaries, it is observed that employers place significantly different values on the relationship between degree classes than those suggested within the academic community. We believe that our approach has other significant advantages over the CVA method. We actually seek to measure the value added to a graduate, as recorded in the labour market, unlike CVA, which is only able to undertake comparisons between similar groups. The CVA approach is unable to say whether the taxpayer is getting value for money from higher education as a whole. An institution which is shown to be below average using CVA may lose funds if this approach is used in the resource allocation process, but it may perform perfectly adequately using our measurement.

By way of a final conclusion, we note that 'value' has always been something which economists have had difficulty in both defining and measuring. To the classical economists, such as Smith, Ricardo and Marx, it was something of a philosopher's stone: something which they all thought should be measured, but which they all had great difficulty in actually measuring. Difficulties similar to those found by the classical economists are very much in evidence in the work we present above. It is impossible to have an exact measurement of value added in education because of the large number of variables involved and the difficulties of definition we have outlined above. We contend, however, that the approach developed in this paper does provide a useful benchmark as a performance indicator in the resource allocation process. It is better to have an imperfect or 'second best' measure than none at all.

References

Astin, W. & Ayala, F. (1987) Institutional strategies: a consortial approach to assessment, Educational Record, 68, pp. 17-51.
Blaug, M. (1965) The rate of return on investment in education in Great Britain, Manchester School of Economic and Social Studies, XXXIII, pp. 205-251.
Central Statistical Office (1989) Social Trends 19 (London, HMSO).
Central Statistical Office (1994) Social Trends 24 (London, HMSO).
Committee of Vice-Chancellors and Principals of Universities in the United Kingdom (1987) University Management Statistics and Performance Indicators: UK Universities (London, CVCP).
Dolton, P. J., Makepeace, G. H. & Inchley, G. D. (1990) The Early Careers of 1980 Graduates (London, Department of Employment Research, paper no. 78).
Fincher, C. (1985) What is value added education? Research in Higher Education, 22, pp. 395-398.
Frank, R. F. (1984) Are workers paid their marginal product? American Economic Review, 74, pp. 519-571.
Ghosh, D. & Mallier, T. (1992) Student finance: the contribution of the long vacation, Higher Education Review, 25, pp. 45-56.
Gordon, A. (1983) Attitudes of employers to the recruitment of graduates, Educational Studies, 9, pp. 45-64.
Hadley, T. & Winn, S. (1992) Measuring value added at course level: an exploratory study, Higher Education Review, 25, pp. 7-30.
Halpern, D. F. (1987) Student outcome assessment: introduction and overview, New Directions for Higher Education, 15, pp. 5-8.
Johnes, G. (1993) The Economics of Education (London, Macmillan).
McGeevor, P., Giles, C., Little, B., Head, P. & Brennen, J. (1990) The Measurement of Value Added in Higher Education, a joint PCFC/CNAA project (London, CNAA).
Medoff, J. L. & Abraham, K. G. (1981) Are those paid more really more productive? The case of experience, Journal of Human Resources, 16, pp. 186-216.
OPCS Social Survey Division (1986-1990) General Household Survey (London, HMSO).
Psacharopoulos, G. (1973) Returns to Education (Amsterdam, Elsevier).
Psacharopoulos, G. (1985) Returns to education: a further international update and implications, Journal of Human Resources, 20, pp. 585-604.
Tarsh, J. (1988) New graduate destinations and degree class, Employment Gazette, 96, pp. 394-413.
Tarsh, J. (1990) Graduate employment and degree class, Employment Gazette, 98, pp. 489-500.
Taylor, T. (1985) A value added assessment model: Northeast Missouri State University, Assessment and Evaluation in Higher Education, 10, pp. 190-202.
Witzel, M. L. (1991) The failure of an internal market: the Universities Funding Council bid system, Public Money and Management, 11, pp. 41-48.

Appendix

The following notes relate to the corresponding equations in the model.

Equation (1). The value added to individual graduates per year is defined in terms of the additional income they can expect to earn over and above what they would earn if their education ceased at A-levels. We differentiate graduate earnings in terms of class of degree. We make the assumption that, for those entering employment at A-level standard or equivalent (defined as one or more A-levels, one or more SCE Highers, or ONCs/ONDs), the A-level grades do not affect earnings capacity. When calculating average income levels for the graduate and A-level groups, we ensure that both groups have the same level of work experience.

Equation (2)-equation (5). The average income of a graduate with a given degree class as a proportion of average graduate income is calculated from the data provided in Dolton et al. (1990). These are cross-sectional data relating to 1980 cohort graduates in 1986. The average graduate income values are taken directly from this source. There is a clear relationship between degree class and income, but the data also show significant differences along the lines of gender and higher education institution. The average value for each class is therefore calculated by reference to the Dolton figures weighted by the number of graduates in each category. For this, we have taken the average numbers of graduates in each category for the period 1981-1989 in order to eliminate any year-on-year variation.

Equation (2) and equation (6). The average graduate income is calculated in relation to male average earnings. In equation (6), we consider graduates with approximately 3 years of work experience. Here, average male graduate earnings are adjusted to take account of the lower female graduate earnings levels. The weight of female earnings in relation to male earnings is taken as the average value between 1986 and 1990. Male graduate earnings as a percentage of average earnings is a fairly stable figure: across the results of the GHS for 1986-1990, the variance in the figure is minimal. We also take an average figure in this case. These averages have been taken to introduce an element of stability into the estimates.

Equation (2) and equation (7). The average income for the A-level-educated employee is determined in equation (7). In this equation, the average is calculated in relation to average male earnings and adjusted to take account of lower female earnings. As in the above case, average values are taken over the period 1986-1990. The weightings given to male and female earnings are the same as those given to graduates, in order to ensure that a direct comparison is made.

SIRR. The value of the SIRR depends on the assumptions we make in respect of the annual value added figures and also the cash outflows. We have based our estimate of the cost of higher education in 1986 on the student grant plus loan, the funds per place the university/polytechnic receives and the earnings forgone by the 18-year-old A-level-educated employee. The employee is assumed to work up to the age of 60, and, for the annual value added, we make the assumption that this will be constant over the graduate's working life. The monetary values are all in 1986 prices. This uses data from the Central Statistical Office (1989, 1994), Committee of Vice-Chancellors and Principals (1987) and Ghosh and Mallier (1992).

Weights of degree categories. The weights given to each category of degree in the conclusion are based on the average monetary values calculated in Table 3. The pass/other category is given a weight of 2, and the other categories are calculated relative to the monetary values in Table 3.
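That construction can be sketched directly. The monetary values below are hypothetical placeholders (Table 3 is not reproduced here), chosen so that the resulting weights reproduce those quoted in the conclusion:

```python
# Weight construction: pass/other is anchored at 2.00 and every other class
# is scaled by its average monetary value added relative to pass/other.
# The value_added figures are hypothetical stand-ins for Table 3.

value_added = {
    "pass/other": 2000,
    "third": 2200,
    "lower second": 2300,
    "upper second": 2450,
    "first": 2850,
}

base = value_added["pass/other"]
weights = {cls: round(2.0 * v / base, 2) for cls, v in value_added.items()}
print(weights)
# {'pass/other': 2.0, 'third': 2.2, 'lower second': 2.3, 'upper second': 2.45, 'first': 2.85}
```

Anchoring on pass/other means the weights inherit the labour-market ratios between classes rather than the equal steps of the PCFC/CNAA index.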
