"Assaying improvement"

Lee Harvey

Abstract

The presentation will examine the extent and nature of improvement in higher education: the purity of the enhanced silver. The interactive presentation will focus on transformative learning: on the learner and situated learning rather than abstract learning styles or teaching techniques, and link that to quality enhancement. Questions will be asked about the impact of improvement processes and a suggestion made as to how 20 years of quality assurance can be evaluated. The presentation will also critique rankings, arguing that they fail to encourage fundamental engagement with transformative learning.

Introduction

Assaying is defined as ‘the act or process of testing, especially of analysing or examining metals and ores, to determine the proportion of pure metal’. The presentation will examine the extent and nature of improvement in higher education: the purity of the enhanced silver. Currently, there is a lot of discussion about enhancement and much good practice goes on, ranging from classroom initiatives right through to system-wide quality assurance processes. But are we really improving anything or are we just appearing to?

Development of higher education

Let me suggest three views on the development of higher education. First, a positive view that says higher education has been confronted by a period of rapid change, probably the biggest upheaval in its history. The change is the result of external forces, including the need for a more educated workforce and mobility of workers, which has led to massification of higher education and consequent reductions in funding per student, more diversity in the system and even a change in status of those working in the sector. Through all of this, higher education has adapted and delivered, and continues to deliver, ever more research and innovative teaching. Institutions have taken on more and more responsibility. One key element in all this is quality assurance in higher education, which, although often involving additional work and costing significant amounts of money, has resulted in a more transparent education system and had an overall positive impact. Indeed, without quality assurance the whole higher education edifice, confronted by market forces, would have been in danger of crumbling.

Second, a rather more cynical view suggests that massification has not broadened access. Further, there has been an overall deterioration in higher education. In practice, the service offered to students has declined, with far less one-to-one contact, fewer teaching
hours, less time spent on providing valuable feedback, and large alienating classes, resulting in a lack of a feeling of belonging and consequent high drop-out rates, especially in the first year of undergraduate study. Students have a more fragmented experience of peers because of semesterised, modularised, ‘cafeteria’ choice systems and the need for students to spend increasing amounts of time working for money. So-called student-centred, autonomous learning is just an excuse for not teaching. Increasingly, staff spend more time on research, producing more and more papers of less and less worth because of pressures to publish, which have resulted in a plethora of mediocre journals. Increasingly, students are taking an instrumentalist approach and institutions are concerned with employment of graduates rather than developing graduates steeped in their discipline who want to learn for learning’s sake. Managerialism has run rife in higher education, with ‘professional’ managers, disengaged from academic practice, adopting aggressive tactics. Human resource, accounting and marketing departments run universities. Academic freedom is under attack. Quality assurance conceals what is really happening; it is a flimsy charade pretending that all is well. Institutional managers are concerned with their own careers and are more preoccupied with league tables and the other trappings of a competitive environment than with the academic development going on in their institution.

A third, middle way suggests that access has improved, albeit not as much as may be desired, but you cannot force people to go into higher education. Some aspects of the learning experience have changed, class contact time, for example, but as this was predominantly lectures, which are not good sites of learning, a more student-centred approach has encouraged learner autonomy. Teaching and learning techniques have improved considerably and modern technology means that information is readily available and that contact does not need to be face-to-face. Furthermore, what is expected of students and what students can expect are more transparent than twenty years ago and cross-fertilisation of ideas in a diverse and porous system is (potentially) greater. Professional managers are necessary in a fiercely competitive world and higher education can no longer operate with the inept bumbling academic leadership of the past. Nonetheless, the rumour of the demise of the power of academics is premature. Indeed, in most situations, institutions have more autonomy, tied to quality assurance. The rise of quality assurance has been an inevitable consequence of the changes faced by higher education. The effectiveness of quality assurance depends on its purposes and approaches and is a function of the level of motivation and engagement of all stakeholders. It is tempting to ask you which one best characterises your position (bearing in mind they are caricatures).

Improvement/enhancement

So what are we improving? The term enhancement has come to the fore but, while this might be the same as improvement, it might also imply modifying appearances: hence ‘shining the silver’, the title of this Forum. If we are improving, what are we doing in practice to improve teaching and research? Are we doing more than making them appear better? How do we know?

Quality assurance is, allegedly, a major tool for improvement. So what has quality assurance in higher education achieved?

The impact of quality assurance on the quality of higher education qualifications

Has nearly twenty years of quality assurance had any impact on the quality of degree programmes? Of course, in some countries, quality assurance, albeit by other names, has been around for a lot longer than 20 years. For example, the external examiner scheme has been in place since the university system in the UK began to expand more than a hundred years ago with the creation of the new civic universities. Other countries adopted an external examiner system well in advance of the ‘quality revolution’ that gathered momentum in the late 1980s. Also, in the UK, professional accreditation (of programmes) is long established in some areas, such as medicine, and is, in such cases, encapsulated in regulatory legislation. In the United States, regional and professional accreditation goes back to the start of the last century. Although it is hard to disentangle pre-existing arrangements from more recent quality mechanisms and processes, the question remains to be answered as to what impact the quality assurance activity of the last twenty years has had.

Impact is a difficult concept in itself. If it is meant to imply a simple cause-and-effect relationship then it is difficult to say that any perceived change in the quality of degrees is attributable to a quality assurance cause. There are direct impacts of quality assurance processes, such as the generation of course documents, audit reports and the like, but such document production does not necessarily and unambiguously translate into changes (hopefully improvements) in degree quality. An alternative ‘permeable layer’ approach suggests that activities of quality assurance agencies, combined with internal developments in institutions, situated in a broader market or governmental context, combine to change activities, perspectives and attitudes. Implementation of international, national, institutional or departmental policies occurs in ways unintended or unanticipated by those who generated the policy.

A further complication when exploring impact on ‘quality of degree’ is whether this refers to some abstract ‘gold standard’ notion of what an ideal-type degree should be or whether it relates to the student learning experience. There are those who would argue that, despite quality assurance, a degree is not what it used to be. Indeed, when a tiny proportion of (privileged) people attended university, the degree was exclusive and quite different from today. Whether the exclusivity made it any better (other than more marketable) is a moot point. The suggestion by advocates of a golden age is that when it was exclusive, the quality of an awarded degree was ‘higher’, implying that students did more in order to successfully achieve an award. This is highly contentious. The current
vice-chancellor of the University of Surrey, for example, noted recently:

I believe it can be argued that the quality of the corresponding top 13% (that is 13% of 43% [of the cohort going into higher education in England]) of graduates in the HE sector is at least as strong as it was for the full cohort in the 1970s. Many senior managers and government leaders experienced a more elite education system and may not fully appreciate the balance between quality and diversity we see today. The vast majority of the additional 30% who gain a HE experience today will benefit, raising the overall educational attainment of the nation's workforce significantly. (Snowden, 2008)

Nonetheless, the claims of ‘dumbing down’, correlated with ‘grade inflation’, resulting in a higher proportion of first and upper second-class degrees in the UK and similar phenomena in the US and elsewhere, do raise questions about comparable standards. However, standards of outcomes and quality of process are not the same thing. Recently, in the UK, there has been a revived furore about standards, sparked by comments from an academic that students are being given higher grades than they deserve because of the pressure of published ranking tables (Alderman and Brown, 2008; Shepherd, 2008; Lipsett, 2008a). This coincided with a call by the Quality Assurance Agency (QAA) for a closer look at the way degrees are assessed. The QAA study found that academic standards might be placed at risk as a consequence of weaknesses in how assessment boards work (Lipsett, 2008b). As usual, the British media took most of this out of context (Williams, 2008) and we had another example of the UK shooting itself in the foot over higher education. Employers are always quick to chime in and the usual whinges about graduates not being ‘fit for purpose’ (such as how few computer games graduates are up to industry standards (Lipsett, 2008c)) have been augmented by concerns about inconsistency in the standard of final degree classifications (Asthana, 2008). Nonetheless, employers continue to employ graduates and are predominantly happy with those they do employ.

Moving on from a ‘gold standard’ approach, there is a similar dilemma if one focuses on the student learning experience. There is some evidence in support of the cynical view about alienation of students and poor face-to-face contact with staff. However, teaching and learning techniques have improved (irrespective of the role of quality assurance) and there has been a change in the role of the teacher and in student-teacher communication (albeit not always welcomed by ‘traditional’ lecturers).

Having said this, and despite two decades of quality assurance, there is remarkably little substantive research on the impact of quality assurance in higher education. It is not clear, apart from the costs, why there is a paucity of other than anecdotal evidence. Maybe the exploration of impact relies too much on positivist approaches seeking naïve cause and effect. Phenomenological or dialectical approaches seem likely to bear more fruit, as, for example, in the critical close-up studies of academics’ engagement with quality issues (Newton, 2000). Space precludes a detailed analysis of the available research on impact and reference will be made to two fairly recent analyses, one deriving from the quality assurance agencies themselves and another from a recent examination by Bjorn Stensaker at the European University Association Quality Forum.

At a conference a couple of years back, under the auspices of the International Network of Quality Assurance Agencies in Higher Education, quality agency delegates maintained that, despite there being no simple causal link, external quality assurance had a significant impact, including on the teaching and learning situation (Harvey, 2006). The agencies identified several aspects: external quality assurance placed a requirement on institutions to take responsibility for students enrolled, reflected in a growing concern over attrition. There have been demonstrable curriculum adjustments and the growth of course evaluations, appeals and complaints procedures. In addition, agencies claim, standards have improved and there are plenty of examples of better ways of teaching. Quality assurance ‘legitimises the discussion of teaching’; it is no longer acceptable to regard teaching as a private domain.

Stensaker (2007, p. 60) also eschewed a simple direct impact of quality assurance and argued that the impacts of quality can be interpreted quite differently depending on the point of departure. He identified four areas of impact: power, professionalism, permeability and public relations. On the first, he noted that quality processes support the development of a stronger institutional leadership in higher education, evidenced through increasing centralisation of information and much clearer lines of responsibility, albeit removing some of the past responsibilities of the individual academic. Second, quality work has become more professional, with ‘written routines, scripts and rule-driven handbooks providing hints of when to do what, and the persons in charge’. Some just see this as increased bureaucracy while others regard it as making ‘tacit knowledge’ transparent. Third, Stensaker argued that quality assurance has led to a proliferation of information and that we probably know more about higher education than ever before. Fourth, on the PR front, ‘quality processes are used as a marketing and branding tool’. He also argued that quality processes are ‘of assistance as a way to defend the sector against the many poorly developed, unfair or unbalanced ranking and performance indicators systems which these days sweep over the world’ (Stensaker, 2007, p. 61). I’ll come back to rankings.

So, quality assurance has, arguably, had an impact. Whether it has enhanced the student experience of learning is still unclear. Let’s take another tack on this.

Transformative learning

In the early 1990s, following the analysis of quality as a concept and rooted in my critical social research methodological heritage (Harvey, 1990), I was hooked on the idea of learning as a transformative process and wrote various things culminating in the publication, in 1996, of Transforming Higher Education, co-written with Peter Knight, who sadly is no longer with us (Harvey and Knight, 1996). Peter and I had planned to update the 1996 volume, a decade on, and my last communication with him was to postpone a meeting to initiate the rewrite. For me, the principles of transformative learning in that book have guided and underpinned all the work I’ve done on quality, employability and student feedback.

Transformative learning starts from the premise that the student is a participant in an educative process. Students are not products, customers, consumers, service users or clients. Education is not a service for a customer (much less a product to be consumed) but an ongoing process of transformation of the participant. There are two elements of transformative quality in education: enhancing the participant and empowering the participant. A quality education is one that effects changes in the participants and, thereby, enhances them. A high-quality institution would be one that greatly enhances its students (Astin, 1990). However, enhancement is not itself transformative, any more than shining the silver transforms the substance of the silver object.

The second element of transformative quality is empowerment. Empowering students involves giving power to participants to influence their own transformation. It involves students taking ownership of the learning process. Furthermore, the transformation process itself provides the opportunity for self-empowerment, through increased confidence and self-awareness. In essence, empowerment is about developing students’ critical ability, that is, their ability to think and act in a way that transcends taken-for-granted preconceptions, prejudices and frames of reference. Critical thinking is not to be confused with ‘criticism’, especially the common-sense notion of negative criticism. Developing critical thinking involves getting students to question the established orthodoxy and learn to justify their opinions. Students are encouraged to think about knowledge as a process in which they are engaged, not some ‘thing’ they tentatively approach and selectively appropriate. Developing critical ability is about students having the confidence to assess and develop knowledge for themselves rather than submitting packaged chunks to an assessor who will tell them if it is sufficient or ‘correct’. A critical ability enables students to self-assess, to be able to decide what is good quality work and to be confident when they have achieved it.

In short, an approach that encourages critical ability treats students as intellectual performers rather than as a compliant audience. It transforms teaching and learning into an active process of coming to understand. It enables students to go easily beyond the narrow confines of the ‘safe’ knowledge base of their academic discipline and to apply themselves to whatever they encounter in the post-education world. This is empowering for life and requires an approach to teaching and learning that goes beyond requiring students to learn a body of knowledge and be able to apply it analytically. Higher education is about more than just producing skilled acolytes, important though that undoubtedly is. It is also about producing people who can lead, who can produce new knowledge, who can see new problems and imagine new ways of approaching old problems. Higher education has a role to prepare people to go beyond the present and to be able to respond to a future that cannot now be imagined. To go
beyond the givens: people who can draw upon a variety of explanatory frameworks and who can also stand outside them to the extent of recognising their limitations and the degree to which any framework limits, as well as enables, thinking and feeling. Developing critical ability involves encouraging students to challenge preconceptions: their own, their peers’ and their teachers’. It is about the process of coming to understand. This form of empowerment is at the heart of the dialectical process of critical transformation.

Critical transformative action involves getting to the heart of an issue while simultaneously setting it in its wider context. It is a matter of conceptually shuttling backwards and forwards between what the learner already knows and what the learner is finding out, between the specific detail and its broader significance, and between practice and reflection. Transformative learning involves a process of deconstruction and reconstruction. Abstract concepts need to be made concrete and a core or essential concept identified as a pivot for deconstructive activity. Deconstruction gets beneath surface appearances, be they traditional modes of working, taken-for-granted attitudes, embedded values, prevailing myths, ideology or ‘well-known’ facts. The core concept is used to ‘lever open’ the area of investigation. That is, the relationship between the core concept and the area of enquiry is investigated at both an abstract and a concrete level to explore whether underlying abstract presuppositions conflict with concrete reality. Not all concepts will provide a suitable lever; indeed, critical reflective activity involves a constant process of exploration and reflection until the appropriate lever is located. It is like trying to lever the lid off a tin by using and discarding a number of likely tools until one does the job. Then it’s time to sort out the contents.

This is what Karl Marx (1867) did, of course, in his deconstruction of relations of production in Das Kapital. The conventional economic view of supply and demand did not explain the impoverished situation of 19th-century factory workers and he deconstructed the notion of profit to one of commodification. His pivotal concept was that of the commodity form, which, when applied to the labour of the workers themselves, provided an understanding of exploitation. Similarly, a century later, Christine Delphy’s (1984) analysis of housework, amidst the feminist arguments that housework should be a paid activity, shifted the concept of housework from a set of tasks to a relation of production within the household unit. Housework is unremunerated work done by one family member for another. To see it as a set of tasks reflects a patriarchal ideology that conceals the actual nature of the exploitative relationship.

Transformative learning is not just about adding to a student’s stock of knowledge or set of skills and abilities. At its core, transformation, in an educational sense, refers to the evolution of the way students approach the acquisition of knowledge and skills and relate them to a wider context.

Page 8: 5. Lee harvey 2008

This approach to learning also gives priority to the transformative notion of quality [1]. A critical transformative view asks how quality assurance helps transform the conceptual ability and self-awareness of the student. So, for me, the criterion for evaluating the changes in higher education and the impact of quality assurance is: “has transformative learning been enabled or encumbered?” The jury is still out on this but I’m optimistic that new approaches that reinstate trust and focus on student learning, such as in Scotland, will impact on transformative learning. But there is a monster lurking in the wings.

Rankings

I said I’d come back to ranking lists (league tables, as we call them in the UK). What’s good about them? They are easy to use. What’s bad about them? Almost everything else. Ranking of higher education institutions has been with us for some time but it has only recently come to the fore, not least because of the increasing marketisation of higher education and the competition for international students.

Purposes

The purposes of rankings include:
• providing easily interpretable information on the standing of higher education institutions;
• stimulating competition among institutions;
• differentiating different types of institutions and programmes;
• contributing to national quality evaluations;
• contributing to debates about the definition of ‘quality’ in higher education (CHE/CEPES/IHEP, 2006).

However, the link to quality is naïve. The construction of indices by which institutions or departments are ranked is arbitrary, inconsistent and based on convenience measures. The operationalisation of the concept of quality is cursory at best and nearly all rankings focus on just one notion of quality, that of excellence.

[1] The transformative conception of quality is a meta-quality concept. Other concepts, such as excellence, consistency, fitness for purpose and value for money, are possible operationalisations of the transformative process rather than ends in themselves (Harvey, 1994, p. 51; Harvey and Knight, 1996, pp. 14–15). They are, though, inadequate operationalisations, often dealing only with marginal aspects of transformative quality and failing to encapsulate the dialectical process.


Critiques of rankings

Rankings are not usually based on a single indicator but are compiled using a set of indicators that are combined into a single index. To be valid, the index should be the operationalisation of a theoretical concept: what the ranking is supposed to be comparing. The classic approach (Lazarsfeld et al., 1972) has six stages. First, a clear statement and definition of what the concept is that is being measured. Second, identification of the different dimensions of the theoretical concept. Third, specification of sets of indicators that represent aspects of each dimension. Fourth, selection of an appropriate indicator or suite of indicators from the range of possible indicators. Fifth, weighting of each indicator within each dimension on the basis of a theoretically sound understanding of the importance of the indicator as a measure of the dimension. Sixth, a weighting of each dimension of the concept and the combining of the dimensions into a single index (Harvey, 2008).

Critiques of rankings come from an array of sources and are based on methodological, pragmatic, moral and philosophical concerns. They include:

Selection of indicators: there is little evidence that this involves theoretical reflection; rather, rankings are based on convenient data. Thus, difficult concepts such as teaching quality are excluded, even when relevant (Van Dyke, 2005). This results in considerable bias:

Publication counts often stress established refereed journals … published in English and selected with the norms of the major academic systems of the United States and Britain in mind. … Using international recognition such as Nobel Prizes as a proxy for excellence downplays the social sciences and humanities, fields in which Nobels are not awarded, and causes further disadvantages for developing countries and smaller universities around the world. Using citation counts … emphasize material in English and journals that are readily available in the larger academic systems. It is well known, for example, that American scientists mainly cite other Americans and tend to ignore scholarship from other countries. This may artificially boost the ranking of US universities. … It is also the case that universities with medical schools and strength in the hard sciences generally have a significant advantage because these fields generate more external funding, and researchers in them publish more articles. (Altbach, 2006)

Weighting of indicators: there is no evidence of theoretically underpinned weightings; in practice they are assigned arbitrarily (Usher and Savino, 2006). There is also no consistency in weights between different ranking systems (Stella and Woodhouse, 2006).

Reliability: newspaper publishers may like the drama of significant changes in league position year on year but this hardly accurately reflects real changes in institutions, which are rarely so dramatic as to be noticeable within 12 months. Yet in the THES ranking, Osaka, Japan, went from position 69 in 2004 to 105 in 2005 and back to 70 in 2006. Ecole Polytechnique, France, moved from position 27 in 2004 to 10 in 2005 and to 37 in 2006. The University of Geneva went from not being ranked in 2004, to position 88 in 2005, to position 39 in 2006 (RFSU, 2008).

Statistical insignificance: rankings place institutions in order, often based on minute, statistically insignificant, differences in scores. The UK National Student Survey ranks institutions despite the fact that the standard error of the measures encompasses a large proportion of the listed institutions. One specialist institution, for example, dropped 60 places from one year to the next.

Institutional, political and cultural environment: the context affects how institutions operate and what they can do. These contexts need to be borne in mind but this rarely happens when compiling international ranking lists.

Focus of attention: rankings of institutions treat them as homogeneous when there is huge variability within institutions.

Competition: ranking exacerbates competition between institutions, resulting in less cooperation, which can be detrimental to the student and the institution as well as higher education in general.

Impact of ranking systems on higher education

Rankings have had an impact far beyond that which their arbitrary design would warrant. That is why they are important and, in their current form, dangerous. A ranking position in a league table is a statistic easily bandied about by politicians and university senior managers, as well as by teachers and union representatives when it suits.

Institutions

According to Thakur (2007, p. 89), there is evidence that ranking systems have impacted on higher education institutions and their stakeholders. For example, one of the top universities in Malaysia, the University of Malaya:

dropped 80 places in the THES rankings without any decline in its real performance due to definitional changes. This resulted in a replacement of the Vice-Chancellor and embarrassed the university, which claimed in an advertisement two months shy of the 2005 THES results, that it strived to be among the 50 best universities by 2020.

Most impacts at the institutional level are rather less dramatic than this example. Many institutions, directly or indirectly influenced by ranking systems, have developed mission statements to become one of the best or top-ranked universities. This means the loss of freedom and independence for institutions to control their brand and the terms of their success (Carey, 2006).

Employers

There is evidence that employers use ranking lists as part of the selection of graduate recruits. British graduate recruiters, for example, have an implicit hierarchy of UK institutions and in extreme cases will only recruit from certain universities. They have also had a notional set of preferences for international institutions, which the advent of global ranking lists has reinforced.


Equity and access

Rankings pose a threat to equity as they encourage recruitment of select students (Clarke, undated). van Vught (2007) has likewise argued that competition between institutions world-wide has increased the ‘reputation race’, which is costly for institutions. A significant consequence, in some countries such as the US, is that advantaged students are able to access the high-reputation institutions while less advantaged students miss out.

Examples of rankings

Rankings are both international and national, the former trying to rank institutions on a world-wide or continental basis. Most rankings, national or international, are of institutions, even if sometimes of a subset of institutions, such as business schools. A few rankings, usually on a national basis, attempt rankings on a subject basis. Some rankings are based on research indicators, some on teaching, some on the institutional web site; some attempt to be comprehensive, some rely on student evaluations. In most cases, the rankings are ‘unofficial’, commissioned and published by commercial organisations. Some unofficial rankings exercises are conducted by units within universities.

The Shanghai Academic Ranking of World Universities

The Institute of Higher Education, Shanghai Jiao Tong University, undertook an exercise to find out the gap between Chinese universities and world-class research universities. They published the Academic Ranking of World Universities (ARWU) and were surprised by the extent of the interest. Institutions are ranked on the basis of how many alumni and staff win Nobel Prizes and Fields Medals, the number of Highly Cited Researchers in twenty-one broad subject categories, articles published in Nature and Science, and articles indexed in the Science Citation Index-Expanded (SCIE) and Social Science Citation Index (SSCI) (Liu and Cheng, 2005). The criteria are very particular and specific to research output, with no reference to teaching other than the subsequent research success of alumni. However, it is an odd operationalisation of university research excellence. Apart from being highly skewed, its construction and weighting are idiosyncratic. An analogy would be awarding points to football clubs every time their players turn out for their national team. That this ingenious but contrived ranking should have world-wide currency is baffling. The status afforded the Shanghai ranking suggests that once a ‘league table’ is published, people stop looking at its basis and take the statistical data as somehow objective, especially if the rankings reinforce the status quo prejudices and preconceptions about reputation.
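
To make the skew concrete, here is a minimal sketch in Python. The figures and the equal weights are invented, not the actual ARWU data or weighting; the point is only that a composite built from raw research counts favours a large research-intensive institution over a small, possibly excellent, specialist one, and that a per-capita construction tells a different story.

```python
# Toy, ARWU-style composite built from raw research counts.
# All figures and the equal weights below are invented for illustration only.

institutions = {
    # name: (prizes_and_medals, highly_cited_staff, nature_science_papers,
    #        indexed_articles, academic_staff)
    "Large Research University": (4, 30, 120, 9000, 4000),
    "Small Specialist Institute": (1, 6, 25, 900, 350),
}

weights = (0.25, 0.25, 0.25, 0.25)  # hypothetical equal weights for the four counts

def composite(data, per_capita=False):
    """Weighted sum of the four output indicators, optionally per member of staff."""
    counts, staff = data[:4], data[4]
    values = [c / staff for c in counts] if per_capita else list(counts)
    return sum(w * v for w, v in zip(weights, values))

for name, data in institutions.items():
    print(name,
          "raw:", round(composite(data), 1),
          "per capita:", round(composite(data, per_capita=True), 3))
```

On the raw-count construction the large university scores roughly ten times higher; on the per-capita construction the small institute comes out ahead. Which of these is ‘excellence’ is precisely the kind of theoretical question that compilers of rankings rarely answer.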


Times Higher Education Supplement World University Ranking (THES)

The World University Ranking of the Times Higher Education Supplement (THES) lists 200 universities based on five qualitative and quantitative indicators. Peer review was used: 5000 academics from five continents, regarded as experts in their field, were asked to nominate leading universities in their fields of expertise. Initially, this peer review accounted for 50% of the total number of points for each university, which dropped to 40% in 2005, when an employer survey was introduced. The employer survey, of 1471 recruiters at international corporations, asked respondents to identify the 20 best universities from which they prefer to recruit graduates. The THES listing has been widely criticised but also widely used. As with the Shanghai ranking, the component parts of the THES ranking are idiosyncratic and the weighting is arbitrary, with no evident basis in theory. The main concern of critics tends to be the peer review judgements of reputation.

For its part, the THES ranking list relies heavily on subjective evaluations by experts. How valid the latter are and how well they represent the institutions covered are important questions that are left unanswered. (RFSU, 2008)
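
The arbitrariness of the weighting can also be illustrated with a small, hedged sketch: the institutions, scores and weights below are entirely invented, and the second weighting scheme merely mimics the kind of adjustment made when the peer-review weight fell from 50% to 40% and an employer survey was added. The point is only that the league-table order can change even though the underlying data do not.

```python
# A deliberately toy illustration of weight sensitivity in composite rankings.
# All institutions, indicator scores and weights are invented.

scores = {
    "University A": {"peer_review": 95, "citations": 70, "employer": 50},
    "University B": {"peer_review": 72, "citations": 88, "employer": 88},
    "University C": {"peer_review": 80, "citations": 76, "employer": 68},
}

def league_table(weights):
    """Order institutions by the weighted sum of their indicator scores."""
    totals = {
        name: sum(weights[indicator] * value for indicator, value in inds.items())
        for name, inds in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# 'Old' scheme: the reputation survey carries half the weight, no employer survey.
print(league_table({"peer_review": 0.5, "citations": 0.5, "employer": 0.0}))
# 'New' scheme: reputation cut to 40% and an employer survey introduced at 10%.
print(league_table({"peer_review": 0.4, "citations": 0.5, "employer": 0.1}))
# Identical underlying data, different rank order.
```

With the first scheme University A tops the table; with the second, University B does. Readers of the published table see movement, yet nothing about the institutions themselves has changed, only the weights.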

Although alluding to the THES survey, Stella and Woodhouse (2006, pp. 8–9) raise some general objections to the use of expert opinion, including ‘halo effect’, inadequate knowledge bases and academic jealousy, and conclude that little scientific justification exists for continuing to rely on expert opinion. This is somewhat ironic coming from a quality assurance agency that uses peer review.

Controlling the ranking phenomenon

In response to the growing consumer demand, UNESCO initiated the International Ranking Expert Group (IREG), which adopted the Berlin Principles on Ranking of Higher Education Institutions (CHE/CEPES/IHEP, 2006) in order to promote a system of continuous improvement of rankings (IHEP, 2008). The Principles are a set of normative statements designed to ensure that ‘those producing rankings and league tables hold themselves accountable for quality in their own data collection, methodology, and dissemination’.

Some universities have become so concerned about rankings that they have refused to participate. For example, in 1999, the University of Tokyo and 19 Mainland Chinese universities, including Peking University, as well as 15 other institutions, declined to provide data to Asiaweek magazine for its annual ranking of universities in the Asia-Pacific region. Asiaweek (1999) abandoned its ranking shortly thereafter. Recently, a group of eleven institutions in Canada indicated that they would no longer participate in Maclean’s magazine annual ranking of universities in that country (Birchard, 2006). Maclean’s response was that it would continue to rank these institutions using data from other public sources; thus the willing participation of institutions is no longer necessary.


Stakeholder concerns

Despite the Berlin Principles, there are widespread concerns about the ranking of higher education from a range of stakeholders. An informal ministerial meeting in Tokyo in 2008 concluded that the ‘bias in the information base of existing rankings towards research outcomes could detract from efforts to improve educational performance’ (OECD, 2008). Stella and Woodhouse (2006) of the Australian Universities Quality Agency (AUQA) argued that ranking contravenes a fitness-for-purpose approach, which, at least in theory, is at the heart of most quality agency approaches. Ranking judges against a set of generic criteria, which are harmful to institutional diversity. The European Students’ Union (ESU, 2006) objects to the elitism generated by rankings and claims that they do not really inform students. A meeting of the Higher Education and Research Standing Committee in Oslo objected to ranking because it is ‘irreconcilable with the principle of equity and the educational missions which meets the different aspirations of students’, ignores cultural context, increases marketisation and, most importantly, stated:

The pressures of the outcome of ranking systems also deviates the attention of leaders of higher education institutions from the students and the genuine purpose and mission of higher education, enhancing competition between institutions. In this sense, there is a real risk that higher education institutions focus on efforts to climb up the ranking ladder, ignoring their mission in developing and disseminating knowledge for the advancement of society. Furthermore, ranking places too much emphasis on institutions and ignores study programmes. (Education International, 2006, p.1)

Threat

In conclusion, rankings provide a real threat to quality processes. The simplistic processes and idiosyncratic construction of league tables appear to have more popular appeal, and even credibility, amongst a range of stakeholders, including political decision makers, than the meticulous hard work of quality agencies. Among the many detrimental consequences associated with university rankings, the fundamental problem is that they do not appear to ‘reward’ teaching. In particular, contrary to some recent developments in quality assurance that focus on enhancing the student experience of learning, rankings place a potential brake on the development of critical transformed learners. Developing a critical education is not a way to move up league tables. To take a football analogy, there are no points won from having a good youth policy. The points come from the stars on the pitch. We all know, though, that without a good youth policy, clubs have to buy in the future stars or, if they don’t have the money, they start to decline. The point, though, is that an over-concern to move up ranking tables will lead to
a focus on those aspects that get included in the ranking methodology; in the main, these do not include developing transformative learning.

References

Alderman, G. and Brown, R., 2008, ‘Academic fraud: how to solve the problem’, Education Guardian, 26 June 2008, available at http://www.guardian.co.uk/education/2008/jun/26/highereducation.uk3, accessed 1 August 2008.

Altbach, P.G., 2006, ‘The Dilemmas of Ranking’, Bridges, 12, 14 December, available at http://www.ostina.org/content/view/1669/638/, accessed 3 August 2008.

Asiaweek, 1999, ‘Missing your school?’, available at http://cgi.cnn.com/ASIANOW/asiaweek/universities/missing.html, accessed 6 August 2008.

Asthana, A., 2008, ‘Bosses question value of degrees as they search for talented recruits’, Observer, 6 July, 2008, available at http://www.guardian.co.uk/education/2008/jul/06/highereducation.uk, accessed 8 August, 2008.

Astin, A. W., 1990, ‘Assessment as a tool for institutional renewal and reform’, in American Association for Higher Education Assessment Forum, 1990, Assessment 1990: Accreditation and Renewal, AAHE, Washington, D.C., pp. 19-33.

Birchard, K., 2006, ‘A group of Canadian universities says it will boycott a popular magazine ranking’, The Chronicle of Higher Education [Online]. http://chronicle.com/daily/2006/08/2006081506n.htm, accessed 8 August 2008.

Carey, K., 2006, College Rankings Reformed: The Case for a New Order in Higher Education. Washington, DC: Education Sector Reports, 19.

Centre for Higher Education Development (CHE), UNESCO-European Centre for Higher Education (CEPES), Institute for Higher Education Policy (IHEP), 2006, Berlin Principles on Ranking of Higher Education Institutions, Berlin, 20 May 2006, www.che.de/downloads/Berlin_Principles_IREG_534.pdf (also available at the web sites of CEPES and IHEP)

Clarke, M., undated, ‘University rankings and their impact on students’ http://nust.edu.pk/general/University%20rankings%20and%20their%20impact%20on%20students472.doc accessed, 5 August 2008.

Delphy, C., 1984, Close to Home: A Materialist Analysis of Women's Oppression. London, Hutchinson.

Education International, 2006, Higher Education and Research Standing Committee, Oslo, September 2006, Ranking of Higher Education Institutions available as http://www.ei-ie.org/highereducation/file/(2006)%20Statement%20of%20the%20Higher%20Education%20and%20Research%20Standing%20Committee%20on%20the%20Ranking%20of%20Higher%20Education%20Institutions%20en.pdf, accessed 4 August 2008.

European Students’ Union (ESU), 2008, Policy Paper ‘Excellence’, 20 May 2008, http://www.esib.org/index.php/documents/policy-papers/187-policypapers/333-policy-paper-qexcellenceq?tmpl=component&print=1&page=, accessed 3 August 2008.


Harvey, L. and Knight, P., 1996, Transforming Higher Education. Ballmoor, SHRE/Open University Press.

Harvey, L., 1994, ‘Continuous quality improvement: A system-wide view of quality in higher education’, in Knight, P. T., (ed.) 1994, University-Wide Change, Staff and Curriculum Development, Staff and Educational Development Association, SEDA Paper, 83, May 1994, pp. 47–70.

Harvey, L., 2006, ‘Impact of Quality Assurance: Overview of a discussion between representatives of external quality assurance agencies’, Quality in Higher Education, 12(3), p. 287.

Harvey, L., 2008, ‘Ranking of higher education institutions: a critique’, forthcoming.

Institute for Higher Education Policy (IHEP), 2008, IHEP's Efforts to Improve Ranking Systems, http://www.ihep.org/Research/rankingsystemsresources.cfm, accessed 5 August 2008.

Lazarsfeld, P.F., Pasanella, A.K., Rosenberg, M., 1972, Continuities in the Language of Social Research, New York, Free Press.

Lipsett, A., 2008a, ‘Academics urge inquiry into claims of fraud’, Education Guardian, 26 June, 2008, available at http://www.guardian.co.uk/education/2008/jun/26/highereducation.uk1, accessed, 1 August, 2008.

Lipsett, A., 2008b, ‘UK degrees are 'arbitrary and unreliable', says watchdog’, Education Guardian, 24 June, 2008, available at http://www.guardian.co.uk/education/2008/jun/24/highereducation.uk3, accessed, 1 August, 2008.

Lipsett, A., 2008c, ‘Video games degrees: 95% fail to hit skills target’, Education Guardian, 26 June, 2008, available at http://www.guardian.co.uk/education/2008/jun/24/highereducation.uk2 accessed, 1 August, 2008.

Liu, N.C. and Cheng, Y., 2005, ‘Academic Ranking of World Universities – Methodologies and Problems’, Higher Education in Europe, 30(2).

Marx, K., 1867, Das Kapital: Kritik der politischen Oekonomie. Hamburg, Verlag von Otto Meisner. (First English Edition (4th German edition with changes) 1887. Published by Progress Publishers, Moscow. Available at http://www.marxists.org/archive/marx/works/1867-c1/index.htm, accessed, 3 August, 2008.)

Newton, J., 2000, ‘Feeding the beast or improving quality?: academics’ perceptions of quality assurance and quality monitoring’, Quality in Higher Education, 6(2), pp. 153–62.

Organisation for Economic Co-operation and Development (OECD), Directorate for Education 2008, Informal OECD Ministerial Meeting on evaluating the outcomes of Higher Education, Tokyo, 11-12 January 2008, Chaired by Kisaburo Tokai, Minister for Education, Culture, Sports, Science and Technology, Japan at http://www.oecd.org/document/45/0,3343,en_2649_33723_39903213_1_1_1_1,00.html, accessed 3 August 2008.

Ranking Forum of Swiss Universities (RFSU), 2008, Introduction: University rankings based on football league tables, August 2008, http://www.universityrankings.ch/en/on_rankings/introduction, accessed 4 August 2008.

Shepherd, J., 2008, ‘Universities accused of awarding undeserved marks’, Education Guardian, 17 June 2008, available at http://www.guardian.co.uk/education/2008/jun/17/highereducation.uk4, accessed 1 August 2008.

Snowden, C., 2008, ‘We cannot rely on industry to develop our graduates’, Guardian, 24 June 2008, available at http://www.guardian.co.uk/education/2008/jun/24/highereducation.uk, accessed 1 August 2008.

Stella, A. and Woodhouse, D., 2006, Ranking of Higher Education Institutions. Occasional Publications Series No: 6, Melbourne, AUQA., http://www.auqa.edu.au/files/publications/ranking_of_higher_education_institutions_final.pdf, accessed 6 August 2008.

Stensaker, B., 2007, ‘Impact of quality processes’ in Bollaert, L., Brus, S., Curvale, B., Harvey, L., Helle, E., Jensen, H.T., Komljenovic, J., Orphanides, A. and Sursock, A., (Eds.), 2007, Embedding Quality Culture In Higher Education: A Selection Of Papers From The 1st European Forum For Quality Assurance, pp. 59–62, Brussels, European University Association. Available at www.eua.be/fileadmin/user_upload/files/Publications/EUA_QA_Forum_publication.pdf, accessed 1 May 2008.

Thakur, M., 2007, ‘The impact of ranking systems on higher education and its stakeholders’, Journal of Institutional Research 13(1), 83–96.

Times Higher Educational Supplement (THES) 2007, World University Rankings, 9 November 2007. http://www.timeshighereducation.co.uk/hybrid.asp?typeCode=142&pubCode=1&navcode=118

Usher, A., & Savino, M., 2006, A World of Difference: A global survey of university league tables. January, Canadian Education Report Series, Toronto, Educational Policy Institute, http://www.educationalpolicy.org/pdf/World-of-Difference-200602162.pdf accessed 6 August 2008.

Van Dyke, N., 2005, ‘Twenty years of university report cards’, Higher Education in Europe, 30(2).

van Vught, F., 2007, ‘Diversity and Differentiation in Higher Education Systems’, presentation at the CHET anniversary conference, Cape Town, 16 November 2007, http://www.universityworldnews.com/filemgmt_data/files/Frans-van-Vucht.pdf, accessed 6 August, 2008.

Williams, P., 2008, ‘Quality: easy to say, harder to put into practice’, Guardian, 1 July 2008, http://www.guardian.co.uk/education/2008/jul/01/highereducation.uk, accessed 1 August 2008.