What is the Difference Between Criteria and Standards




ENQA Occasional Papers

European Network for Quality Assurance in Higher Education • Helsinki

The Danish Evaluation Institute

Quality procedures in European Higher Education

An ENQA survey

ENQA Occasional Papers 5


The views expressed in this Occasional Paper are those of the authors. Publication does not imply either approval or endorsement by the European Network for Quality Assurance in Higher Education or any of its members.

© European Network for Quality Assurance in Higher Education, 2003, Helsinki
This publication may be photocopied or otherwise reproduced without any specific permission of the publisher.

ISBN 951-98680-8-9 (paperbound)
ISBN 951-98680-9-7 (pdf)
ISSN 1458-1051

Printed by Multiprint
Helsinki, Finland 2003


Foreword

In the Prague Communiqué of 19 May 2001, the European Ministers of Education called upon the universities, other higher education institutions, national agencies and ENQA to collaborate in establishing a common framework of reference, and to disseminate good practice. This mandate has been taken up by ENQA as a challenge to work even more actively in the process towards ensuring a credible European quality assurance environment.

A major focus in this process is the extent to which national external quality assurance procedures may meet the Bologna requirements for European compatibility and transparency.

This focus is reflected in ENQA's decision to initiate this major survey, which in its first phase was also included in discussions with the European University Association (EUA) and the National Unions of Students in Europe (ESIB). The main purpose of the survey is to identify shared protocols of quality assurance among European countries. Accordingly, each European agency has been asked to fill in a questionnaire detailing the evaluation practices in place in the agency.

The survey is thus able to determine which evaluation models are used in various countries and to analyse basic similarities and dissimilarities. The results of the survey demonstrate that European quality assurance has extended both in scope and in type of evaluation methods since the late 1990s, and that especially the concepts of accreditation and benchmarking are rapidly gaining new ground. In terms similar to my concluding remarks in a Status Report from 1998 on European Evaluation of Higher Education, I can state, however, that the status quo may be described as a glass that is half full, because the European evaluation procedures in place all build on the same methodological principles. Or the glass may be described as half empty, because the comparative analysis of the survey demonstrates many differences in the application of the methods to specific national and institutional contexts.

So the results of this survey stress ENQA's growing significance as a framework for sharing and developing European best practices in external quality assurance, and the need for even closer cooperation both with ENQA member agencies and governments and with our partners: the EU Commission, the universities and the students.

I hope therefore that the reader will find the survey and the new information it contains useful and inspirational.

Christian Thune
Chairman
ENQA Steering Group


Contents

1 Introduction _____ 5
1.1 Used method _____ 5
1.2 Project organisation _____ 6

2 Executive summary _____ 7
2.1 The quality assurance agencies _____ 7
2.2 Types of evaluation in European quality assurance _____ 8
2.3 The four-stage model _____ 8
2.4 Criteria and standards _____ 9

3 European quality assurance agencies _____ 10
3.1 Quality assurance agencies in the future concept of the European Higher Education _____ 10
3.2 Functions of the European quality assurance agencies _____ 12
3.3 Objectives of evaluations _____ 13
3.4 Organisation and funding _____ 14
3.4.1 Composition of the board / council _____ 14
3.4.2 Funding _____ 15
3.5 Summary _____ 16

4 Types of evaluation in European quality assurance _____ 17
4.1 The evaluation landscape _____ 17
4.2 Evaluation _____ 18
4.3 Accreditation _____ 19
4.4 Audit _____ 20
4.5 Benchmarking _____ 20
4.6 Variety of evaluation types in European quality assurance _____ 21
4.7 Summary _____ 22

5 The Four-Stage Model _____ 23
5.1 Composition and appointment of the expert panel _____ 24
5.2 Functions and responsibilities of the expert panel _____ 26
5.3 Self-evaluation _____ 28
5.4 Statistical data _____ 29
5.5 Supplementary surveys _____ 29
5.6 Site visits _____ 30
5.7 Report and follow-up _____ 31
5.8 Summary _____ 33

6 Criteria and standards _____ 34
6.1 Use of criteria and standards in European quality assurance _____ 34
6.2 Use of criteria and standards in accreditation _____ 35
6.3 Use of criteria and standards in other types of evaluation _____ 36
6.4 Summary _____ 37

Appendix A: Participating agencies and the use of different evaluation types _____ 38
Appendix B: Definitions _____ 41


1 Introduction

Since 1999, the European concept of the quality of higher education has been strongly influenced by the follow-up process of the Bologna Declaration, in which the EU Ministers of Education called for more visibility, transparency and comparability of quality in higher education. Two years after the Bologna Declaration and three years after the Sorbonne Declaration, the European Ministers in charge of higher education, representing 32 signatories, met in Prague in order to review the progress so far and to set directions and priorities for the coming years. The Ministers reaffirmed their commitment to the objective of establishing the European Higher Education Area by 2010.

The Prague Communiqué of 2001 challenged three organisations – the European University Association (EUA), the National Unions of Students in Europe (ESIB) and the European Network for Quality Assurance in Higher Education (ENQA) – together with the European Commission, to collaborate in establishing a common framework of reference and to disseminate best practices. The quality culture and the implementation of quality assurance need to be strengthened among the members, and the definition of quality indicators is an important element in that process. Based on this mandate, ENQA took the initiative of inviting the leadership of the EUA and the ESIB to a first meeting in June 2001 in order to discuss mutual interests and possible grounds for cooperation. Accordingly, a plan for joint projects was discussed, and this report is the first concrete example of this initiative. It is a survey that, for the first time, summarises in detail the various evaluation methods used in Europe.

1.1 Used method

The aim of the project is to document and analyse the methodological state of the art in general terms in all ENQA member countries and associated member countries, with the emphasis on the types of evaluation used.

For this purpose, the Danish Evaluation Institute conducted a questionnaire survey in 2002. A substantial background document has been Evaluation of European Higher Education – A Status Report of 1998, prepared by EVA's predecessor, the Danish Centre for Quality Assurance and Evaluation, for the European Commission, DG XXII. Other major sources of inspiration are the European Pilot Project for Evaluating Quality in Higher Education from 1995, and especially the Council Recommendation of 1998, which referred very explicitly to the European pilot projects:

“This recommendation envisages the introduction of quality assurance methods in higher education and the promotion of European cooperation in this field. It recommends to the Member States to establish transparent quality assessment and quality assurance systems, which should be based on a number of common principles. These principles have been established in earlier European pilot projects in this field and relate mainly to the autonomy of the institutions responsible for quality assurance, the respect of the autonomy of the higher education institutions, the various stages in the assessment procedure and the publication of quality assessment reports. Also, Member States are recommended to take follow-up measures to enable higher education institutions to implement their plans for quality improvement, which may be the result of a quality assessment procedure.”1

1 The Council Recommendation (98/561/EC) and the Status Report of 1998 may both be found on the ENQA website at: www.enqa.net.


Furthermore, Malcolm Frazer's Report on the Modalities of External Evaluation of Higher Education in Europe: 1995–1997 from 1997 and the ENQA report Quality Assurance in the Nordic Higher Education – on Accreditation-like-practises from 2001 have been sources of inspiration.

The questionnaire was circulated among the members of the ENQA Steering Group for consideration, and the resulting comments and suggestions were incorporated in the questionnaire.

Member agencies and national agencies of the associated countries were asked to fill in the questionnaire to provide information for the report. Thirty-four quality assurance agencies in 23 countries completed and returned the questionnaire. A complete list of participants is shown in Figure 1 on page 11. Subsequently, telephone interviews were conducted with the agencies for validation and clarification. The preliminary results of the survey were presented at the ENQA General Assembly in Copenhagen in May 2002. The final results in the present report, presented as a comparative overview of the state of the art and current trends in European quality assurance, will be followed by a database on the Internet, including country-specific reports, at the ENQA website.

In the processing of the data provided by the questionnaires, certain quantification has been helpful. Nevertheless, the material is too limited to really justify statistical analysis, and the subsequent interviews served to clarify the data. An overview of the core data is appended as Appendix A.

Originally, the survey was supposed to be followed by an in-depth study of different quality assurance approaches in Europe. At present, however, the value of such a study is being given further consideration, and the results of the survey, as presented in this report, are to be read independently.

1.2 Project organisation

Christian Thune, Executive Director of the Danish Evaluation Institute and Chairman of ENQA, was in charge of the project. A project team consisting of Evaluation Officers Tine Holm and Rikke Sørup and Evaluation Clerk Mads Biering-Sørensen conducted the survey and drafted the report. Tine Holm acted as co-ordinator of the project.

The ENQA Secretariat will be responsible for the production of a database and its publication on the ENQA website. The national agencies will be invited to take on the responsibility for updating their country-specific descriptions.


2 Executive summary

The Council Recommendation of 24 September 1998 on European Cooperation in Quality Assurance in Higher Education suggests that member states establish quality assurance systems for higher education. The systems should be based on certain characteristics identified as common to quality assurance systems, including: the creation of an autonomous body for quality assurance, targeted utilisation of internal and/or external aspects of quality assurance, the involvement of various stakeholders, and the publication of results.

The Council Recommendation proceeds to identify these elements in the context of a process involving independent quality assurance organisations, an internal self-examination component, an external component based on appraisal and visits by external experts, and the publication of a report. This is in fact the so-called four-stage model, already introduced in 1994–95 as the methodological framework of the European Pilot Projects, and later identified and analysed in its various national interpretations in the Status Report of 1998.

In 2001, ENQA, in cooperation with the European Commission, decided to re-examine the state of the art of European quality assurance four years after the Recommendation was issued and the Status Report published.

The aim of the resulting project is to describe the methodological state of the art in general terms in all ENQA member countries and associated member countries. The project focuses on the level, scope and methods of evaluation used. The method employed is a questionnaire filled in by 34 quality assurance agencies in 23 countries. Fourteen of these agencies cover both the university and the non-university sectors, another 14 cover only higher education at universities, and the remaining six cover only non-university higher education.

2.1 The quality assurance agencies

The results of the survey demonstrate that since 1998, European quality assurance has extended in scope and in terms of the emergence of new European agencies. In most European countries, autonomous quality assurance agencies have been established at national or regional level. The phenomenon is most common in the university sector (28 agencies in this survey), but the non-university sector is also being embraced by quality assurance (20 agencies in this survey). Some agencies cover both sectors; some cover only one or the other. This difference in organisation typically reflects the structures of the national higher education systems.

The survey shows that the quality assurance agencies still first and foremost perform quality assurance and/or enhancement in the traditional sense, as documented in the pilot projects from 1995, but their tasks have expanded. The vast majority of the participating agencies answer that quality assurance is both the overall main function of the agency and the predominant objective of the evaluation activities performed.

But the survey also points to a tendency for the agencies to an increasing degree to provide expert opinions and advice to both government and higher education institutions, and to investigate and even decide on certain legal matters pertaining to the HE institutions. This is reflected in the fact that 4/5 of the agencies mention 'disseminating knowledge and information' as a function of the agency, and half the agencies mention 'accreditation' as a function of the agency.

The appearance of accreditation agencies, and hence the performance of accreditation activities, goes hand in hand with an increased focus on accountability as an objective of the performed activities. 3/4 of the participating agencies mention it as an objective of the activities, and the same is the case with transparency. Comparability – nationally as well as internationally – is also a highly emphasised objective.

Most agencies have a board or a council, and all of these include some kind of academic board members. In 2/3 of the cases the higher education institutions are represented among these academic board members. In half the cases labour market representatives are on the board, in 1/3 of the cases students are on the board, and in 2/5 of the cases government is represented. The main source of funding of the evaluation activities is the government, but the higher education institutions are also mentioned in one way or another as a source of funding in 1/3 of the cases.

The board/council tends to be more multifaceted in the EU/EFTA countries than in the associated countries, but the funding situation does not seem to differ much according to geography.

2.2 Types of evaluation in European quality assurance

The results of the survey show that European quality assurance can be identified as based on eight main types of evaluation. The survey also demonstrates that most agencies carry out several types of evaluation. The principal types2 of evaluation used in European quality assurance are 'accreditation of programmes' and 'evaluation of programmes'. The majority of the participating agencies use both on a regular basis.

In general, programmes are the most frequently chosen focus of the evaluation activities. This is especially pronounced in the field of non-university education, whereas institutions are coming more into focus in university education. This is probably due to the very strong professional emphasis of the programmes in the non-university field.

The most preferred method is still the traditional evaluation, which is used in combination with different foci, regularly or occasionally, in 49 cases. In contrast to earlier, the tendency is that a single agency very often uses evaluation on different levels, or in other words combines evaluation as a method with different foci. Nevertheless, accreditation as a method comes close, with 31 cases of regular or occasional use. Accreditation is most used in the associated countries and in the Dutch- and German-speaking countries. There do, however, seem to be very large variations in the procedures of accreditation, and the method could be a theme for further investigation.

An exception to the general statement above is 'institutional audit'. Whereas audit is hardly used at subject and programme level, or in combination with 'theme' as a focus, the combination of audit and institution is the third most popular type of evaluation used. It is primarily used in the English-speaking countries.

Finally, the results of the survey show that several agencies experiment with benchmarking – often combined or integrated with other methods – but as an independent method it has not really gained force.

2.3 The four-stage model

The variety of evaluation types used has also led to a differentiation in the methodological elements employed, compared with 1998. For instance, there are examples of accreditation procedures where self-evaluation does not take place, where external experts are not used, and where reports are not published. In general, however, the four stages mentioned in the pilot projects and reflected in the Council Recommendation are still common features in European quality assurance.

All agencies use external experts. In most cases these are experts representing the field, and very often international experts are included in the expert panel. The latter are often from neighbouring countries or countries sharing the same language. In a few cases students are included in the expert panel. In general, the expert panels seem more multifaceted in the EU/EFTA countries than in the associated countries.

2 The term 'type of evaluation' comprises a combination of the focus of an evaluation and the method used.

The experts are typically appointed by the quality assurance agency, but in 1/3 of all cases higher education institutions have taken part in the nomination of the experts. The experts have varying functions and responsibilities. Their core function, however, seems to be the site visits, and in half the cases they also write the reports without the assistance of the agency. In another third of all cases they draft the reports in co-operation with agency staff. The agency seems more involved in carrying out the different functions of an evaluation process in the EU/EFTA countries than in the associated countries.

Self-evaluation is included in 94% of the evaluations, but only in 68% of the accreditation processes. Management and teaching staff are usually part of the self-evaluation group, whereas graduates rarely participate. The participation of administrative staff and students varies considerably, and for the latter there seems to be a connection to the method used: students are usually represented in connection with evaluations, but rarely in connection with accreditation. As documentary evidence, the self-evaluations are in almost all cases supplemented with statistical data, and in about half the cases also with some kind of supplementary surveys.

With the exception of two cases, site visits are part of all evaluation processes in Europe. The average length of the site visits is two days, but site visits in connection with audits typically last longer. The results of the survey demonstrate broad agreement on the elements constituting site visits: almost every participating agency works with interviews, tours of the facilities, final meetings with the management, and the examination of documentary evidence. The most controversial element of the site visits seems to be classroom observations, which are used in 25% of the cases.

Reports are published in almost all cases of evaluation, but sometimes omitted in connection with accreditation. The reports typically contain conclusions and recommendations, and very often they also contain analysis, while empirical documentation is included in only 1/3 of all cases. It is common practice to consult the evaluated institutions before the reports are published, whereas other agents are rarely consulted. In 3/4 of all cases the evaluated institutions are also responsible for the follow-up of the recommendations, while the quality assurance agency and the government are responsible in a little less than half of the cases. All respondents agree, however, that follow-up takes place in one way or another.

2.4 Criteria and standards

In addition to the four characteristics mentioned above, the results of the survey demonstrate that a fifth characteristic is emerging as a common feature, namely the use of criteria and standards.

Whereas in 1998 the terms of reference of the evaluation procedures were typically legal regulations and the stated goals of the evaluated institutions, today almost all agencies apply some kind of criteria. This is, of course, true for accreditation procedures, where threshold criteria or minimum standards are used in order to pass judgement, but it applies to other evaluation procedures as well, for instance when 'good practice' criteria are used. In several countries, however, the criteria used are not explicitly formulated.

The questionnaires and the attached material from the European quality assurance agencies point to a number of features that need to be investigated further when discussing the use of criteria and standards: What is the difference between criteria and standards? When does an agency work with threshold criteria, and when does it work with best practice criteria? Is it important whether the criteria are explicitly formulated or not? Who formulates the criteria? And to what extent do agencies work with a pre-formulated set of criteria suitable for every evaluation process?

There is no doubt that standards and criteria are suitable tools in connection with transparency – nationally and internationally – but the issue is of course the extent to which they promote continuous quality improvement at the institutions.


3 European quality assurance agencies

The Council Recommendation of 24 September 1998 on European Co-operation in Quality Assurance in Higher Education proposes that member states establish quality assurance systems for higher education. The systems should be based on certain characteristics identified as common to quality assurance systems. The characteristics include an autonomous body for quality assurance, targeted utilisation of internal and external aspects of quality assurance, involvement of various stakeholders and publication of results.

The characteristics of the quality assurance systems mentioned in the Council Recommendation are equal to the so-called four-stage model of independent agencies, self-evaluations, visits by experts and a published report, which were all identified as common features of European quality assurance in the European Pilot Project for Evaluating Quality in Higher Education of 1995 (from now on referred to as the Pilot Projects) and in Evaluation of European Higher Education: A Status Report of 1998 (from now on referred to as the Status Report). Similarly, Malcolm Frazer's Report on the Modalities of External Evaluation of Higher Education in Europe: 1995–1997 of 1997 is organised around these characteristics. Since the Council Recommendation of 1998, the European quality assurance context has been greatly influenced by the Bologna process, and it is therefore highly relevant for the ENQA network to examine the current state of affairs of the methodologies and procedures applied today by European quality assurance agencies.

This chapter presents the increase in the number of bodies of European quality assurance in European higher education. It examines which areas of higher education the agencies cover and the functions of the agencies, and elaborates on the issue of autonomy3. The following chapters examine the methods and quality assurance procedures of European quality assurance today.

3.1 Quality assurance agencies in the future concept of the European Higher Education

In order to investigate the current status of European quality assurance, a questionnaire was sent to quality assurance agencies in all EU member states, associated countries and EFTA countries. In countries lacking formal, established quality assurance agencies, the questionnaire was sent to the authority in charge of quality assurance. The survey report is based on 36 responses from 34 European quality assurance agencies in 23 European countries4.

As shown in Figure 1, the Council Recommendation to establish national quality assurance systems has been followed by almost all member states. A picture of a well-established European system of quality assurance in higher education begins to emerge. Since the European Pilot Project, almost all EU member states and associated countries have established an evaluation agency responsible for promoting quality assurance in the higher education sector.

3 The report does not indicate to what extent the individual agencies meet the Council Recommendation. The competence and responsibility for that rests solely with the ENQA General Assembly. The report does, however, elaborate on the extent to which the common features identified in the 1998 report can still be found in European quality assurance in 2001.
4 The number of questionnaires exceeded the number of agencies. An agency may cover different sectors (e.g. universities and non-universities). If the methods used varied according to the sectors covered, the agency was asked to fill in a questionnaire for each sector.


Figure 1: European Quality Assurance Agencies in Higher Education and their Scope5

Name of Agency | Country | Scope

European University Association (EUA) | | University
Fachhochschulrat (FH-Council) | Austria | Non-university
Österreichischer Akkreditierungsrat (Austrian Accreditation Council, AAC) | Austria | University
Vlaamse Interuniversitaire Raad | Belgium | University
Vlaamse Hogescholenraad (VLHORA) | Belgium | Non-university
Conseil des Recteurs (CRef) | Belgium | University
National Evaluation and Accreditation Agency at the Council of Ministers (NEAA) | Bulgaria | Both
Council of Educational Evaluation-Accreditation | Cyprus | Non-university
Accreditation Commission of the Czech Republic | Czech Republic | Both
The Danish Evaluation Institute (EVA), non-university | Denmark | Non-university
The Danish Evaluation Institute (EVA), university | Denmark | University
Estonian Higher Education Accreditation Center | Estonia | Both
Finnish Higher Education Evaluation Council | Finland | Both
Comité National d'Évaluation | France | University
Akkreditierungsrat | Germany | Both
Zentrale Evaluations- und Akkreditierungsagentur Hannover (ZEvA) | Germany, Niedersachsen | University
Evaluation Agency of Baden-Württemberg | Germany | Both
Hungarian Accreditation Committee | Hungary | Both
Ministry of Education, Science and Culture, Division of Evaluation and Supervision | Iceland | University
Higher Education and Training Awards Council (HETAC) | Ireland | Non-university
Higher Education Authority | Ireland | University
National Qualification Authority of Ireland | Ireland | Non-university
Comitato Nazionale per la Valutazione del Sistema Universitario (CNSVU) | Italy | University
Higher Education Quality Evaluation Centre | Latvia | Both
Lithuanian Centre for Quality Assessment in Higher Education | Lithuania | Both
Inspectorate of Higher Education | The Netherlands | Both
VSNU Department of Quality Assurance | The Netherlands | University
Netherlands Association of Universities of Professional Education (HBO-raad) | The Netherlands | Non-university
Network Norway Council | Norway | Both
State Commission for Accreditation | Poland | Both
National Institute for Accreditation of Teacher Education (INAFOP), university | Portugal | University
National Institute for Accreditation of Teacher Education (INAFOP), non-university | Portugal | Non-university
Agència per a la Qualitat del Sistema Universitari a Catalunya (AQU) | Spain | University
National Agency for Higher Education (Högskoleverket) | Sweden | University
The Quality Assurance Agency for Higher Education (QAA) | United Kingdom | University
National Council for Academic Assessment and Accreditation | Romania | University

5 Names of the agencies in Figure 1, and in the report in general, refer to the questionnaires. It may therefore vary whether the agencies are named by their national names or by an English name.


The stated scope of any single agency in the column above should be seen in the context of the national educational system, as reflecting the national distinctions between university and non-university education. With this reservation in mind, the diagram above shows that 14 of the quality assurance agencies cover both the university and the non-university sectors, while 14 agencies cover only university higher education, and lastly 6 agencies cover only non-university higher education6.

In the questionnaire, agencies were asked to state whether they evaluate university or non-university education, or both. It is a very difficult task to find a common definition of university and non-university education. Some countries have one-tier systems, whereas others have two-tier systems wherein the non-university and the university sectors are clearly separate, e.g. Denmark and the Netherlands. Furthermore, some of the associated countries make a distinction between public universities – which are considered ‘real’ universities – and private universities.

The survey can, however, give an idea of the scope the agencies cover. Moreover, it explains why some countries have only one national quality assurance agency for higher education, while some have more than one. For example, the Netherlands has a two-tier higher education system, with a quality assurance agency for non-university education, one for university education, and a meta-accreditation body. By contrast, the UK is an example of a country where the higher education sector is structured as a one-tier system. This is reflected in the UK quality assurance model, where one quality assurance agency covers all higher education.

However, other national characteristics may influence the way a country organizes its quality assurance. A good example is the German organisation of quality assurance. There is one main accreditation council, which formulates the overall evaluation procedures. The practical evaluation activities, however, are mainly performed by regional evaluation agencies. Finally, Denmark has one quality assurance agency for all educational levels from primary school to university education.

National traditions, changing trends in national policy and in national policy structures can have an impact on the choice of system. However, this is an issue that requires an in-depth study of each national educational system and its politics.

3.2 Functions of the European quality assurance agencies

With the Council Recommendation of 1998, member states were encouraged to establish quality assurance systems with the following aims:

• To safeguard the quality of higher education within the economic, social and cultural contexts of their countries, while taking into account the European dimension and the rapidly changing world.

• To encourage and help higher education institutions to use appropriate measures, particularly quality assurance, as a means of improving the quality of teaching and learning, and also training in research, which is another important function.

• To stimulate a mutual exchange of information on quality and quality assurance at Community and global levels, and to encourage co-operation between higher education institutions in this area.

One of the objectives of this project is to examine how quality assurance agencies in Europe fulfil the above-mentioned aims. In this connection, the agencies were asked to state their functions.

The survey shows that three main functions of European quality assurance agencies can be identified:

1) Quality improvement, quality assurance in a traditional sense

2) Disseminating knowledge and information
3) Accreditation

6 The differentiation used in the questionnaire between university sector and non-university sector is somewhat ambiguous, as the basic terminology used to designate different sectors in the higher education system is not identical in all the participating countries. In respect of the higher education structure of each member state, the scope is therefore defined by the responding agencies themselves.


The first function mentioned by the European quality assurance agencies is quality improvement. It can be defined as a function whereby higher education institutions are encouraged and helped to improve the quality of their education through evaluation. This is the most common function of European quality assurance agencies, involving 86% of the agencies.

Another important role that the European quality assurance agencies fulfil is to function as knowledge and information centres on quality assurance in higher education. 78% of the agencies collect and disseminate information on quality assurance in higher education.

Not only from a higher education institution viewpoint, but also from a student perspective, it is positive to observe that the European quality assurance agencies take their role in quality enhancement and in the collection and dissemination of information on quality assurance very seriously. Higher education characterised by quality and transparency is an essential condition for the good employment prospects and international competitiveness of individuals.

As the boundaries between quality assurance and the recognition of academic or professional qualifications are blurring, transparent quality of higher education is becoming essential. This development is also evident in the recent agreement on further co-operation between the ENIC/NARIC networks and ENQA.

The last of the frequently mentioned functions of the European quality assurance agencies is accreditation. It can be seen as both a method and a function of an agency, as it includes approval decisions. In this context, it should be seen as the latter.7 An agency can therefore have both the function of quality assurance and that of approval of higher education. Half of the agencies mention accreditation as one of their functions.

In 17% of the cases some of the agencies have other functions than quality assurance or enhancement, collecting and disseminating information, or accreditation. In these cases the agencies have the authority to recognize and license providers of higher education. In addition, three agencies have the qualification framework8 as a task, and a single agency, the Higher Education and Training Awards Council (Ireland), has the recognition of national diplomas as a function.

It can be concluded that the European quality assurance agencies still perform the main functions mentioned in the 1998 Recommendation, but the tasks have expanded and the agencies also perform a wide range of additional functions, including giving expert opinions on proposals for a qualification framework and on professorial appointments, advising government on applications for authority to award degrees, assisting universities in reviewing their procedures for quality assurance and quality improvement, investigating and deciding on certain legal matters pertaining to HE institutions, and lastly, guaranteeing accountability in higher education through benchmarking9.

3.3 Objectives of evaluations

As seen in the previous section, the function of a European quality assurance agency is predominantly quality assurance in the traditional sense. One can therefore presume that the objectives of evaluations are closely linked to the purpose and function the agencies have within higher education in their respective countries.

Also when dealing with the objectives of the performed evaluation procedures, quality improvement is the predominant objective. It is mentioned as an objective in 92% of all cases, which emphasizes that agencies see their role as safeguarding the quality of higher education in a rapidly changing world, and as encouraging and assisting higher education institutions to improve the quality of their education.

7 In section 4 ‘accreditation’ will be dealt with as a method.

8 Fachhochschule Council (Austria), Akkreditierungsrat (Germany) and National Qualification Authority of Ireland.

9 The answers under the ‘other’ category were: “Giving opinion on drafts of qualification framework and giving opinion on professorial appointments” (Hungarian Accreditation Committee – Hungary); “Assisting universities reviewing procedures in place for Q.A./Q.I.” (Higher Education Authority – Ireland); “Evaluation of institutions, programmes, research and projects” (Comitato Nazionale per la Valutazione del Sistema Universitario – Italy); “Accountability and benchmarking” (VSNU Department of Quality Assurance – the Netherlands); “Promotion of the necessary studies to its performance, global analyses of the accreditation and certification processes, as well as the debate and exchange of ideas and practices in the field of teacher education” (National Institute for Accreditation of Teacher Education – Portugal); “Information on HE to students, investigation and decision in certain legal matters pertaining to HE institutions, research and statistics and accreditation of Masters degree in certain cases” (National Agency for Higher Education – Högskoleverket, Sweden); and, lastly, “Advising government on applications for degree awarding powers” (The Quality Assurance Agency for Higher Education – United Kingdom).

Accountability is mentioned as an objective in 75% of all cases, and in accreditation procedures it is even an objective in 86% of all cases. This does not come as a major surprise, as accreditation is often directly linked to a decision on the renewal of an operating licence depending on the outcome of the accreditation.

However, accountability is also an objective of the other methods, including evaluation. Malcolm Frazer argued in his survey of 1997 that for institutions, to be accountable10 was an absolute condition for being autonomous. The institutions must show accountability towards stakeholders: the governments who finance them, the students who choose them and use their services, and the labour market, which wants a labour force of high quality.

Like accountability, transparency is also mentioned in 75% of all cases, and national and international comparability are mentioned as objectives in 59% and 41% of all cases respectively. Ranking, however, is only mentioned as an objective in two cases (3%). In a European context, it is interesting that transparency and national and international comparability play important roles in most of the evaluation types, especially in the light of the fact that some of the recent European multinational projects, i.e. the Tuning project11 and TEEP12, emphasise these aspects in order to encourage and facilitate student mobility.

3.4 Organisation and funding

In the 1998 Council Recommendation, the question of autonomy and independence from government and institutions is connected to the choice of procedures and methods. Another indicator of independence could be organisation and funding. This survey shows that almost all countries have an agency co-ordinating quality assurance, and that the agency is by nature an independent organisation with a steering body. However, institutions and government may be represented on the board of the quality assurance agency, or contribute to the funding of the agency or evaluations.

Furthermore, other educational stakeholders, such as students, teaching staff, professional organisations, and employers, may also be represented at this level. This section therefore examines who is represented on the boards of the quality assurance agencies, and who finances the agencies and their activities.

3.4.1 Composition of the board / council

The survey shows that institutions, industry and labour market representatives, and government are the most common board members in European quality assurance. It is interesting to observe that students are also in several cases represented on boards.

Institutions are represented on the board or council in 62% of the cases. All the agencies with a board or council have some kind of academic board members, either directly representing the institutions of higher education, or serving in their personal capacity.13 It seems to be fairly uncommon that the board members come only from the higher education system: only four agencies have boards comprised solely of representatives of higher education institutions, associations of universities, or a principals’ conference.14

10 In Malcolm Frazer’s survey, accountability is stated as the main purpose of external evaluation. Frazer does not, however, distinguish between the purposes of the different methods as this study does, so the results are difficult to compare.
11 For more information see http://www.relint.deusto.es/TUNINGProject/index.htm
12 For more information see http://www.ensp.fr/ISA_2003/anglais/Acc_files/ENQA_TEEPmanual_EN.pdf

13 A single exception is perhaps the State Commission for Accreditation (Poland), which has stated that, apart from government representatives, the agency itself elects a number of members to its board. What background these members have is not stated.
14 European University Association (EUA), Vlaamse Interuniversitaire Raad (Belgium), Netherlands Association of Universities of Professional Education (The Netherlands) and National Council for Academic Assessment and Accreditation (Romania).


Industry and labour market representatives are board members in 44% of the cases. In those cases, there are also always representatives of the higher education institutions, of principals’ conferences, of associations of universities, or other personal representation by academics. Government is represented in 39% of the cases, and students in a third of all cases. The vast majority (92%) of the cases of student representation on the board cover EU or EFTA countries; the one exception to the rule is the Hungarian Accreditation Committee. The same applies to the inclusion of representatives of industry or the labour market: 94% of the cases cover the EU countries (with EFTA), the only exception being the Estonian Higher Education Accreditation Centre.

3.4.2 Funding

The way funding is organised depends largely on different historical backgrounds and specific national education systems. In almost all the participating countries the initiative to set up evaluation procedures has either been taken by, or promoted by, the government. The high priority status that governments give quality assurance is also reflected in the funding of the agencies. Governments have obviously perceived external quality assurance as a necessity, or at least as a relevant means to enhance the quality of higher education.

The questionnaire suggested central and regional government, institutions of higher education, and associations of universities and rectors’ conferences as possible sources of funding for the agencies. Roughly speaking, it can be said that there are two main funders of evaluations: governments and institutions of higher education. This is hardly surprising, as they have politically the greatest need to show accountability. On the one hand, the government needs to ensure that educational financing leads to high-quality education. On the other hand, institutions need to prove to national stakeholders such as government, students, the labour market and other institutions that they provide high-quality education, and also to show the international community that they are competitive. Furthermore, as Malcolm Frazer argued in his survey of 1997, to be accountable is an absolute condition for being autonomous.

All the same, government is the main source of funding: 67% of the respondents stated that central government, and 8% that regional government, funds the agency. Government is followed by institutions of higher education, which are the second most common source of funding at 28% of all cases, while associations of universities and principals’ conferences fund only one agency (8%).

The agencies not funded by government are almost always funded in one way or another by the evaluated institutions. Such agencies exist in Belgium (three agencies), France, Latvia, Romania, and the VSNU in the Netherlands. VSNU’s Department for Quality Assurance was funded by an association of universities. The three agencies in Belgium are funded by an association of universities and a principals’ conference (Vlaamse Interuniversitaire Raad), by higher education institutions (Vlaamse Hogescholenraad, VLHORA), and lastly, by the association of universities alone (Conseil des Recteurs, CRef). The Higher Education Quality Evaluation Centre (Latvia) and the National Council for Academic Assessment and Accreditation (Romania) are both funded by institutions of higher education. Comité Nationale d’Évaluation (France) is funded by the National Assembly and could perhaps be included in the category funded by central or regional government.

In Germany, the responsibility for culture and education, including higher education policy, is delegated to the federal states (Länder). The agencies are therefore funded by the regional government (Evaluation Agency of Baden-Württemberg), by a combination of regional government and higher education institutions (Zentrale Evaluations- und Akkreditierungsagentur Hannover, ZEvA), and by donations (Akkreditierungsrat). The same is the case with the participating Spanish agency (Agéncia per la Qualitat del Sistema Universitári a Catalunya, AQU), which is also funded by the regional government.

Four of the participating agencies state that their funding comes from other than the proposed categories. Two of them have already been mentioned: France, where the National Assembly funds the Comité Nationale d’Évaluation, and the Akkreditierungsrat in Germany, whose funding is based on donations. The two others are the Higher Education and Training Awards Council (HETAC) in Ireland, and the Quality Assurance Agency for Higher Education in the United Kingdom, both being funded by central government, higher education institutions, students (HETAC), and the national higher education funding councils (QAA). The last-mentioned are the bodies that provide financial support to most institutions in England, Scotland and Wales. These funding councils have a statutory responsibility to ensure that public money is not wasted on unsatisfactory programmes.

Finally, the survey shows that there is little difference between the financing of the agency itself and that of its specific evaluation activities. The same two main financing sources emerge: government and institutions. In addition, in 20% of the cases the quality assurance agency itself financed the activity.

3.5 Summary

The results of the survey demonstrate that since 1998, European quality assurance has extended both in scope and in terms of the emergence of new European agencies. In most European countries autonomous quality assurance agencies have been established at national or regional level. The phenomenon is most common in the university sector (28 agencies in this survey), but the non-university sector is also being embraced by quality assurance (20 agencies in this survey). Some agencies cover both sectors; some agencies only cover one sector or the other. This difference in organisation typically finds its explanation in the structures of the national higher education systems.

The survey shows that the quality assurance agencies still first and foremost perform quality assurance and/or enhancement in the traditional sense demonstrated in the pilot projects from 1995, but the tasks have expanded. The vast majority of the participating agencies answer that quality assurance is both the overall main function of the agency and the predominant objective of the performed evaluation activities.

But the survey also points to a tendency for the agencies to an increasing degree to give expert opinions and advice to both government and the institutions, and to investigate and even decide on certain legal matters pertaining to the HE institutions. This is reflected in the fact that 4/5 of the agencies mention ‘disseminating knowledge and information’ as a function of the agency, and half the agencies mention ‘accreditation’ as a function of the agency.

The appearance of accreditation agencies, and hence the performance of accreditation activities, goes hand in hand with an increased focus on accountability as an objective of the performed activities. 3/4 of the participating agencies mention it as an objective of the activities, and the same is the case with transparency. Also comparability – nationally as well as internationally – is a highly emphasised objective.

Most agencies have a board or a council, and all of these have some kind of academic board members. In 2/3 of the cases the higher education institutions are represented among these academic board members. In half the cases labour market representatives are on the board, in 1/3 of the cases students are on the board, and in 2/5 of the cases government is represented. The main source of funding of the evaluation activities is the government, but the higher education institutions are also in some way or another mentioned as a source of funding in 1/3 of the cases.

There is a tendency for the board/council to be more multifaceted in the EU/EFTA countries than in the associated countries, but the funding situation does not seem to differ much according to geography.

4 Types of evaluation in European quality assurance

In the 1998 Status Report on the state of quality assurance in member states, it was already clear that there was a diversity of methods used in quality assurance at the national level in Europe. In the 1998 study, covering 18 countries, five main evaluation types were identified: subject evaluation, programme evaluation, institutional evaluation, audit and accreditation.

An overall aim of this project is therefore to investigate the current status of the types of evaluation used in European quality assurance – after Prague and before Berlin in the year 2003. In other words, to produce a broad review of the types of evaluation, and to describe the methodological developments in European quality assurance, in order to stimulate further the mutual exchange of information on quality and quality assurance at Community level. Furthermore, to examine whether there is still consensus on what constitutes good practice in European quality procedures, in the form of common elements, as identified in the 1998 Commission study and stated in the Annex of the Council Recommendation.

Hence, this section starts out with an overview of the types of evaluation in use, as provided by the present survey; afterwards, in subsections 4.2–4.5, the different types and methods are investigated a bit further.

4.1 The evaluation landscape

One of the major questions put to the agencies in the survey was therefore: ‘How often do you use the different types of evaluation?’, in order to get a picture of the entire range of various types of evaluation used by European quality assurance agencies. A type of evaluation is defined as a method – evaluation, accreditation, auditing or benchmarking – combined with one of the following categories of focus: subject, programme, institution or theme. The combination of the element-based method and focus resulted in 16 different types of evaluation15, as shown in Figure 216.

Figure 2: Types of evaluations

              Evaluation   Accreditation   Audit   Benchmarking
Subject            6             1           1          6
Programme         21            20           5          7
Institution       12            10          14          4
Theme             10             0           1          4

Quality assurance institutes were asked to tick the methods they use ‘regularly’, ‘occasionally’, ‘rarely’ or ‘not at the moment’.17 The figures in the table above show the number of agencies carrying out the listed types of evaluation regularly or occasionally.18
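The counting behind Figure 2 can be illustrated with a short sketch. The agency names and the idea of storing one (agency, method, focus, frequency) record per ticked box are invented for illustration, not the survey’s actual data format; the sketch simply counts, for each method–focus cell, the agencies that ticked ‘regularly’ or ‘occasionally’.

```python
from collections import Counter

# Hypothetical questionnaire records: one (agency, method, focus, frequency)
# tuple per box ticked in question 12. These rows are invented examples.
responses = [
    ("Agency A", "evaluation", "programme", "regularly"),
    ("Agency A", "audit", "institution", "occasionally"),
    ("Agency B", "evaluation", "programme", "regularly"),
    ("Agency B", "accreditation", "programme", "rarely"),
    ("Agency C", "benchmarking", "subject", "not at the moment"),
]

# Count agencies per (method, focus) cell, as in Figure 2:
# only 'regularly' and 'occasionally' answers are included.
cells = Counter(
    (method, focus)
    for _, method, focus, freq in responses
    if freq in ("regularly", "occasionally")
)

print(cells[("evaluation", "programme")])  # 2 agencies in this toy data
```

On the real data, the same tally over all 16 method–focus combinations would reproduce the table above.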

The very small numbers in some of the boxes may indicate that some combinations of method and focus are of a very analytical kind. When looking solely at the combinations including ‘regular use’, the entire range of combinations used can be reduced further, as ‘evaluation, accreditation, audit and benchmarking of a theme’ are used on a regular basis by fewer than two agencies. The same applies to institutional benchmarking, accreditation of a theme, and audit at subject level. Hence the results of the survey show that European quality assurance can be identified as resting on eight main types of evaluation that are used on a regular basis.

15 For definitions see Appendix B.
16 The agencies were also given the opportunity to add further types under the category ‘Other’. An example of this category is ‘research evaluation’.
17 Appendix A shows which types of evaluation the various agencies carry out.
18 In question 12 of the questionnaire we asked how often any of the 16 different types of evaluation was used. The agencies were allowed to tick several methods, and to tick whether a method was carried out regularly, occasionally, rarely or never. We thereby left it to the agency itself to decide how to interpret the frequency, and did not make it depend on the number of evaluations carried out, as that may differ between small and large agencies. The category ‘never’ was later changed to ‘not at the moment’, as the agencies stated in the subsequent telephone interviews that this category seemed too definitive to choose, as they may consider implementing the method in the future.

Figure 3 illustrates that accreditation and evaluation of programmes are the two types of evaluation used most regularly in European quality assurance, followed, in order of diminishing frequency, by institutional audit, institutional accreditation, institutional evaluation, subject evaluation, programme benchmarking and subject benchmarking.

4.2 Evaluation

‘Evaluation’ is often used as a general term for the procedure of quality assurance. However, this survey defines ‘evaluation’ as a method parallel to other methods, such as audit, etc., and uses the term ‘evaluation type’ as an umbrella definition. ‘Evaluation’ in this context is therefore combined with different focal points – subject, programme, institution, and theme – to define a type of evaluation.

• The evaluation of a subject19 focuses on the quality of one specific subject, typically in all the programmes in which this subject is taught.

• The evaluation of a programme focuses on the activities within a study programme, which in this context is defined as studies leading to a formal degree.

• The evaluation of an institution examines the quality of all activities within an institution, i.e. organisation, financial matters, management, facilities, teaching and research.

• The evaluation of a theme examines the quality or practice of a specific theme within education, e.g. ICT or student counselling.

In the member states that participated in the pilot project, programme and institutional evaluations were the basic ways of evaluating higher education. These kinds of evaluations are still widely used. According to Figure 3, ‘evaluation of programme’ is among the most frequently used evaluation types in European quality assurance, as 53% of the agencies do this type of evaluation on a regular basis. Institutional evaluation is less widespread, as only 22% of the agencies are using it regularly.

Figure 3: Frequency of the types of evaluation used on a regular basis
[Bar chart showing, for each of the eight main types – subject evaluation, programme evaluation, institutional evaluation, programme accreditation, institutional accreditation, institutional audit, subject benchmarking and programme benchmarking – the share of agencies (0–100%) using it regularly, occasionally, rarely or not at the moment.]

19 A subject is for example the subject ‘chemistry’ within the study programme of medicine.

Evaluation of programmes20 is a type of method mainly used by the Nordic, Dutch or English-speaking agencies. Comparing the use of the methods in the university and non-university sectors, there is more focus on programmes than on institutions in the non-university sector. This is probably due to the very strong vocational or professional emphasis of the programmes in the non-university field. The non-university sector also has a tradition of private, professional accreditation of programmes, e.g. in engineering.

Evaluation of institutions examines the quality of all activities within an institution, i.e. organisation, financial matters, management, facilities, teaching and research. According to the European University Association and the French CNE, for example, 55% of the agencies are not using this method at present.

Subject evaluation focuses on the quality of a specific subject, typically in all the programmes in which this subject is taught. This type of evaluation is used regularly by 14% and occasionally by 3% of the agencies respectively21. 14% of the respondents stated that they rarely use this type of evaluation, and 69% that they do not use it at the moment. However, due to the ambiguous nature of the term ‘subject’, there is some degree of error in these statistics.

In the 1998 Status Report it was concluded that, in an historical perspective, the first national evaluation procedures had a single focus, whereas the agencies in the following cycles expanded the focus of their evaluation activities. This development can also be observed in this study. In the evaluation of university education, the emphasis has moved from programme-oriented in the 1990s to a broader focus on the subject, programme, and institutional levels. Several agencies now combine several focal points in their evaluations; for example, an agency that traditionally used to evaluate programmes may now also evaluate institutions.

4.3 Accreditation

Accreditation is another widely used method in European quality assurance. It is especially common in the associated countries, where this method has been a traditional way of assuring the quality of higher education. Moreover, countries such as Germany, Norway, and the Netherlands have, since the completion of the survey, decided that this should be the main type of quality assurance of higher education.

This study shows that accreditation of programmes is used on a regular basis by 56% of the respondents22. It is an evaluation type primarily used by the German-speaking agencies, by agencies in the associated countries and by the Dutch agencies, but also by Nordic and southern agencies. Accreditation of institutions is done on a regular basis by 22% of the agencies, e.g. by German and Austrian agencies and by some in the associated countries, although it is not used at present by 67%.

Hence, the term accreditation is ambiguous. When looking at the accreditation process, accreditation is usually mixed with evaluation. As is elaborated further in Chapter 5, evaluation and accreditation include the same methodological element, the so-called four-stage model. It is, however, important to note that accreditation is not the same as evaluation. In the above-mentioned 2001 ENQA report on accreditation-like practices, accreditation was defined as having the following characteristics:

20 In this context the distinction between ‘programme’ and ‘subject’ was ambiguous. In the survey we defined ‘programme’ as ‘studies leading to a degree’, whereas ‘subject’ would be ‘an element of the programme’, e.g. the subject chemistry within the programme of medicine. However, the terminology ‘subject’ seems to be too narrow in the meaning of ‘part of a programme’. In countries such as Sweden and Germany the entire field/subject is under scrutiny, as one particular programme within a field is only a small part of many options.
21 Three % is equal to one case.

22 N=36 cases, 19 out of 36 doing it on a regular basis, 3 (8%)occasionally, 1 (3%) rarely and 13 (36%) not at the moment.

• Accreditation recognizes (or not) that a higher education course, programme or institution meets a certain standard, which may be either a minimum standard or a standard of excellence.

• Accreditation therefore always involves a benchmarking assessment.

• Accreditation findings are based on quality criteria, never on political considerations.

• Accreditation findings include a binary element, being always either yes or no.

Furthermore, the 2001 Report states that whereas accreditation always refers to a standard, evaluations may or may not do so, or may do so only to some extent. In the survey, a distinction is made between the accreditation process that precedes the launching of a new programme (ex ante) and the accreditation control applied to established ones (ex post). The accreditation procedures in the associated countries involved both accreditation of existing programmes and of pending or planned programmes. Therefore some of the associated countries make a clear distinction between ex-ante and ex-post accreditation. The accreditation process is seen as a dual process, whereby one body of the agency evaluates and makes an assessment according to pre-defined standards, and another body (e.g. an accreditation commission) takes the final decision on whether to approve the programme or not. Many agencies in the associated countries also accredit institutions, where an institution must be approved before it can establish or offer new programmes.

In Germany, newly introduced programmes are accredited. The practice was introduced as a means to control the quality of new degrees, while allowing the institutions flexibility in creating new programmes. The existing national framework was considered to be an obstacle to the development of new and innovative programmes. The Germans make a clear distinction between the functions of evaluation and accreditation, as they serve different purposes.

Accreditation can be directed at other levels than programme and institution – agencies themselves can also be the objects of the accreditation procedure. One of the main tasks of the German Akkreditierungsrat is to accredit other agencies. However, it is also allowed to undertake accreditation of programmes at the request of the Länder. A similar development can be observed in other parts of Europe. Very recently, a National Accreditation Organisation (NAO) has been established in the Netherlands. Its mandate is to verify and validate external assessments performed by QA agencies.

4.4 Audit

An audit can be defined as a method for evaluating the strengths and weaknesses of the quality assurance mechanisms adopted by an institution for its own use, in order to continuously monitor and improve the activities and services of a subject, a programme, the whole institution, or a theme. As the 2001 ENQA report Institutional Evaluations in Europe emphasises, the fundamental issue in quality auditing is: how does an institution know that the standards and objectives it has set for itself are being met?

The present study shows that the most common type of audit is ‘institutional audit’, with a 28% regular usage rate. Audit thus comes third among the methods used on a regular basis in European quality assurance. Institutional audit is used regularly by all the Irish and British agencies, for example, and by some of the agencies in the Nordic and associated countries. 11% of the agencies use institutional auditing occasionally, while 56% do not use it. Auditing of programmes, subjects and themes is not very common in European quality assurance.

4.5 Benchmarking

In the same way as the term ‘accreditation’, benchmarking may be discussed as a method or as an element of evaluation. In the present study, benchmarking is defined as a method whereby a comparison of results between subjects, programmes, institutions or themes leads to an exchange of experiences of best practice.

The ‘best practice’ element common to most definitions of benchmarking implies that whereas accreditation procedures are typically based on minimum standards or threshold criteria, benchmarking procedures are typically based on excellence criteria.23 It is, however, possible to do benchmarking without any explicit criteria at all. It should be noted that the term ‘benchmark’ may cause some confusion, as the ‘subject benchmarks’ employed, for instance, by the Quality Assurance Agency for Higher Education in the UK are a set of criteria used as a frame of reference in connection with any evaluation procedure, and do not necessarily include any comparative element.

This study shows that several agencies do experiment with benchmarking in some way or another, but it is probably too early to conclude anything about common procedures.

The results are that the most common form of benchmarking is ‘programme benchmarking’, which is used on a regular basis in 14% of the cases, whereas in 75% of the cases it is not used at all as a method. Subject benchmarking is employed regularly or occasionally by 9% of the responding agencies, while it is not applied in 80% of the cases. Benchmarking of institutions and of themes is rare. Furthermore, it should be noted that none of the agencies carried out benchmarking as their primary activity, and only one agency, the Netherlands Association of Universities of Professional Education, mentioned benchmarking (of programmes) as their second most used type of evaluation.

4.6 Variety of evaluation types in European quality assurance

One of the major conclusions of this study must be that, at a national level, the European quality assurance agencies use a variety of evaluation types. Where the 1998 study showed that evaluation agencies were sticking to the evaluation type (combination of method and focus) that they had traditionally used, the picture today is very different.

Not only do agencies seem to have extended the focus of their evaluations; agencies also tend to combine different types of evaluation, such as institutional auditing with programme evaluation. The diagram in Appendix A shows that the majority of the agencies normally used more than one method on a regular basis. The agencies also tend to be committed to one or two methods, which are then used systematically throughout an area of higher education.

However, on the basis of the questionnaire responses it is not possible to deduce when a certain evaluation type is used. This is definitely an interesting question to examine more closely, but it calls for a qualitative in-depth study of the evaluation history in various countries.

It has often been discussed in the ENQA context that many of the European quality assurance agencies have been through a ‘playing or testing’ phase and that many of them are now changing their evaluation regimes by entering a second, establishment phase. This hypothesis is difficult to confirm on the basis of the questionnaire. However, in the qualitative responses to the questionnaire concerning the future strategies of the agencies, it became clear that some of the agencies are at a turning point in their history. This is the case in the Netherlands and Norway, for example, where the governments have decided on new evaluation strategies that have had organisational consequences for the agencies. The UK is a further example, as a new evaluation regime is being implemented there. Other countries, including Denmark, are preparing and implementing new evaluation strategies.

Parallel to this development among older quality assurance agencies, there is a similar trend in other parts of Europe, where new quality assurance structures and agencies are being established. This is especially true in the German-speaking part of Europe, with more regional quality assurance agencies being established in Germany, and a Swiss agency having recently been established.

23 See section 6, Criteria.

4.7 Summary

The survey shows that European quality assurance can be identified as based on eight main types of evaluation. The survey also demonstrates that most agencies carry out several types of evaluation. It is shown that the principal types24 of evaluation used in European quality assurance are ‘accreditation of programmes’ and ‘evaluation of programmes’. The majority of the participating agencies use both on a regular basis.

In general, programmes are the most frequently chosen focus of the evaluation activities. This is especially characteristic of the field of non-university education, whereas institutions are coming more into focus in university education. This is probably due to the very strong professional emphasis of the programmes in the non-university field.

The most chosen method is still the traditional evaluation, which is used in combination with different foci regularly or occasionally in 49 cases. And in contrast to earlier, the tendency is that one agency very often uses evaluation at different levels, or in other words combines evaluation as a method with different foci.25 Nevertheless, accreditation as a method comes close with 31 cases of regular or occasional use. Accreditation is most used in the associated countries and in the Dutch and German-speaking countries. There do, however, seem to be very big variations in the procedures of accreditation, and the method could be a theme for further investigation.

‘Institutional audit’ forms an exception. Audit is hardly ever used at subject or programme level, or in combination with ‘theme’ as a focus. On the other hand, the combination of audit and institution is the third most popular type of evaluation used, and it is applied primarily in the English-speaking countries.

Finally, the survey shows that several agencies experiment with benchmarking – often combined or integrated with other methods – but as an independent method it has not really come through yet.

24 The term ‘type of evaluation’ comprises a combination of the focus of an evaluation and the method used.
25 Hence the number of 49 that refers to evaluations is only interesting compared to other numbers from the same figure, as each agency may have ticked ‘evaluation’ up to four times in combination with different foci.

5 The Four-Stage Model

In this chapter, practical and methodological issues connected to European evaluation procedures will be dealt with. The structuring principle will be the four-stage model, as applied in connection with the pilot projects:

• Autonomy and independence, in terms of procedures and methods concerning quality evaluation, both from government and from institutions of higher education,

• Self-assessment,

• External assessment by a peer-review group and site visits, and

• Publication of a report.26

The four-stage model is today generally accepted as the shared foundation of European quality assurance, and it has a prominent place both in the Council Recommendation of 1998 and in the criteria for ENQA membership27.

The autonomy and independence of an agency, as dealt with in section 3.3 above, is a complex issue. One indicator of independence that has not been dealt with yet comprises the procedures for appointing members of a peer-review group. In the strict sense of the term, peers are academic professionals representing the academic field being evaluated. The Status Report of 1998 states that “most often they are involved in the evaluation when the self-evaluation reports are delivered”.

The results of the present survey, however, give reason to believe that the concept of peers has been extended in scope in at least two ways. Firstly, several countries have adopted a multi-professional peer concept instead of the single-professional concept presented above. Therefore, the term ‘expert panel’ is used in this report. Secondly, in some countries the experts do much more than conduct site visits. Hence, the experts are dealt with separately in sub-sections 5.1 and 5.2 before the methodological aspects of the four stages, i.e. self-assessment and the practical circumstances related to site visits, are assessed.

These two elements typically form the documentation in external evaluation procedures. Thus almost all agencies participating in Frazer’s survey of 1997 answered that self-evaluation and site visits were part of an external evaluation procedure, and in the Status Report self-evaluation is called “a key piece of evidence”.

The Status Report also mentioned that additional user surveys are carried out in some countries, e.g. Denmark and Finland, and the present survey actually suggests that not only surveys but also various kinds of statistics are used as documentation in evaluation procedures in several countries. Consequently, four sources of documentation are dealt with: self-evaluation (5.3), statistical data (5.4), surveys (5.5) and site visits (5.6). Finally, in sub-section 5.7 the results of the last stage, the publication of a report, are presented; comments on the follow-up of the evaluation procedures are also added there.

The form and content of these elements, and not least their mutual relation, vary considerably. This variation has mainly to do with the national educational systems being evaluated. This point was identified already in the pilot project of 1995, and confirmed in the Status Report. The type of evaluation procedure (method and focus, as described above) is another parameter that may cause variety. From an overall perspective, however, the choice of evaluation procedure does not seem to influence the methodological elements fundamentally, and hence the type of evaluation procedure will not be a structuring principle for the presentation of the methodological elements; they will be dealt with where relevant.28

26 The wording is from the European Pilot Project. The wording may differ in other representations of the model.
27 See also section 3.3.

5.1 Composition and appointment of the expert panel

The procedures for appointing experts could be one indicator (among others) of independence. If, for instance, the evaluated institutions themselves influence the appointment of experts, there would be reason to doubt the independence of the QA agency from the higher education institutions. It follows from this that there is an essential distinction between the right to propose or nominate experts, on the one hand, and the right to decide finally who is chosen, on the other. Nomination and appointment are therefore both elements of the survey.

The respondents were allowed to tick more than one answer in cases where several agents are involved in the nomination or appointment. That is primarily the case in connection with the nomination of experts (on average 1.4 ticks per case), whereas a single agent (1.1 ticks per case), typically the QA agency, is responsible for the final appointment.

The overall picture is that nomination and appointment of experts go very much hand in hand, and that the QA agency is the main agent in connection with the nomination and also plays a major role in the appointment of experts. From an independence point of view this is of course positive.

The differences between nomination and appointment are primarily reflected in the role of the institutions of higher education, which nominated experts in 29% of all cases, whereas they do not take part in the final appointment. Their role in the nomination process is more prominent in the EU/EFTA countries than in the associated countries, where the Lithuanian Centre for Quality Assessment is the only instance of HE institutions nominating experts.

It should be taken into consideration that some agencies may not distinguish between QA staff and experts. Experts may be employed at the QA agency, and/or QA staff may conduct the evaluation procedure without external assistance. The latter may be the case for the Inspectorate of Higher Education in the Netherlands.29

When asked directly whether QA staff are members of the expert panel, the Accreditation Committee of the Czech Republic is the only agency from the associated countries that answers yes, whereas half the agencies from EU/EFTA countries ticked QA staff as members of the expert panel.30

The table below (Figure 5) is a comprehensive presentation of the experts used in relation to the methods employed.

The table, of course, says nothing about the typical composition of an expert group, nor about its size, which may differ from agency to agency and from one evaluation type to another. The impression from the follow-up conversations with the agencies is that expert groups are typically smaller and more homogeneous when conducting accreditation. This impression is reinforced by the fact that each agency ticked on average 3.3 categories in connection with evaluation, but only 2.3 categories in relation to accreditation.

28 The methods are derived from the question in the questionnaire concerning the primary and secondary functions of the agencies. In 10 of the 36 cases, the agency has only one function. Hence the complete number of methods involved in the statistics is 62. The distribution of methods was: evaluation: 31 cases; accreditation: 22 cases; audits: 6 cases; benchmarking: 1 case; and ‘other’: 2 cases. The method is part of the type of evaluation procedure. The figures would be much too small if divided according to focus as well. Thus evaluation, accreditation, and audit may concern a subject, a programme, an institution or a theme. Benchmarking is not presented, as only one agency, the Netherlands Association of Universities in Holland, carries out benchmarking (of programmes) as its secondary function. No agency has benchmarking as a primary function. VSNU Department of Quality Assurance in Holland and Conseil des Recteurs in Belgium do ‘other activities’ that were not included in the statistics. Consequently the column ‘total’ in the following diagrams does not add up to the sum of the three presented methods.

29 This is the interpretation of the fact that the Inspectorate of Higher Education in the Netherlands only ticked one answer, ‘QA staff’, to the question “Who are the members of the external expert panel?”
30 This may be due to QA staff fulfilling very different functions in the expert panel, as discussed in sub-section 5.2.
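The ‘ticks per case’ averages and the column percentages reported throughout this chapter all derive from multi-select questionnaire answers. A minimal sketch of that bookkeeping (the data and category names below are invented for illustration, not taken from the survey):

```python
from collections import Counter

# Invented multi-select answers: for each case (agency/method), the set
# of expert categories the agency ticked. Illustrative data only.
evaluation_cases = [
    {"national experts", "international experts", "students"},
    {"national experts", "international experts", "employers", "QA staff"},
    {"national experts", "students", "employers"},
]

def avg_ticks(cases):
    """Average number of categories ticked per case."""
    return sum(len(case) for case in cases) / len(cases)

def category_shares(cases):
    """Percentage of cases ticking each category. Because answers are
    multi-select, these column percentages sum to more than 100%."""
    counts = Counter(category for case in cases for category in case)
    return {category: round(100 * n / len(cases)) for category, n in counts.items()}

print(avg_ticks(evaluation_cases))        # 10 ticks over 3 cases
print(category_shares(evaluation_cases))  # 'national experts' ticked in every case
```

This is also why a vertical sum of a column in the tables need not add up to 100%, as the table footnotes point out.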

Parallel to this, agencies in the EU countries tend to tick a greater variety of categories than those in the associated countries. In practice, this means that the EU countries are more used to having students (25%) and employers (43%) as members of their expert panels than the associated countries are (13% and 19% respectively). The general picture is that whereas students to a large degree participate in self-assessment and are interviewed during site visits33, their participation in expert panels is still rather limited.34

Figure 4: Nomination and appointment of experts31

[Bar chart: “Who nominates/appoints the members of the external expert panel?” Percentage of cases (0–100%) in which central/regional government, the institutions, the QA agency or other agents are involved in the nomination, respectively the appointment, of the external experts.]

31 ‘Other agents’ mentioned in connection with the nomination of experts are: the Rectors’ Conference (Norway Network Council), professional organisations (Lithuanian Centre for Quality Assessment in Higher Education and HBO-raad in the Netherlands), other experts in the field (Council of Educational Evaluation-Accreditation in Cyprus), previous chairmen of committees (HBO-raad in the Netherlands) and ‘self-nomination’ (Quality Assurance Agency in the United Kingdom). ‘Other agents’ involved in the appointment of experts were the Accreditation Commission and the Higher Education Council (Higher Education Quality Evaluation Centre).

Figure 5: Composition of the expert panel combined with the method used32

Who are the members of the external expert panel?   Evaluation   Accreditation   Audit   Total
National experts representing areas of focus            87%          62%           75%     77%
National experts representing institutions              13%          29%          100%     17%
International experts                                   81%          52%          100%     73%
Students                                                23%          14%           25%     22%
Graduates                                               13%           0%            0%      7%
Employers                                               45%          19%           50%     37%
Staff members of the QA agency                          39%          33%           75%     40%
Professional organisations                              19%           5%            0%     13%
N=                                                      31           21            4       60

32 As the respondents were given the opportunity to tick several answers to each question, a vertical calculation will not add up to 100%.
33 At present, the Nordic Network is conducting a project on student participation in evaluation procedures.
34 See sub-sections 4.3 and 4.4.

Another interesting aspect of the table above is the frequent participation of international experts – most of all in evaluations, but also in general. It is interesting to note that no less than 18 agencies that used international experts also answered that the experts write the final reports and that the reports are typically written in the national language. There may, of course, be several explanations for this. One of them may be that international experts typically come from neighbouring countries or from countries with a common national language. The use of international experts probably needs further study.

5.2 Functions and responsibilities of the expert panel

The figure below illustrates the division of labour between the expert panel and the quality assurance agency. The evaluation process has been divided into phases, and the succession of the phases may of course vary from one evaluation procedure to another. It may be justified, however, to see the x-axis as a time axis, and in that regard the figure gives a clear picture of the agency starting up the process and the experts gradually taking over.

The crucial point, at which the experts take over to a large degree, is the site visit. In 30 cases, experts are involved in the ‘planning of site visits’ (in 7 cases without assistance from the agency)35, and in 23 cases the experts are involved in the ‘preparation of guidelines for the site visits’ (in 5 cases without assistance from the agency). But the Ministry of Education, Science and Culture, Division of Evaluation and Supervision in Iceland, the National Evaluation and Accreditation Council in Bulgaria, and the National Institute for Accreditation of Teacher Education in Portugal are examples of agencies where the experts are involved in the process from day one.

Figure 6: Division of labour between the expert panel and the agency

[Bar chart: “Who performs the following functions in the evaluation?” Percentage of cases (0–100%) in which each function is performed by the experts, by both, or by the agency. Functions, from left to right: choice of basic methodology; preparation of guidelines for self-evaluation; preparation of the evaluation concept; contact with the institutions; planning of the site visits; preparation of guidelines for the site visits; writing of the report.]

35 The actual conduct of the site visits is discussed in sub-section 5.6.

Reading Figure 6 from the left to the right, ‘preparation of the evaluation concept’ is the first phase during which experts in some instances act without assistance from the agency. That is the case with the Ministry of Education, Science and Culture, Division of Evaluation and Supervision in Iceland, and the Estonian Higher Education Accreditation Centre.

The National Institute for Accreditation of Teacher Education in Portugal and the Vlaamse Hogescholenraad in Belgium are examples of the expert panel taking over completely from the phase of ‘contact with the institutions’ for the rest of the evaluation phases. In more than half the cases, however, the agency is involved in all phases, including the final drafting of the report, and in 16% of all cases the agency writes the report without the assistance of experts. By contrast, experts write the reports without the assistance of the agency in 49% of all cases. Thus, the agency and the experts draft reports jointly in 35% of all cases. This cooperation may cover a variety of roles, ranging from the agency functioning as secretary for the experts to the agency having sole or shared responsibility for the content of the report.

On average, the experts have exclusive responsibility in 45% of all cases, the QA agency and the expert panel share the responsibility in 40% of all cases, and the QA agency has exclusive responsibility in 15% of all cases. Figure 7 below shows that the division of responsibility is approximately the same with regard to ‘description and analysis’, ‘conclusions’ and ‘recommendations’.

Figure 7 is divided into EU/EFTA countries and associated countries, as the survey shows a clear tendency for the agency to have or share more responsibility for the content of the report in EU/EFTA countries than in the associated countries. Other aspects, such as the scope, method and focus of the evaluation procedure, do not seem to influence the division of responsibility.

Figure 7: Division of Responsibility

[Bar chart: “Who is responsible for the different parts of the evaluation?” Percentage of cases (0–100%) in which responsibility lies with the experts, with both, or with the agency, for ‘description and analysis’, ‘conclusions’ and ‘recommendations’, shown separately for EU/EFTA and associated countries.]

Figure 6 shows the division of labour during the actual performance of the different phases of an evaluation, whereas Figure 7 shows the division of responsibility. If one looks at the two figures together, the following picture emerges: the agency typically performs the majority of the functions of an evaluation procedure, and the experts typically have the primary responsibility.

Compared to earlier reports and investigations mentioned above, this picture gives the impression of a development where the workload of the experts has been reduced a little. This could be seen in relation to the recommendation in the Status Report: “As more and more countries and institutions initiate evaluation procedures the need for experts grows, and good experts will to an increasing extent be in demand. In this situation a division of labour between an evaluation agency and the experts that reduced the workload of the experts may be a decisive factor in the recruitment of the latter.”

Another possible explanation for the division of labour and responsibility could be the need for more professionalism in the performance of the functions of an evaluation procedure, as the field of evaluation is expanding and the demand for transparency and comparability, both nationally and internationally, is increasing considerably.

5.3 Self-evaluation

Since almost all agencies answer that quality improvement is an objective of their primary activity, self-evaluation is, not surprisingly, still an element in most evaluation procedures: self-evaluation is included in 94% of the evaluations, but only in 68% of the accreditation processes. This could be expected, as the process of self-evaluation is typically seen as having a dual purpose in evaluation procedures: on the one hand, the process of self-evaluation results in documentation; on the other hand, it forms the basis for continuous improvement.

However, five agencies answer that self-evaluation groups are not used in their primary activity; four of them carry out accreditation of programmes or institutions as their primary activity. The fifth, the QAA in Great Britain, is working on a new ‘lighter touch’ concept excluding self-evaluation.

An analysis of the agencies that do work with self-evaluation groups shows the composition of the groups presented in Figure 8 (due to the low values, the percentages should be taken with certain reservations).

Figure 8: Composition of the self-evaluation group combined with the method36

Who, typically, represent the institution
in the self-evaluation group?           Evaluation   Accreditation   Audit   Total
Students                                   72%           25%          100%     59%
Graduates                                   7%           13%           25%     12%
Management                                 83%           56%          100%     75%
Teaching staff                             79%           69%          100%     77%
Administrative staff                       59%           69%           50%     61%
N=                                         29            16           4        51

36 As the respondents have the opportunity to tick several answers to each question, vertical calculations do not add up to 100%.

The overall picture is that management and teaching staff are usually part of the self-evaluation group, whereas graduates rarely participate. The participation of administrative staff and students varies considerably, and for the latter there seems to be a connection to the method used: students are usually represented in evaluations, but rarely in accreditation procedures. In addition, the survey shows a tendency for students to be more represented in the EU countries (62%) than in the associated countries (47%), and more in the university sector (67%) than in the non-university sector (46%). The only indicator of diversity in the representation of administrative staff seems to be the status of the country: representation by administrative staff is more common in the associated countries (73%) than in the EU countries (56%).

5.4 Statistical data

In this report, statistical data is understood as pre-existing data, typically output data on student drop-out rates, average time of study, staff numbers, etc. In the Status Report, quantitative data were presented in connection with the self-evaluation report: as a supplement to the qualitative reflections, the institutions were asked to provide a range of quantitative data in the self-evaluation report. And in most cases, the examination of documentary evidence, statistical data among it, was an important part of the site visits37. Therefore statistical data has not been dealt with as a separate methodological element. There are, however, cases where statistical data provided by external agents are taken into consideration. In Denmark, the use of labour market statistics is an example.

In all the survey data, there are only two examples of statistical data not being used; every other participating agency used statistical data in connection with their primary activity. The various types of statistical data are distributed as shown in Figure 9.

The total numbers show that data on students and on teaching staff are used most frequently. It is remarkable that labour market statistics are used far more often in connection with evaluation than with accreditation. One explanation may be that relevant data do not yet exist in connection with ex ante accreditation. Labour market statistics could be an important indicator of relevance in connection with the approval of new programmes. Neither the status of the country nor the scope of the agency (university/non-university) seems to influence the use of labour market statistics.

The variety in the use of financial key figures seems to be related to both the status of the country and the scope of the agency: key figures are used in 64% of cases in the university sector and in 66% of cases in the EU countries, but only in 44% of cases in the non-university sector and in 44% of cases in the associated countries. Not surprisingly, the use of financial figures is also strongly represented in evaluation procedures with an institutional focus, as is data on administrative staff: financial key figures and data on administrative staff are used in 4 out of 5 evaluations, accreditation processes or audits at institutional level.

Figure 9: Types of statistical data combined with the method38

What sort of statistical data is used?   Evaluation   Accreditation   Audit   Total
Data on students                             100%           91%        75%     92%
Financial key figures                         61%           48%       100%     58%
Data on administrative staff                  61%           71%        75%     65%
Data on teaching staff                        90%           95%       100%     90%
Labour market statistics                      61%           38%        25%     50%
N=                                             31            21          4      60

37 See Figure 10 later in this chapter.
38 As the respondents had the possibility to tick several answers to each question, a vertical sum will not give 100%.

5.5 Supplementary surveys

The borderline between statistical data and supplementary surveys is not completely clear. If statistical data are defined as pre-existing data, supplementary surveys will typically be produced in connection with an evaluation procedure, and they can be either quantitative or qualitative (questionnaires, interviews, etc.). The choice of using supplementary surveys can partly be seen in relation to the amount and reliability of the existing statistical data:
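As footnote 38 notes, respondents could tick several answers to each question, so the columns of Figure 9 are not expected to sum vertically to 100%. A minimal sketch of how such a multi-response cross-tabulation can be computed is shown below; the respondents and data-type labels are invented for illustration and are not the survey's actual data.

```python
# Multi-response cross-tabulation: each respondent reports one method
# (evaluation, accreditation, audit) but may tick SEVERAL data types,
# so column percentages can legitimately exceed 100% in total.
# The respondent records below are hypothetical example data.
from collections import Counter

respondents = [
    {"method": "evaluation", "data_types": ["students", "teaching staff"]},
    {"method": "evaluation", "data_types": ["students", "labour market"]},
    {"method": "accreditation", "data_types": ["students"]},
    {"method": "audit", "data_types": ["students", "financial", "teaching staff"]},
]

def crosstab_percentages(rows):
    """Return {method: {data_type: % of that method's respondents using it}}."""
    n_per_method = Counter(r["method"] for r in rows)
    counts = {}
    for r in rows:
        for dt in r["data_types"]:
            counts.setdefault(r["method"], Counter())[dt] += 1
    return {
        method: {dt: round(100 * c / n_per_method[method]) for dt, c in cnt.items()}
        for method, cnt in counts.items()
    }

table = crosstab_percentages(respondents)
print(table["evaluation"])  # {'students': 100, 'teaching staff': 50, 'labour market': 50}
```

Because each respondent can contribute to several rows of the same column, the evaluation column here sums to 200%, which is the same effect footnote 38 warns about for Figure 9.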


When is it felt necessary to supplement the existing data with more data, or with data of a qualitative kind?

These deliberations concerning the definitions of ‘statistical data’ and ‘supplementary surveys’ originate in this survey; the agencies did not have access to any definitions when filling in the questionnaires. The present data should therefore be treated with due reservation.

As mentioned on page 24, the use of supplementary surveys is not as common as the use of other methodological elements. Nevertheless, about half the participating agencies (53% of the cases) make use of some kind of supplementary survey. That number, however, hides the fact that supplementary surveys are rarely used in connection with accreditation: 81% of respondents said that surveys are not used there, whereas surveys are part of 77% of the evaluations. There are minor differences related to the status of the country and the scope of the evaluation procedure; surveys tend to be used more in the EU countries than in the associated countries, and more in the university sector than in the non-university sector.

Surveys of graduates are used most frequently. Neither the status of the country nor the scope of the evaluation is of much significance in this regard. The fact that surveys of graduates are the most common kind could be due to graduates being the group most readily accessible in that way. But the same could be said about employers, who are nevertheless the group least often surveyed.

There seems to be a connection between the use of surveys of employers and the use of labour market statistics: 12 of the 13 respondents who use surveys of employers also use labour market statistics. This connection indicates that surveys of employers are used to assess the employability of candidates. Unfortunately there was no item in the questionnaire dealing with the employability or the relevance of programmes, subjects or institutions, and the questionnaires show no clear connection between the use of surveys (whether of employers or in general) and the aspects assessed in an evaluation.

Nor does the questionnaire show any connection between the use of surveys and the objectives of the evaluation procedures, so a central question remains unanswered: how, and to what extent, do supplementary surveys contribute to the evaluation procedures? The emphasis in this question has to do with the expense connected with the use of surveys as documentary evidence.

Whether the expense takes the form of time or money depends partly on whether the agency itself conducts the survey, as in 44% of the cases, or an external agency does so. The latter, however, is true of only three agencies: the Zentrale Evaluations- und Akkreditierungsagentur Hannover in Germany, the Fachhochschulrat in Austria, and the Danish Evaluation Institute. The most common model, found in 20 out of 32 cases, is that the institutions conduct the surveys themselves.39

5.6 Site visits

Site visits can with some justification be regarded as the standard follow-up to the self-evaluation reports. There may be differences depending on whether the central focus is on controlling or on elaborating the content of the reports, but the two sources of documentary evidence seem to be closely connected. It is therefore interesting that site visits are still standard in almost all evaluation procedures, even though the analysis above shows a tendency for self-evaluation to be less widely used than earlier, mainly in accreditation procedures. Only in two cases are site visits not used: in Norway, during accreditation of programmes (where self-evaluation is not used either), and in the Netherlands, when the HBO-raad does benchmarking of programmes (where self-evaluation is used, but site visits are not). This means that in 8 cases (distributed over 6 agencies in 5 countries) site visits are conducted without a preceding self-evaluation.

The average length of site visits in these 8 cases is two days. The total average length is also two days, so the length does not indicate that site visits in these cases have a compensatory function. In general, site visits last from 1 to 5 days, with 2 days as the most common duration. Evaluation procedures at the institutional level typically last longer, especially the audits at institutional level in Ireland, Italy and Sweden, where the average length is 3.3 days.

39 This could be an indication of the blurred borderline between statistical data and supplementary surveys.

As noted earlier, experts have varying functions and responsibilities. Their core function, however, seems to be to conduct site visits. In 83% of all cases (49 cases), the expert panel conducts the site visits, and in 15 cases (25% of all cases) QA agency staff accompany them. It is, however, notable that the agency conducts the site visits without external experts in 15% (9) of all cases. With Romania as the only exception, all these 9 cases are in EU countries, evenly distributed among the various methods.

The content of site visits, distributed among the various methods, is shown in the table below (Figure 10).

The overall picture is one of mutual agreement on the elements constituting site visits: almost every participating agency works with interviews, tours of the facilities, final meetings with the management, and the examination of documentary evidence. There are no great differences related to the method used. Of course there may be great differences in the content of each element, but not in who is interviewed during the site visits: everybody agrees that teaching staff and students should be interviewed. Whereas the presence of students in self-evaluation groups, and not least in the expert panel, is perhaps in some cases controversial, there is no disagreement on the relevance of students’ statements during site visits.

In half the cases, graduates are interviewed as well; whether or not graduates are interviewed does not seem to depend on the status of the country or on the scope or type of the evaluation procedure. In 69% of all cases, administrative staff are interviewed. Interviews with administrative staff are most common in evaluation procedures at the institutional level, above all in audit procedures.

The most controversial element of the site visits seems to be classroom observation, which is used in 25% of the cases. The occurrence of classroom observation does not depend on the scope or type of evaluation procedure, but it does seem to be a much more frequent element in the associated countries than in the EU countries: classroom observation is used in 50% of the 16 cases in the associated countries, but in only 17% of the 41 possible cases in the EU countries.

Figure 10: Content of site visits combined with the method40

Which elements are included in the site visit?   Evaluation   Accreditation   Audit   Total
Interviews                                          100%           95%        100%     97%
Classroom observations                               20%           33%         40%     25%
Tour of the facilities                               87%           91%         80%     86%
Final meeting with the management                    73%           67%         80%     71%
Examination of documentary evidence                  90%           90%         90%     90%
N=                                                    30            21           5      31

40 As the respondents had the opportunity to tick several answers to each question, a vertical calculation will not add up to 100%.

5.7 Report and follow-up

The last stage of the four-stage model is the publication of a report. With the exception of the EUA, which ticked ‘the evaluated institutions’, the quality assurance agencies themselves published the report in practically all cases. The Ministry of Education, Science and Culture, Division of Evaluation and Supervision in Iceland, and the National Council for Academic Assessment and Accreditation in Romania, ticked ‘central government’ as joint publisher with the quality assurance agency.

It is interesting that a report is not published in 13% of all cases. Six agencies answered that a report is not published when they carry out their primary activity. Typically this activity is accreditation (the Akkreditierungsrat in Austria, the State Commission for Accreditation in Poland, the Council of Educational Evaluation-Accreditation in Cyprus and the Accreditation Commission of the Czech Republic), and in both Poland and the Czech Republic reports are published when the agencies do evaluations (their secondary activity). The Fachhochschulrat in Austria and the Conseil des Recteurs in Belgium do not publish reports when carrying out evaluations of programmes.

Furthermore, there are examples of agencies, e.g. the Akkreditierungsrat in Germany, that publish abridged editions of their reports. In general, there is some variation in the content of the published reports. Almost all reports (91% of the cases) contain conclusions, and a large majority (81% of the cases) also contain analyses. Only in 30% of the cases do reports contain empirical evidence. In 89% of the cases the reports include recommendations, but unfortunately the study does not indicate at whom the recommendations are aimed. However, examining who is formally responsible for following up on the recommendations may give an indication of the intended recipients of the reports, as all agencies replied that some kind of follow-up or action is taken.

In 39% of all cases the quality assurance agency is responsible for the follow-up on the evaluations, in 46% of all cases the government (central or regional) is responsible, and in 76% of the cases the evaluated institution is responsible. The fact that the figures add up to more than 100% is due to many agencies having mentioned several responsible agents. In some cases, including the Danish Evaluation Institute, this may be a result of reports containing recommendations at different levels, some aimed at the legislative and regulatory level, some aimed directly at the evaluated institutions. In other cases, one could presume that the division of responsibility has to do with the interpretation of the word ‘formal’: the evaluated institutions are directly responsible for following up on the recommendations, and the government is responsible for taking subsequent action in the event of a lack of institutional follow-up.

It is common practice to consult the evaluated institutions before the reports are published, but only in 8% of all cases is the government consulted, while other agents mentioned in the questionnaire, such as rectors’ conferences, associations of universities, professional organisations, and the labour market, are rarely consulted.

From an international point of view, whenever transparency is on the agenda, the language of the reports is of course of interest. In 76% of the cases the reports are mainly written in the national language, while in 35% of the cases English is mainly used. Apart from the agencies in English-speaking countries and the EUA, seven agencies write their reports in English: the Estonian Higher Education Accreditation Centre, the Council of Educational Evaluation-Accreditation in Cyprus, the Austrian Accreditation Council, the Finnish Higher Education Evaluation Council, the Ministry of Education, Science and Culture, Division of Evaluation and Supervision in Iceland, the Higher Education Quality Evaluation Centre in Latvia, and the Netherlands Association of Universities of Professional Education.


5.8 Summary

The variety in the evaluation types used also causes differentiation in the methodological elements applied, and differences compared to 1998. For instance, there are examples of accreditation procedures where self-evaluation does not take place, where external experts are not used, and where reports are not published. In general, however, the four stages mentioned in the pilot projects and reflected in the Council Recommendation are still common features of European quality assurance.

All agencies use external experts. Most often these are experts representing the subject area, and very often international experts are represented in the expert panel, but these are probably from neighbouring countries or countries sharing the same language. In a few cases students are included in the expert panel. In general the expert panels seem more multifaceted in the EU/EFTA countries than in the associated countries.

The experts are typically appointed by the quality assurance agency, but in a third of all cases higher education institutions have taken part in the nomination of the experts. The experts have varying functions and responsibilities. Their core function, however, seems to be site visits, and in half the cases they also write the reports without the assistance of the agency. In another third of all cases they draft the reports in co-operation with agency staff. The agency seems more involved in the different functions of an evaluation process in the EU/EFTA countries than in the associated countries.

Self-evaluation is included in 94% of the evaluations, but only in 68% of the accreditation processes. Management and teaching staff are usually part of the self-evaluation group, whereas graduates rarely participate. The participation of administrative staff and students varies considerably, and for the latter there seems to be a connection to the method used: students are usually represented in connection with evaluations, but rarely in connection with accreditation. As documentary evidence, the self-evaluations are in almost all cases supplemented with statistical data, and in about half the cases also with some kind of supplementary survey.

With the exception of two cases, site visits are part of all evaluation processes in Europe. The average length of the site visits is two days, but site visits in connection with audits typically last longer. The results of the survey demonstrate mutual agreement on the elements constituting site visits: almost every participating agency works with interviews, tours of the facilities, final meetings with the management, and the examination of documentary evidence. The most controversial element of the site visits seems to be classroom observation, which is used in 25% of the cases.

Reports are published in almost all cases of evaluation, but are sometimes skipped in connection with accreditation. The reports typically contain conclusions and recommendations, and very often they also contain analysis, while empirical evidence is included in only a third of all cases. It is common practice to consult the evaluated institutions before the reports are published, whereas other agents are rarely consulted. In three quarters of all cases the evaluated institutions are also responsible for following up on the recommendations, while the quality assurance agency and the government are responsible in a little less than half of the cases. But all respondents agree that follow-up takes place in one way or another.


6 Criteria and standards

According to the 1998 Status Report on European quality assurance, quality was at that time as a rule interpreted in terms of the extent to which individual programmes achieved their own goals and the legal provisions under which they operate. This approach is commonly referred to as the ‘fitness for purpose’ approach.

In accreditation, where it is decided whether a programme, institution or other unit meets certain external standards, criteria and standards have long been a well-known feature and a regular element of the process. Interestingly, the survey results show that, not only in accreditation but in quality assurance in general, the use of criteria and standards has become a common element of the evaluation process. Almost all countries use criteria and standards in one form or another in their evaluation procedures.

6.1 Use of criteria and standards in European quality assurance

On the basis of the survey results it is obvious that the distinction between the terms ‘criteria’ and ‘standards’ is blurred. Nevertheless, the term ‘standards’ seems to be used more in connection with accreditation, while the term ‘criteria’ is more often linked to evaluation. The standards used in accreditation seem to function as threshold values, often formulated by government or other educational authorities.

The term ‘criteria’ seems to be of a broader nature; in evaluation, criteria imply a sort of reference point in measuring the quality of education. Criteria are not fixed, but function as suggestions or recommended points of reference for good quality against which the subject, programme or institution is evaluated.

Cases where ‘criteria’ is used as a synonym for standards indicating a minimum threshold add to the confusion. When criteria and standards are used in the same evaluation, it is difficult to identify the internal hierarchy between the terms in the survey data.

Finally, there is a difference between explicitly formulated criteria, which are written down and made available, and implicit criteria of good practice, which are often formulated through the agency’s guidelines for self-evaluation, or by the expert panel while writing the report, but are not explicitly set out in writing41.

The focus of the evaluation procedures seems to mark the line between the use of criteria or otherwise. All accrediting agencies use criteria or standards. Most of the agencies that regularly conduct evaluations of institutions and subjects, and institutional audits, use criteria, and so do 10 out of 14 of the agencies carrying out programme evaluation. The exceptions to this rule are the remaining 4 of the 14 agencies conducting programme evaluation, and the one agency conducting programme audits: they do not use criteria, but the ‘fitness for purpose’ approach.

In analysing the use of criteria by the 34 agencies, it is interesting to investigate who sets the criteria for good quality in education, that is, who formulates them. This differs both from one agency to another and, within an agency, depending on the type and focus of the evaluation. Criteria can be formulated by an agency, a government body, an expert group, or a professional organisation, but criteria are also often formulated jointly by diverse stakeholders.

41 Unfortunately, we did not make this distinction in the survey, and may have added to the confusion. We have been loyal to the information on the use of criteria and standards supplied by the agencies. However, after analysing the responses, we are aware that some of the criteria mentioned seem to be more of an implicit than of an explicit nature.


6.2 Use of criteria and standards in accreditation

There are long-standing traditions of accreditation and the use of pre-formulated standards in associated countries such as Bulgaria, Estonia, the Czech Republic, Hungary and Lithuania. Standards for the different fields are often formulated by members of an accreditation commission.

In Germany, general standards for the accreditation of new bachelor’s and master’s degree courses and continuing education courses have been formulated. These include basic guidelines and research and application orientation. General requirements have been laid down by the Education Ministers’ Conference (KMK), the Conference of Presidents and Rectors of Universities and Other Higher Institutions (HRK), and the Accreditation Council (AR), and all evaluation agencies in Germany must follow these minimum standards. For example, the following two threshold criteria are used: more than 30% (at universities 50%) of academic staff must hold a PhD, and more than 50% of the academic staff must have full-time appointments. In addition, the regional evaluation agencies can apply further standards within the context of these guidelines. ZEvA43, for example, has developed further criteria44.

The Austrian accreditation standards are very similar to the German standards. However, the standards apply at the institutional level, which seems natural as the primary evaluation activity of the Austrian Accreditation Council (AAC) concerns the accreditation of institutions. An application must include proof that the minimum standards concerning research staff, curricula design, and range of study courses45 will be fulfilled.

Figure 11: Frames of reference combined with the type of evaluation

43 Zentrale Evaluations- und Akkreditierungsagentur Hannover.
44 E.g. a research-oriented bachelor’s degree must involve 50% teaching of basic scientific principles, 20% method and 30% subject knowledge, whereas an application-oriented bachelor’s degree must involve 30% basic scientific principles, 20% method and 50% subject knowledge. Other criteria are that the programmes must guarantee that students obtain general key qualifications (competencies), that degree courses with a foreign orientation are internationally recognised, that degree courses must be offered in modular form, etc.
45 The standards constitute a set of criteria for assessing the extent to which the programmes meet the demands of teaching performance, and focus on the following areas: (i) the programme’s professional objectives, co-ordination and regulation; (ii) collaborative and partnership efforts for developing the programme; (iii) programme curriculum; (iv) selection and evaluation of trainees, professional qualification and certification; and (v) teaching and non-teaching personnel and materials.

In Southern Europe, the National Institute for Accreditation of Teacher Education in Portugal (INAFOP) used legal regulation and standards set by the agency. The standards for initial teacher education were approved by the agency in 2000 and referred to the accreditation of programmes giving access to professional qualifications for teaching in basic education (including pre-school education) and secondary education. The professional teaching profiles approved by the Government in 2001 set the requirements for teaching performance, and were used as criteria to evaluate the adequacy of the goals of the teacher education programmes.

6.3 Use of criteria and standards in other types of evaluation

In programme evaluation, the use of criteria is becoming a common feature. For example, in the Netherlands non-university education is evaluated by the HBO-raad. A framework for assessment46 is set for all quality assessments. The Board of the Association determines the evaluation questions included in the framework, and these are binding both on the expert panel and on the Universities of Professional Education when drawing up their self-evaluation reports. The framework for assessment is based on three principal quality perspectives: the relation between the study programme and the labour market, the effectiveness and efficiency of the study programme, and the conditions necessary for quality management and improvement. The framework consists of 24 evaluation questions divided over the three quality perspectives, with corresponding criteria. The criteria are limited to the requirements to be met by the quality of the aspect in question. The results of the assessments are rated on a scale of poor, insufficient, moderate, sufficient or good.

The Italian agency Comitato Nazionale per la Valutazione del Sistema Universitario uses criteria in its primary evaluation activity, the institutional audit. Finally, the French Comité national d’évaluation uses criteria formulated by the agency in its institutional evaluations, but the criteria are adapted to each evaluation.

In the United Kingdom, academic review47 is the new, integrated method of review that focuses on the establishment, maintenance and enhancement of quality and academic standards. It has been used in Scotland since October 2000, and since January 2002 it has been in use across the whole of the UK. It operates on a six-year cycle, with each institution and all subjects being reviewed once in each cycle. For each subject area in an institution, a judgement is made about academic standards48. Further, for each subject area reviewed in an institution, judgements about the quality of learning opportunities offered to students are made against the broad aims of the provision and the intended learning outcomes of the programmes. Each of these three categories is judged as either commendable, approved or failing. Finally, an institutional review addresses the ultimate responsibility for the management of quality and standards that rests with the institution as a whole. This draws on the evidence of the subject-level reviews, and uses points of reference provided by sections of the so-called Code of Practice49.

46 The Basic framework for the assessment of higher professional education, as used by the Netherlands Association of Universities of Professional Education for its quality assessment procedure, November 1999.

47 Quality Assurance in UK Higher Education: a Brief Guide, 2001. Quality Assurance Agency for Higher Education.
48 Reviewers consider: 1) whether there are clear learning outcomes that have been set appropriately in relation to the qualifications framework and any relevant subject benchmark statements; 2) whether the curriculum is designed to enable the intended outcomes to be achieved; 3) whether assessment is effective in measuring achievement of the outcomes; and 4) whether student achievement matches the intended outcomes and the level of the qualification. In the light of this, reviewers state whether they have confidence, limited confidence, or no confidence in standards.
49 In the United Kingdom, clear reference points are used to explain the basis on which judgements are made. The Agency works with the higher education sector to maintain a qualifications framework, subject benchmarks and the Code of Practice, and to provide guidance on programme specification. The qualifications framework sets out the attributes and abilities that can be expected of the holder of a qualification. The subject benchmark statements concern the conceptual framework that gives a discipline its coherence and identity, and define what can be expected of a graduate in terms of the techniques and skills needed to develop understanding in the subject. Programme specifications are standard sets of information that each institution provides about its programmes. Finally, the Code of Practice sets out guidelines on good practice in relation to the management of academic quality and standards. Each section of the Code of Practice has precepts or principles that institutions should demonstrate they meet, together with guidance on how they may do so. The Code of Practice provides a point of reference for use in the Agency’s reviews.


In Northern Europe, the ‘fitness for purpose’ approach is still dominant in the evaluations. In use are implicit criteria of good practice and aspects that must be highlighted in the guidelines and at the site visits. An exception is the Danish Evaluation Institute, which is developing its approach from a ‘fitness for purpose’ principle applying implicit criteria towards more explicit criteria-based evaluations. This has primarily been the case in the evaluation of master’s degree programmes of further education, and the approach is also intended to be implemented in the evaluation of non-university education. However, as in other accreditation practices, the use of criteria has been a regular component of the accreditation of non-university programmes, where a decision of approval, conditional approval or non-approval is made.

6.4 Summary

It is an interesting result of the survey that a new characteristic dimension is emerging as a common feature of European quality assurance practices, namely the use of criteria and standards.

In contrast to 1998, when the terms of reference of the evaluation procedures were typically legal regulations and the stated goals of the evaluated institutions, today almost all agencies use some kind of criteria. This is, of course, especially true of accreditation procedures, where threshold criteria or minimum standards are used in order to judge whether thresholds are met. But the trend is also evident in other evaluation procedures, for instance when ‘good practice’ criteria are applied. In several countries, however, the criteria used are still not explicitly formulated.

The questionnaires and the attached material from the European quality assurance agencies suggest that a number of issues ought to be investigated further in relation to the use of criteria and standards: What is the difference between criteria and standards? When does an agency work with threshold criteria, and when does it work with best-practice criteria? Is it important whether the criteria are explicitly formulated or not? Who formulates the criteria? And to what extent do agencies work with a pre-formulated set of criteria suitable for every evaluation process?

There is no doubt that standards and criteria are relevant tools in connection with transparency, both nationally and internationally, but the essential question is of course the extent to which their use promotes the continuous quality improvement of the higher education institutions.


Appendix A: Participating agencies and the use of different evaluation types

Name of Evaluation Institute | Country | Evaluation type used regularly50 | Evaluation type done occasionally
European University Association (EUA) | | Institutional audit; Institutional evaluation; Benchmarking of institutions | Theme evaluation; Audit at theme level
Fachhochschulrat (FH-Council) | Austria | Programme evaluation; Institutional evaluation; Accreditation of programme |
Österreichischer Akkreditierungsrat (Austrian Accreditation Council, AAC) | Austria | Accreditation of programme; Accreditation of institution; Institutional audit |
Vlaamse Interuniversitaire Raad | Belgium | Programme evaluation |
Vlaamse Hogescholenraad | Belgium | Audit at programme level |
Conseil des Recteurs (CRef) | Belgium | Programme evaluation; Other evaluation type51 |
National Evaluation and Accreditation Agency at the Council of Ministers (NEAA) | Bulgaria | Programme evaluation; Institutional evaluation; Accreditation of programme; Accreditation of institution |
Council of Educational Evaluation-Accreditation | Cyprus | Programme evaluation; Accreditation of programme |
Accreditation Commission of the Czech Republic | Czech Republic | Accreditation of programme | Institutional evaluation
The Danish Evaluation Institute, university and non-university | Denmark | Programme evaluation; Accreditation of programme; Other evaluation type | Institutional evaluation; Theme evaluation
Estonian Higher Education Accreditation Centre | Estonia | Accreditation of programme; Other evaluation type52 |
Finnish Higher Education Evaluation Council | Finland | Programme evaluation; Institutional evaluation; Theme evaluation; Institutional audit; Benchmarking of institutions; Benchmarking of themes | Accreditation of programme; Accreditation of institution; Benchmarking of subjects; Benchmarking of programmes
Comité national d’évaluation | France | Institutional evaluation |
Akkreditierungsrat | Germany | Accreditation of programme; Accreditation of institution |

50 As the answers were based on a questionnaire, it is difficult to know how the term ‘regularly’ was interpreted by the respondents. As the size of the higher education sector and of the agencies varies between the member countries, regular practice varies correspondingly.
51 Transversal evaluation of a programme through all concerned universities (agency’s own text).
52 Research evaluation.


Zentrale Evaluations- und Akkreditierungsagentur Hannover (ZEvA), Germany (Niedersachsen)
Used regularly: Subject evaluation, Accreditation of programme

Evaluation Agency of Baden-Württemberg, Germany (Baden-Württemberg)
Used regularly: Subject evaluation
Done occasionally: Programme evaluation

Hungarian Accreditation Committee, Hungary
Used regularly: Accreditation of programme, Accreditation of institution

Ministry of Education, Science and Culture, Division of Evaluation and Supervision, Iceland
Used regularly: Programme evaluation

Higher Education and Training Awards Council (HETAC), Ireland
Used regularly: Programme evaluation, Institutional evaluation, Accreditation of programme, Accreditation of institution, Institutional audit
Done occasionally: Theme evaluation, Benchmarking of subjects, Benchmarking of programmes

Higher Education Authority, Ireland
Used regularly: Institutional audit

National Qualification Authority of Ireland, Ireland
Used regularly: Institutional audit

Comitato Nazionale per la Valutazione del Sistema Universitario (CNSVU), Italy
Used regularly: Programme evaluation, Institutional evaluation, Accreditation of institution, Institutional audit
Done occasionally: Audit at programme level, Benchmarking of institutions, Evaluation of theme

Higher Education Quality Evaluation Centre, Latvia
Used regularly: Programme evaluation, Institutional evaluation, Accreditation of programme, Accreditation of institution
Done occasionally: Theme evaluation

Lithuanian Centre for Quality Assessment in Higher Education, Lithuania
Used regularly: Programme evaluation, Accreditation of programme, Benchmarking of programmes, Accreditation of institution, Audit at subject level, Audit at programme level, Institutional audit
Done occasionally: Subject evaluation, Institutional evaluation, Theme evaluation

Inspectorate of Higher Education, The Netherlands
Used regularly: Programme evaluation, Theme evaluation, Benchmarking of subjects, Benchmarking of institutions, Benchmarking of themes, Benchmarking of programmes
Done occasionally: Institutional evaluation, Institutional audit, Other evaluation type [53]

VSNU Department of Quality Assurance, The Netherlands
Used regularly: Subject evaluation, Programme evaluation, Benchmarking of subjects, Benchmarking of programmes, Other evaluation type [54]

Netherlands Association of Universities of Professional Education (HBO-raad), The Netherlands
Used regularly: Programme evaluation, Accreditation of programme, Benchmarking of programmes

[53] Evaluation or auditing is a licensing procedure (agency's own text).
[54] Evaluation of research (agency's own text).


Network Norway Council, Norway
Used regularly: Accreditation of programme, Institutional evaluation, Institutional audit
Done occasionally: Programme evaluation

State Commission for Accreditation, Poland
Used regularly: Programme evaluation, Accreditation of programme
Done occasionally: Audit at programme level, Institutional audit

National Institute for Accreditation of Teacher Education (INAFOP), Portugal
Used regularly: Accreditation of programme

National Council for Academic Assessment and Accreditation, Romania
Used regularly: Accreditation of programme, Accreditation of institution, Audit at programme level, Institutional audit, Benchmarking of subjects, Benchmarking of programmes

Agència per a la Qualitat del Sistema Universitari a Catalunya, Spain (Catalonia)
Used regularly: Programme evaluation, Evaluation of theme
Done occasionally: Benchmarking of themes, Evaluation of themes

National Agency for Higher Education (Högskoleverket), Sweden
Used regularly: Subject evaluation, Programme evaluation, Accreditation of subject, Accreditation of programme, Institutional audit, Other evaluation type [55]
Done occasionally: Theme evaluation, Benchmarking of themes

The Quality Assurance Agency for Higher Education (QAA), United Kingdom
Used regularly: Subject evaluation, Programme evaluation, Institutional audit
Done occasionally: Benchmarking of subjects

[55] Examensrättprövning, i.e. review of the right to award degrees (agency's own text).


Appendix B: Definitions

The following suggested definitions were sent together with the questionnaire to the participating agencies. The complete questionnaire can be obtained upon request from the ENQA Secretariat at: [email protected].

An evaluation [56] could be one of the following:

• An evaluation of a subject [57], which focuses on the quality of one specific subject, typically in all the programmes in which this subject is taught.

• An evaluation of a programme, which focuses on the activities within a study programme, which in this context is defined as studies leading to a formal degree.

• An evaluation of an institution, which examines the quality of all activities within an institution, i.e. organisation, financial matters, management, facilities, teaching and research.

• An evaluation of a theme, which examines the quality or practice of a specific theme within education, e.g. ICT or student counselling.

• An audit, which is an evaluation of the strengths and weaknesses of the quality mechanisms established by an institution itself to continuously monitor and improve the activities and services of either a subject, a programme, the whole institution or a theme.

• An accreditation process, which builds on the same methodological elements as the other types of evaluation, but differs from them in that judgement is provided according to predefined standards in order to decide whether a given subject, programme, institution or theme meets the necessary level.

• Benchmarking, which is a comparison of results between subjects, programmes, institutions or themes, leading to an exchange of experiences of best practice.

• Criteria, which are seen as checkpoints and benchmarks for assessing the quality of the input and the process.

• Standards, which are seen as the expected outcomes of the educational training, for example standards defined by professional organisations or by legislation; they concern the competencies expected of the graduates.

[56] In this survey, the term 'evaluation' also covers the terms 'assessment' and 'review'.
[57] The term 'subject' refers, for example, to 'chemistry' within the study programme of medicine.