Course Experience Questionnaire 2000
A report prepared for the Graduate Careers Council of Australia
John Ainley, The Australian Council for Educational Research



Acknowledgements

From ACER

The Australian Council for Educational Research (ACER) undertook the analyses of the Course Experience Questionnaire on behalf of the Graduate Careers Council of Australia (GCCA). The contributions of Dr Trevor Johnson (for conducting many of the analyses for the report) and Dr Gerald Elsworth (for conducting the analyses of reliability and structure in Chapter 6) are gratefully acknowledged. Thanks are also due to people associated with the GCCA for their assistance in the analyses and the conduct of the CEQ 2000 survey and previous surveys: Roger Bartley (Project Director, GCCA), Professor Michael Koder AM (Convenor of the Survey Reference Group), Bruce Guthrie (Research Manager, GCCA), members of the Survey Reference Group and Pham Ho (University of Melbourne). Assistance provided by staff from the Higher Education Division of the Department of Education, Training and Youth Affairs (DETYA) is also acknowledged.

From GCCA

The Graduate Careers Council of Australia conducted the Course Experience Questionnaire survey. It wishes to acknowledge the role of participating graduates and universities, and in particular the survey managers and Careers Service staff who collected the data on which this report is based. The GCCA also wishes to thank DETYA for funding the Course Experience Questionnaire and ACER for undertaking the analysis.

© 2001 Graduate Careers Council of Australia Ltd.

All rights reserved. No part of this publication may be copied or reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopy, recording or otherwise without the prior written permission of the publishers.

Published by the Graduate Careers Council of Australia Ltd.
PO Box 28, Parkville, Victoria 3052
GCCA Switchboard: 03 8344 9333
Gradlink Helpdesk: 03 9349 4300
Facsimile: 03 9347 7298
Email: [email protected]
www.gradlink.edu.au



Contents

Executive Summary
Introduction
    Background
    Data
    Issues in the Interpretation of CEQ Data
        Comprehensiveness
        Within-Course Variability
        Graduate Respondents
        Response Scale
        Response Rates
    Summary
Patterns and Trends
    Responses to CEQ Items
    Groups of Items or Scales
    Trends
    Summary
Graduate and Course Characteristics
    Characteristics of Graduates
    Fields of Study
    Universities
        The Good Teaching Scale
        The Overall Satisfaction Item
        Mean Percentage Agreement Scores for Universities
    Summary
Using the CEQ as Stimulus to Course Improvement
    The CEQ and Other Aspects of Learning
        Approaches to Learning
        Course Orientation
        Innovative Practice
    Institutional Patterns for Initial Primary Teacher Education
        Good Teaching Scores from CEQ 2000
        Good Teaching Scores Over Three Years
    Responding to CEQ Data
Properties of the CEQ
    The Scales
    Reliabilities of the Scales
    Structure of the CEQ
        Exploratory Factor Analysis
        Confirmatory Factor Analysis
        Possible Modifications to the CEQ Scales
        An Alternative Approach
    Summary
References
Appendix A: The Course Experience Questionnaire
Appendix B: The AVCC Code of Practice
Appendix C: Response Rates of Institutions Participating in GDS 2000
Appendix D: Comparison of Characteristics of CEQ 2000 Respondents and the Population of Bachelor Degree Graduates from 1999

Tables

Table 2.1 CEQ 2000 Item Response Percentages: Bachelor Degree Graduates
Table 2.2 Scale Scores by Level of Course: CEQ 2000
Table 3.1a Percentage Agreement with the Good Teaching Scale and the Overall Satisfaction Item by Selected Graduate and Course Characteristics: Bachelor Degree Graduates, 2000
Table 3.1b Percentage Agreement with the Good Teaching Scale and the Overall Satisfaction Item by Selected Graduate and Course Characteristics: Bachelor Graduates, 2000
Table 3.2 Percentage of Variance in the Good Teaching Scale Explained by Selected Graduate and Course Characteristics: Ten Specific Fields of Study, Bachelor Graduates, CEQ 2000
Table 3.3 Percentage of Variance in the Overall Satisfaction Item Explained by Selected Graduate and Course Characteristics: Ten Minor Fields of Study, Bachelor Graduates, CEQ 2000
Table 5.1 Reliability of the CEQ Scales: Bachelor Degree Graduates
Table 5.2 Factor Loadings Derived from Analysis of CEQ Items: Bachelor Degree Graduates
Table 5.3 Confirmatory Factor Analyses: Bachelor Degree Graduates, CEQ 2000

Figures

Figure 1 Trends in CEQ Indicators 1993-2000
Figure 2.1 Institutional Response Rates to the Graduate Destination Survey
Figure 2.2 Bachelor Degree Respondent Numbers for the CEQ 1993-2000
Figure 2.1 Percentage Agreement with CEQ 2000 Items
Figure 2.3a Percentage Agreement with Items in the Good Teaching Scale: Bachelor Graduates, 1993-2000
Figure 2.3b Percentage Agreement with Items in the Clear Goals and Standards Scale: Bachelor Graduates, 1993-2000
Figure 2.3c Percentage Agreement with Items in the Appropriate Assessment Scale: Bachelor Graduates, 1993-2000
Figure 2.3d Percentage Agreement with Items in the Appropriate Workload Scale: Bachelor Graduates, 1993-2000
Figure 2.3e Percentage Agreement with Items in the Generic Skills Scale: Bachelor Graduates, 1993-2000
Figure 2.3f Percentage Agreement with the Overall Satisfaction Item: Bachelor Graduates, 1993-2000
Figure 3.1 Percentage Agreement with the Good Teaching Scale by Selected Fields of Study: Bachelor Graduates, CEQ 2000
Figure 3.2 Percentage Agreement with the Overall Satisfaction Item by Selected Fields of Study: Bachelor Graduates, CEQ 2000
Figure 3.3 Mean Percentage Agreement with the Good Teaching Scale by University: Bachelor Psychology Graduates, CEQ 2000
Figure 3.4 Mean Percentage Agreement with the Overall Satisfaction Item by University: Bachelor Psychology Graduates, CEQ 2000
Figure 3.5 Mean Percentage Agreement with the Good Teaching Scale by University: Bachelor Graduates in Initial Primary Teacher Education, CEQ 2000
Figure 3.6 Mean Percentage Agreement with the Good Teaching Scale by University: Bachelor History Graduates, CEQ 2000
Figure 4.1 Percentage Agreement with the Good Teaching Scale for Initial Primary Teacher Education Graduates: CEQ 2000
Figure 4.2 Mean Percentage Agreements for the Good Teaching Scale: Initial Primary Teacher Education, CEQ 1998, CEQ 1999, CEQ 2000


Executive Summary

This report describes the views of graduates from Australian universities regarding the courses that they completed. It focuses specifically on graduates who completed their courses of study in 1999 but also references previous cohorts of graduates. The data on which the report is based are taken from the Course Experience Questionnaire (CEQ) that was administered during 2000 as part of the 2000 Graduate Destination Survey.

Each year since 1993, and approximately four months after they have completed a course of study, all graduates of Australian universities are invited to respond to the 25-item Course Experience Questionnaire (CEQ). In that questionnaire graduates are able to express their degree of agreement or disagreement on a five-point scale with 24 statements about five facets of their courses:

· the quality of teaching;

· the clarity of goals and standards;

· the nature of the assessment;

· the level of the workload; and

· the enhancement of their generic skills.

A final item asks graduates to indicate their overall level of satisfaction with the course on the same five-point scale.

This report focuses on the responses by bachelor graduates - those students who have recently completed pass bachelor degrees, honours bachelor degrees or three-year undergraduate diplomas - to the six items that form the Good Teaching Scale as well as the Overall Satisfaction item. In the 2000 CEQ there were 50,455 bachelor degree respondents. These respondents had similar characteristics (in terms of gender, age, field of study, country of residence and nature of qualifications) to the population of graduates who completed a course in 1999.

Nationally, 68 per cent of these bachelor degree graduates expressed agreement (combining the percentages in the two top categories of a five-point scale) with the statement Overall, I was satisfied with the quality of this course. There has been a small but steady increase in this level of agreement since 1993, when the percentage agreement was 62 per cent, as shown in Figure 1. A measure called 'broad satisfaction' is sometimes used to refer to the overall percentage in the top three response categories. In 2000, 90 per cent of bachelor degree graduates were 'broadly satisfied' with the overall quality of their courses, up from the 86 per cent recorded in 1993.

The Good Teaching Scale consists of six items on which the average agreement was 43 per cent. Agreement ranged from 50 per cent for the item the teaching staff worked hard to make their subjects interesting to 34 per cent for the item the staff put a lot of time into commenting on my work. As for the Overall Satisfaction item, there has been a trend for levels of agreement on the Good Teaching Scale to increase over time, from 37 to 43 per cent. The level of 'broad satisfaction' with good teaching in 2000 was 77 per cent, up from the 72 per cent recorded in 1995.
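The summary measures used throughout the report can be computed directly from an item's response distribution: 'percentage agreement' sums the top two of the five response categories, 'broad satisfaction' sums the top three, and a scale score averages item agreement across the items making up the scale. A minimal sketch of that arithmetic (the function names and the example distribution are illustrative, not taken from the report):

```python
def pct_agreement(dist):
    """Percentage in the top two of five response categories
    (agree + strongly agree). `dist` lists percentages for
    categories 1 (strongly disagree) through 5 (strongly agree)."""
    return dist[3] + dist[4]

def pct_broad_satisfaction(dist):
    """Percentage in the top three categories: the 'broad satisfaction' measure."""
    return dist[2] + dist[3] + dist[4]

def scale_agreement(item_dists):
    """Mean percentage agreement across the items that make up a scale."""
    return sum(pct_agreement(d) for d in item_dists) / len(item_dists)

# Hypothetical distribution for a single item, chosen so the two
# measures reproduce the national Overall Satisfaction figures
# quoted above (68 per cent agreement, 90 per cent broad satisfaction).
example = [3, 7, 22, 46, 22]
print(pct_agreement(example))           # 68
print(pct_broad_satisfaction(example))  # 90
```

The same top-two convention is what makes the agreement and broad-satisfaction series in Figure 1 directly comparable across scales.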

Figure 1 Trends in CEQ Indicators 1993-2000

[Figure 1 comprises six panels (Good Teaching; Clear Goals & Standards; Appropriate Workload; Appropriate Assessment; Generic Skills; Overall Satisfaction), each plotting percentage (0-100) against survey year (1993-2000) for two series: % Broad Agreement and % Agreement.]


The trends for the Clear Goals and Standards and Generic Skills scales are similar to those for the Good Teaching scale. Over the period from 1993 to 2000 the mean percentage agreement on the Clear Goals and Standards scale has risen from 44 to 51 per cent and the level of 'broad satisfaction' has increased from 77 to 82 per cent. Similarly, for the Generic Skills scale there has been a smaller increase in mean percentage agreement, from 59 to 63 per cent, and an increase in broad satisfaction from 84 to 87 per cent.

On the Appropriate Assessment scale the trend has been in the other direction. From 1993 to 2000 the mean percentage agreement declined from 63 to 57 per cent and 'broad satisfaction' declined to a smaller extent, from 87 to 84 per cent. It could be inferred from this trend that over the period under consideration there has been a shift in assessment in higher education towards factual content and knowledge rather than thinking skills.

There has been very little change, between 1993 and 2000, in the mean percentage agreement on the Appropriate Workload scale and just a small upward shift in the 'broad satisfaction' measure for that scale.

Relatively little of the variation in responses to the Overall Satisfaction item or the Good Teaching Scale can be attributed to the characteristics of graduates: their sex, ethnic background, mode of attendance or employment status. Some evidence exists of an influence of age, with graduates older than 40 years expressing greater satisfaction than younger graduates. There is substantial variation in the responses of graduates who were enrolled in the different fields of study such as accounting, biology, nursing and so on. Further, within fields of study there are sometimes substantial differences among the responses of graduates from different universities. These differences are larger for the Good Teaching Scale than for the Overall Satisfaction item. For both measures, however, the between-university differences are markedly larger than any differences attributable to the characteristics of graduates.


Introduction

This report presents results concerning the views of the 1999 graduates from Australian universities about their experience of the courses from which they graduated. Information about the views of those graduates is derived from the Course Experience Questionnaire (CEQ) that forms part of the Graduate Destination Survey (GDS) conducted in the year 2000. The GDS is mailed some four months after graduates have completed their course. The CEQ has been included with the GDS since 1993 (Ainley & Long, 1994). It asks graduates to rate their agreement with each of 25 items. Items cover the quality of teaching, the clarity of the goals and standards, the level of the workload, the nature of the assessment and the extent to which generic skills are embedded in the course. There is also a single item that measures overall satisfaction with the course. The Questionnaire is included in this report as Appendix A.

Background

The purpose of the CEQ is to assemble data about graduates' perceptions of the quality of the courses that they completed in the previous year. Matters relating to the quality of teaching, academic standards, assessment methods and the volume of work expected from students are of vital interest to administrators, teaching staff and students. During course appraisals, for example, these matters are reviewed, and graduates' perceptions of their course experiences can represent a valuable supplement to the range of information considered.

Students' views have long been recognised as relevant to the evaluation of courses. Ramsden and Entwistle (1981) examined the effects of academic departments on students' approaches to studying. Marsh and Overall (1981) reported on the relative influences of course level, course type and instructor on students' evaluations of college teaching in the United States. Marsh (1987) reviewed research in the area, discussed methodological issues and proposed directions for future research. In a longitudinal study of the stability of student ratings of the same teachers over a 13-year period, Marsh and Hocevar (1991) found little change over time with respect to nine content-specific dimensions, an overall course rating, and an overall teacher rating. More recently, Marsh and Roche (1994) reported on the use of students' evaluations of university teaching to improve teaching effectiveness. Many universities require that student views are surveyed and, in some universities, there are now formal processes by which student assessment of the quality of instruction is included in review and appraisal processes for staff.

The CEQ focuses on graduates' perceptions of their courses rather than on students' evaluations of particular instructors. As such, it is a step towards providing universities with additional system-wide information that can be used to make informed judgements about aspects of the courses that they provide. Interest in the development of an instrument like the CEQ was stimulated by observations, in several discipline reviews, about the absence of systematic information about the quality of teaching in universities. Increased levels of participation in universities focused attention on the effective use of resources. In 1987 a government discussion paper and policy statement argued the need for greater accountability in higher education. Use of the CEQ followed from the recommendations of the Performance Indicators Research Group (Linke, 1991) that was commissioned by the Commonwealth Government to

examine indicators of relative performance in higher education at both system and institutional levels.

[Figure 2.1 is a histogram showing the number of institutions (vertical axis, 0-18) in each Graduate Destination Survey response-rate band (26-35%, 36-45%, 46-55%, 56-65%, 66-75%, >75%), plotted separately for Australian residents and for all graduates.]

Figure 2.1 Institutional Response Rates to the Graduate Destination Survey

The development of sound indicators is a cumulative process. The basic form of the CEQ was used in studies of undergraduate students in the United Kingdom (Ramsden & Entwistle, 1981; Entwistle & Ramsden, 1983). Ramsden and colleagues tested a later version in Australian universities during 1989 (Ramsden, Martin & Bowden, 1989; Ramsden, 1991a; 1991b). Wilson, Lizzio and Ramsden (1996) reported on the validity and usefulness of the CEQ as a performance indicator of the perceived quality of university teaching. The Survey Reference Group (SRG) of the GCCA added a new set of items for the national survey. Generic Skills items were added in the earliest of the national surveys (conducted in 1993) in response to an interest in the broader skills (beyond the discipline-specific skills and knowledge) that are developed through university study (NBEET, 1992). That issue endures to the present day with the emergence of a Graduate Skills Assessment for use in Australian universities. In addition there has been an attempt, so far without success, to extend the Appropriate Assessment scale by changing item 16.

Data

The overall response rate to the GDS was 58.0 per cent (survey questionnaires were mailed to 156,273 graduates and 90,585 were returned). Among Australian permanent residents the response rate was slightly higher at 61.2 per cent (80,462 questionnaires were returned out of 131,533 that had been mailed). These overall response rates are approximately five percentage points lower than those obtained in the 1999 survey. As shown in Figure 2.1, institutional response rates ranged from a high of 74.1 per cent to a low of 28.2 per cent.
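The response-rate arithmetic above is simply returned questionnaires over mailed questionnaires, expressed as a percentage. A minimal sketch using the figures quoted in this paragraph (the function name is illustrative):

```python
def response_rate(returned, mailed):
    """Survey response rate as a percentage, rounded to one decimal place."""
    return round(100 * returned / mailed, 1)

# Figures quoted in the text above
print(response_rate(90585, 156273))  # overall GDS: 58.0
print(response_rate(80462, 131533))  # Australian permanent residents: 61.2
```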


It is important to note that the CEQ is not based on data from a sample of institutions or a designed sample of graduates within institutions. It is based on data from all universities and from an average of about 60 per cent of the graduates from those universities. At neither of these levels should those data be analysed as samples from an infinite population.

Some 74,611 respondents across all levels of qualification provided information about their first major and answered at least one CEQ item.¹ This report focuses on the views of a subset of the CEQ respondents: the bachelor degree graduates, those students who have recently completed pass bachelor degrees, honours bachelor degrees or three-year undergraduate diplomas. There were 50,455 bachelor degree respondents to the CEQ and 8,155 of these provided additional CEQ information about a second major. The number of bachelor degree responses to the Course Experience Questionnaire ranged from 58,386 for item 2 to 56,645 for item 21. In practice the effective number of bachelor degree responses differs for different scales on the questionnaire.

The percentage of bachelor degree respondents to the GDS who failed to respond to the CEQranged from 2.8 per cent in Veterinary Science up to 13.9 per cent in Law.

As shown in Figure 2.2, there has been a decline in the number of respondents to the CEQ since 1996. Changes in the number of respondents reflect both changes in the number of graduates and changes in response rates. There is evidence of a general but uneven decline in response rates since 1996.

[Figure 2.2 plots bachelor degree respondent numbers (vertical axis, 0-70,000) against year of survey: 42,255 (1993); 55,879 (1994); 63,042 (1995); 66,354 (1996); 61,924 (1997); 60,145 (1998); 57,698 (1999); 50,455 (2000).]

Figure 2.2 Bachelor Degree Respondent Numbers for the CEQ 1993-2000

Respondents to the CEQ 2000 survey had similar characteristics to the population of graduates who completed a course in 1999. Details are contained in Appendix D. CEQ respondents contained a slightly higher proportion of females than the full cohort of course completions (62% compared with 58%), a slightly higher proportion of graduates aged 25 years or older (37% compared with 31%) and a higher percentage of Australian residents (92% compared with 84%). Representation of fields of study among CEQ respondents was similar to that in the population of course completions from 1999 (but with very slight under-representation of science and engineering and very slight over-representation of humanities and social sciences).

¹ The CEQ enables graduates completing double majors to register their opinions of both courses.

Issues in the Interpretation of CEQ Data

There are caveats attached to the interpretation of CEQ results relating to comprehensiveness, within-course variation, respondent characteristics, response scales and response rates.

Comprehensiveness

There are other dimensions on which students could evaluate their courses. Some of these dimensions might reflect objectives developed for particular courses at individual institutions, some might be specific to particular fields of study and some might be based on other general characteristics of teaching and learning. Current modifications are intended to extend the domains that are covered. The CEQ focuses on parameters that are central to teaching and learning in most fields of study within universities. It seeks information about these common dimensions, and it provides a basis for comparisons within fields of study within institutions. Nonetheless, the CEQ scale scores are relative indicators and informed judgements must always incorporate relevant local knowledge.

Within-Course Variability

Graduates' experience of courses may vary within any course of study. Consequently, for the entire course, graduates may find it difficult to condense their experiences into the single response required for each item. In addition, if the results are averages of experience there is the real possibility that the items will fail to discriminate between courses. Results summarised in the current and previous reports suggest that this does not appear to be a problem.

Graduate Respondents

The CEQ is only mailed to students who have successfully completed a course of study at an institution of higher education. Thus, students who do not graduate are excluded. It may be argued that graduates are better placed to evaluate a course than those who have not graduated. Nevertheless, there is a possibility that the CEQ scale scores are biased towards more favourable assessments by the exclusion of students who do not complete the course. While many decisions to withdraw may be based on factors unrelated to the course, it would be surprising if there were no correlation between course experiences and the decision to discontinue. A somewhat different proposition is the claim that student evaluations are suspect because students are not in a position to correctly evaluate a course until they have either graduated or applied their knowledge in the workplace. Eley and Thomson (1993) report that ratings by past and present students correlate highly.

Page 15: Course Experience Questionnaire 2000

Introduction

5

Response Scale

When responding, graduates are required to circle the number 1, 2, 3, 4 or 5 next to each item, where '1' represents strong disagreement and '5' strong agreement (or, in some forms, to tick a box corresponding to the response). It is assumed that respondents treat the intervening values of 2, 3 and 4 as part of a five-point scale ranging from strong disagreement to strong agreement. This type of scale provides a common basis for responses to items concerned with different aspects of course experience. Analyses by Long and Hillman (2000) have shown consistent and well-spaced thresholds for these categories on all of the items (including the middle category), indicating that graduates interpret them as intended. An alternative suggested by Treloar (1994) would be to phrase the items more specifically, and to tailor responses appropriate to each item. This approach would increase the specificity of interpretation but it would reduce the possibility of comparisons across items. A response scale based on frequency of occurrence (e.g. from 'none of the time' to 'all of the time') would be another alternative (Sheridan, 1995).

Response Rates

Acknowledgement of the possible effects of partial response is appropriate. The general issue is whether those who did not respond to the survey might have answered differently from those who did. One important aspect of the survey was the differential non-response between fields of study and between institutions. A small-scale investigation of non-respondents to the 1996 CEQ found that they did not differ greatly from respondents at a macro level, such as field of study, but there were discrepancies between the two groups of graduates in terms of sex and age group (Guthrie & Johnson, 1997). Long and Hillman (1999) examined the effect of non-response on CEQ scores more recently and concluded that the effect is small.

Summary

The CEQ has been used in development, evaluation and research extending over 20 years and has been used in annual national surveys of Australian graduates since 1993. Over that time it has proved to have a stable and reliable structure and to discriminate between different learning environments. The CEQ 2000 survey provided data from nearly 75,000 graduates, of whom just over 50,000 were bachelor degree graduates (who provided information about nearly 58,400 different courses). The number of respondents to the survey has declined a little since 1996 and this appears to be partly due to declining response rates. Although the response rate is a little better than that of many comparable surveys, a high response rate is crucial to any survey and this aspect of the CEQ may need attention. The response rate to the survey in 2000 was approximately 60 per cent.

The remainder of the report is organised as a series of chapters. The next chapter examines patterns and trends in CEQ data at a national level. Chapter 3 examines the influence of graduate and course characteristics on CEQ responses. It concludes that, of the graduate characteristics examined, only age has an influence on graduates' perceptions of their course, but that there are noteworthy differences among institutions within fields of study. Chapter 4 provides a discussion of the important issue of the use of the CEQ to stimulate course improvement. Chapter 5 discusses statistical properties that reflect the reliability of the CEQ scales and the structure of the instrument.

Patterns and Trends

As in previous years, graduates responding to the 2000 CEQ were asked to record responses to each item on a five-point scale ranging from strongly disagree to strongly agree. From these responses a variety of summary statistics can be generated to indicate graduates' views of their course experiences at university. This report focuses on those respondents who completed pass bachelor degrees, honours bachelor degrees or three-year undergraduate diplomas, a group collectively referred to as bachelor graduates. In the report most attention is given to those items that are indicators of good teaching and overall satisfaction. However, the distribution of responses to all the items by bachelor graduates who completed their course in 1999 is presented in Table 2.1.

Responses to CEQ Items

Table 2.1 contains the wording of each of the items on the questionnaire, together with the percentages of bachelor degree graduates responding in each category. As an example, consider the most general item: Overall, I was satisfied with the quality of this course. The data indicate that 3 per cent of bachelor degree graduates strongly disagreed with this statement and 22 per cent strongly agreed with it. The percentages responding in the intervening categories, from the disagreement (lower value) to the agreement (higher value) end of the scale, were 7, 22 and 47 per cent respectively. Although the intervening response points were not labelled on the questionnaire, it is reasonable to interpret them as disagree, uncertain (neither disagree nor agree) and agree (Long & Hillman, 2000).

Two summary statistics for items are recorded in Table 2.1. The first is the percentage agreement. By combining the two agreement categories, it can be concluded that 69 per cent of bachelor degree graduates agreed with the expression of overall satisfaction with the quality of their course. Percentage agreement values for each of the items are illustrated in Figure 2.1. A measure called 'percentage broad agreement' is sometimes used to refer to the overall percentage in the top three categories.

The second is the mean score. Item means are calculated after recoding the responses 1, 2, 3, 4 and 5 to -100, -50, 0, 50 and 100 respectively. Where the wording of an item had a sense opposite to the meaning of the scale (items 4, 8, 12, 13, 19, 21 and 23), the scoring was reversed. Percentage agreement is more easily understood than the mean, is equally useful in monitoring change, and can be directly compared across scales. On the other hand, the mean incorporates information from all response categories.
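The two item-level statistics can be reproduced directly from raw 1-5 responses. The following Python sketch is illustrative only (it is not code from the original analysis, and the function name and interface are invented for this example):

```python
def item_summary(responses, reverse_scored=False):
    """Summarise one CEQ item from a list of 1-5 responses.

    Returns (percentage agreement, mean score). The mean is computed
    after recoding 1..5 to -100, -50, 0, 50, 100; for negatively worded
    items the scale is first reversed (5 becomes 1, and so on).
    """
    if reverse_scored:
        responses = [6 - r for r in responses]
    # Percentage agreement combines the two agreement categories (4 and 5).
    pct_agree = 100 * sum(1 for r in responses if r >= 4) / len(responses)
    # Recoding 1..5 to -100..100 is equivalent to 50 * (response - 3).
    mean_score = sum(50 * (r - 3) for r in responses) / len(responses)
    return pct_agree, mean_score
```

For example, responses of (4, 5, 4, 2) to a positively worded item give 75 per cent agreement and a mean of 37.5.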

2. Graduates completing a double major recorded their opinions about both courses of study, and 8,155 of the 50,455 bachelor degree respondents (16.2 per cent) provided two sets of course experiences. Analyses of opinions about courses treat those additional responses from the various institutions and fields of study as part of the descriptive analysis. Additional responses are not included in analyses of background data (such as gender, non-English speaking background, and state of origin) or the structure of the questionnaire.


Table 2.1 CEQ 2000 Item Response Percentages: Bachelor Degree Graduates

Responses in each category (%), from 1 = strongly disagree to 5 = strongly agree.

No.   CEQ Scale/Item                                                 1   2   3   4   5  % Agree    M   SD

Good Teaching
 3    The teaching staff of this course motivated me to do my
      best work.                                                     4  14  33  34  14     48     20   51
 7    The staff put a lot of time into commenting on my work.        9  24  33  25   9     34      0   55
15    The staff made a real effort to understand difficulties I
      might be having with my work.                                  7  18  35  29  11     40     10   53
17    The teaching staff normally gave me helpful feedback on
      how I was going.                                               6  18  32  35  10     45     13   52
18    My lecturers were extremely good at explaining things.         4  15  38  33  10     43     15   49
20    The teaching staff worked hard to make their subjects
      interesting.                                                   4  13  33  38  12     50     21   50

Clear Goals & Standards
 1    It was always easy to know the standard of work expected.      3  14  31  41  12     53     22   48
 6    I usually had a clear idea of where I was going and what
      was expected of me in this course.                             3  13  27  43  14     57     26   50
13r   It was often hard to discover what was expected of me in
      this course.                                                   4  15  32  37  12     49     19   50
24    The staff made it clear right from the start what they
      expected from students.                                        4  15  35  35  11     46     17   49

Appropriate Workload
 4r   The workload was too heavy.                                    5  16  39  31   8     39     11   49
14    I was generally given enough time to understand the things
      I had to learn.                                                3  14  33  41   9     50     19   47
21r   There was a lot of pressure on me to do well in this
      course.                                                       11  26  34  22   7     29     -6   55
23r   The sheer volume of work to be got through in this course
      meant it couldn't all be thoroughly comprehended.             10  22  32  26   9     35      0   56

Appropriate Assessment
 8r   To do well in this course all you really needed was a good
      memory.                                                        6  14  20  32  28     60     32   59
12r   The staff seemed more interested in testing what I had
      memorised than what I had understood.                          5  13  27  33  21     54     26   56
19r   Too many staff asked me questions just about facts.            2   7  37  38  17     55     31   45

Generic Skills
 2    The course developed my problem-solving skills.                2   8  23  46  22     68     39   47
 5    The course sharpened my analytic skills.                       2   7  22  46  24     70     42   46
 9    The course helped me develop my ability to work as a team
      member.                                                        8  17  26  34  15     49     16   58
10    As a result of my course, I feel confident about tackling
      unfamiliar problems.                                           3  10  31  42  15     57     28   48
11    The course improved my skills in written communication.        3   9  18  40  29     69     42   52
22    My course helped me to develop the ability to plan my own
      work.                                                          2   7  23  45  22     67     39   47

Overall Satisfaction
25    Overall, I was satisfied with the quality of this course.      3   7  22  47  22     69     38   48

Ungrouped Item
16n   The assessment methods employed in this course required an
      in-depth understanding of the course content.                  3  11  30  42  15     57     28   48

Notes
1. Graduates with pass bachelor degrees, honours bachelor degrees or three-year undergraduate diplomas.
2. Means are calculated after recoding the responses 1, 2, 3, 4 and 5 to -100, -50, 0, 50 and 100 respectively.
3. Items marked with an r are reverse-scored in analyses to allow for their negative phrasing.
4. Item 16 does not fit statistically in any of the scales.
5. The full wording of the items and the format of the questionnaire is shown in Appendix 1.


Figure 2.1 Percentage Agreement with CEQ 2000 Items

[Horizontal bar chart showing the percentage agreement with each CEQ 2000 item, grouped by scale: Good Teaching; Clear Goals & Standards; Appropriate Workload; Appropriate Assessment; Generic Skills; and Overall Satisfaction. The values plotted are those in the '% Agree' column of Table 2.1.]

Page 19: Course Experience Questionnaire 2000

Patterns and Trends

9

The standard deviations of the responses to the items, and the number of bachelor degree graduates answering each item, are also shown in Table 2.1. The standard deviation indicates the spread of the responses to an item, with a larger standard deviation corresponding to a wider range of responses.3

Groups of Items or Scales

The CEQ is not just a compilation of items. The items form groups that represent underlying dimensions of course experiences. There are two reasons for using a summary statistic to represent responses to groups of related items. The first reason is that such summaries provide parsimony in analysis and reporting. Rather than reporting on 24 different items, it becomes possible to report patterns for five dimensions. Such a reduction of the data can be of considerable assistance in the process of making inferences about trends and patterns. The second reason for using summary statistics is that they use the relationships between items to confirm the meaning of each item and to reduce the effect of any idiosyncrasies.

Two summary statistics for groups of items are used in this report: the scale mean and the mean percentage agreement. These relate to the two summary statistics for items discussed in the previous section. The scale mean is useful in some forms of analysis where continuous measures are important (such as in various correlation and multivariate analyses), and percentage agreement is useful in representing differences between groups. Scale means are the average of the item ratings for the group of items making up the scale. The scale means provide the most reliable indicator for the group of items because they make use of the full distribution of responses to each item.4

Mean percentage agreement refers to the average, across the items in a group, of the percentage of respondents agreeing or strongly agreeing with each item. It follows that this is computed for groups of respondents rather than for individual respondents. The corresponding measure for an individual would be the number of items with which they were in agreement. This measure at the individual level is less reliable than the scale mean, especially for scales containing only a few items, because the distribution of responses has not been utilised.
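Under these definitions, the two scale-level summaries can be sketched as below. This is an illustrative Python sketch, not code from the original analysis; it assumes that responses have already been reverse-scored where required, and the function names are invented for this example:

```python
def scale_mean(item_responses):
    """Scale mean for one respondent: the average of the recoded item
    scores, where each 1-5 response is recoded to -100, -50, 0, 50, 100."""
    return sum(50 * (r - 3) for r in item_responses) / len(item_responses)

def mean_percentage_agreement(responses_by_item):
    """Mean percentage agreement for a group of respondents: for each
    item, the percentage answering 4 or 5, averaged across the items.
    `responses_by_item` is a list of per-item response lists."""
    per_item = [
        100 * sum(1 for r in resp if r >= 4) / len(resp)
        for resp in responses_by_item
    ]
    return sum(per_item) / len(per_item)
```

Note the difference in level: the scale mean is computed per respondent and can then be averaged over a group, whereas mean percentage agreement is only defined at the group level.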

Table 2.2 shows the mean, percentage agreement, and percentage broad agreement for each of the five CEQ scales and the Overall Satisfaction item, by the level of the graduate's course. The values for postgraduate research students are usually higher than for bachelor students on all measures, and sometimes markedly higher. For instance, on the Good Teaching Scale, 59 per cent of graduates of research degree courses agreed with the Good Teaching Scale items, compared with 43 per cent of bachelor degree graduates. Differences are even larger on the Appropriate Assessment Scale but minimal on Clear Goals and Standards.

3. By way of illustration, responses to item 8 were spread more widely than responses to other items. The standard deviation was 59, with the percentages in each category from strongly disagree to strongly agree being approximately 6, 14, 20, 32 and 28 per cent respectively. In contrast, responses to item 19 had a smaller standard deviation of 45. The responses to this item were clustered together at one end of the scale, being 2, 7, 37, 38 and 17 per cent in each of the categories from strongly disagree to strongly agree respectively.

4. It is possible to introduce greater precision by weighting each item in the scale according to the extent to which it contributes to the underlying dimension. This has not been done because the weights would be different for each set of data used.


Table 2.2 Scale Scores by Level of Course: CEQ 2000

Qualification / Scale                  Mean    SD   % Agree   % Broad Agree       N

Research Degrees
  Good Teaching Scale                    31    48      59          83          2053
  Clear Goals & Standards Scale          23    46      54          79          2092
  Appropriate Workload Scale             16    36      46          78          1933
  Appropriate Assessment Scale           62    42      78          93          2033
  Generic Skills Scale                   47    36      73          88          2099
  Overall Satisfaction Item              49    52      75          90          2063

Coursework Postgraduate
  Good Teaching Scale                    20    41      49          81         21523
  Clear Goals & Standards Scale          23    40      53          82         21549
  Appropriate Workload Scale             10    37      41          76         20631
  Appropriate Assessment Scale           47    41      71          91         21522
  Generic Skills Scale                   27    36      58          84         21563
  Overall Satisfaction Item              39    50      70          89         21453

Bachelor Degrees
  Good Teaching Scale                    13    41      43          77         58365
  Clear Goals & Standards Scale          21    38      51          82         58420
  Appropriate Workload Scale              6    38      38          73         56822
  Appropriate Assessment Scale           30    43      57          84         58403
  Generic Skills Scale                   34    34      63          87         58438
  Overall Satisfaction Item              38    48      68          90         58112

Other Qualifications
  Good Teaching Scale                    17    41      46          80           907
  Clear Goals & Standards Scale          19    40      48          81           911
  Appropriate Workload Scale             13    36      43          78           909
  Appropriate Assessment Scale           36    40      63          89           906
  Generic Skills Scale                   26    34      56          84           911
  Overall Satisfaction Item              43    48      71          91           909

Total
  Good Teaching Scale                    15    41      45          79         82848
  Clear Goals & Standards Scale          21    39      52          82         82972
  Appropriate Workload Scale              7    38      39          74         80295
  Appropriate Assessment Scale           35    43      61          86         82864
  Generic Skills Scale                   33    35      62          86         83011
  Overall Satisfaction Item              39    49      69          90         82537

Notes
1. Based on graduates who responded to at least one CEQ item and whose level of qualification was known.
2. Information on both majors is included where available.
3. Percentage agreement is based on the Agree and Strongly Agree categories.
4. Percentage broad agreement is based on the top three categories.
5. Scale means are calculated after recoding the responses 1, 2, 3, 4 and 5 to -100, -50, 0, 50 and 100 respectively.


The CEQ was developed primarily for use with students undertaking studies for an initial qualification based on coursework. A number of the items assume that the respondents have completed a qualification by meeting the requirements of a 'course'. The notion of a 'course' varies between disciplines. Within some fields of study (e.g. an Arts degree) students design their own course by choosing a sequence of units of study. Other areas (e.g. professional courses such as Engineering and Medicine) tend to be more prescriptive, and this may influence students' opinions.

Trends

The CEQ provides the opportunity to examine the way in which the views of bachelor degree graduates have changed over an eight-year period. Figures 2.3a to 2.3f show the changes in the percentage agreement with selected CEQ items for bachelor degree graduates over the period 1993-2000.

Although the wording of the items has remained relatively unchanged, there was a discontinuity in the way respondents answered. Until 1997 graduates were provided with the opportunity to comment on only one course. From 1997 onwards the questionnaire provided graduates with the opportunity to comment on two courses. The values in Figures 2.3a to 2.3f from 1997 onwards incorporate responses for a second course. Our analysis suggests that this has had the effect of changing the level of agreement slightly.

There have also been slight changes in the coverage of the survey over the period 1993-2000, and in response rates. Several universities did not participate in the 1993 survey, but for the remainder of the period higher education institutions covering the overwhelming majority of graduates have participated in the survey. There are substantial differences in responses to the CEQ items across the different fields of study. Hence changes in the enrolment pattern by field of study may potentially affect aggregate responses by higher education graduates.

Figure 2.3a shows the change in the percentage of graduates of bachelor courses who agreed or strongly agreed with each of the items in the Good Teaching Scale. The story is one of stability with a slight increase. The percentage agreement for most items rose slightly in 1996 compared with previous years and increased further in 1997 and 1998. For most items, after a small decline in 1999, agreement levels increased in 2000 but not to the 1998 levels.

The story is similar for the percentage agreement with items that are part of the Clear Goals and Standards Scale. As shown in Figure 2.3b, the values for the 2000 survey are very similar to, and marginally higher overall than, the previous highest scores, recorded in 1999.

As seen in Figure 2.3c, the percentage agreement with items that form part of the Appropriate Assessment Scale shows a systematic decline over the course of the eight years. The effect of the change in the questionnaire structure in 1997 appears to have been to increase the level of percentage agreement. Even so, the values in 1997, 1998, 1999 and 2000 are below those in the earlier years of the survey. However, the decline from 1999 to 2000 was not as great as that from 1998 to 1999 (and on item 8 the trend in fact reversed, though not back to 1998 levels). These results would be consistent with a long-term shift in assessment in the higher education sector towards factual and knowledge content rather than thinking skills.


Figure 2.3a Percentage Agreement with Items in the Good Teaching Scale: Bachelor Graduates, 1993-2000

Mean agreement (%) by year of survey:

           1993   1994   1995   1996   1997   1998   1999   2000
Item 3     40.9   40.6   41.4   42.7   46.3   48.2   47.7   48.7
Item 7     27.6   28.0   28.5   29.2   32.3   33.9   32.6   33.5
Item 15    38.2   37.5   37.8   37.9   40.1   41.0   39.2   40.1
Item 17    40.0   39.5   37.5   38.4   40.2   45.7   43.8   44.9
Item 18    34.1   34.0   34.1   35.6   38.9   42.2   42.2   43.0
Item 20    41.6   41.8   42.5   43.9   47.1   47.7   49.0   49.7

Figure 2.3b Percentage Agreement with Items in the Clear Goals and Standards Scale: Bachelor Graduates, 1993-2000

Mean agreement (%) by year of survey:

           1993   1994   1995   1996   1997   1998   1999   2000
Item 1     41.9   42.2   42.6   43.2   47.9   50.0   51.6   52.1
Item 6     51.8   51.3   51.8   53.0   55.3   56.5   56.3   57.8
Item 13r   47.5   47.1   47.1   47.2   48.7   48.6   48.2   48.9
Item 24    36.3   36.7   37.2   38.1   41.4   43.9   44.5   45.4


Figure 2.3c Percentage Agreement with Items in the Appropriate Assessment Scale: Bachelor Graduates, 1993-2000

Mean agreement (%) by year of survey:

           1993   1994   1995   1996   1997   1998   1999   2000
Item 8r    67.2   66.2   66.9   65.7   65.0   63.7   59.8   60.4
Item 12r   60.3   59.5   60.4   59.6   60.2   59.4   55.2   54.7
Item 19r   61.1   60.3   58.6   56.4   58.3   57.4   55.1   54.6

Figure 2.3d Percentage Agreement with Items in the Appropriate Workload Scale: Bachelor Graduates, 1993-2000

Mean agreement (%) by year of survey:

           1993   1994   1995   1996   1997   1998   1999   2000
Item 4r    36.9   36.5   36.3   36.0   36.8   36.9   38.0   39.7
Item 14    45.9   45.1   45.7   46.5   48.8   49.9   48.8   49.7
Item 21r   28.0   28.0   27.3   27.3   27.7   27.1   28.3   29.0
Item 23r   35.4   35.0   34.6   34.7   35.6   35.6   34.0   35.2


Figure 2.3e Percentage Agreement with Items in the Generic Skills Scale: Bachelor Graduates, 1993-2000

Mean agreement (%) by year of survey:

           1993   1994   1995   1996   1997   1998   1999   2000
Item 2     64.3   63.5   64.2   64.4   65.5   65.7   67.4   67.8
Item 5     66.9   66.4   66.4   67.0   67.9   68.5   69.9   69.7
Item 9     40.4   41.8   42.0   43.1   43.5   44.4   47.6   49.2
Item 10    53.1   52.8   53.5   54.1   55.1   55.7   56.1   56.8
Item 11    63.7   64.3   65.2   65.8   66.9   67.9   69.0   69.8
Item 22    64.5   63.9   64.5   64.5   65.7   65.5   67.0   67.6

Figure 2.3f Percentage Agreement with the Overall Satisfaction Item: Bachelor Graduates, 1993-2000

Mean agreement (%) by year of survey:

           1993   1994   1995   1996   1997   1998   1999   2000
Item 25    62.4   62.1   61.8   63.2   65.2   66.3   67.3   68.3


Figure 2.3d shows that there has been relatively little change in the level of agreement with items that form part of the Appropriate Workload Scale, especially over the past four years.

Figure 2.3e shows the percentage of graduates of bachelor courses who agreed or strongly agreed with each of the items in the Generic Skills Scale. The picture is one of gradual increase, both in the period 1993-1996 and in the period 1997-2000. Collectively, agreement with the six items increased by an average of five percentage points from 1993 to 2000. The biggest increases were for item 9 (working in a team) and item 11 (written communication).

Percentage agreement with the Overall Satisfaction item by graduates of bachelor courses increased over the period 1993 to 2000 and was higher for the 2000 survey than for any previous survey. There was an increase of almost six percentage points across the eight-year period, and even the results from 1997 to 2000 reflect that systematic increase. University graduates are more satisfied with their courses now than in the early 1990s.

Summary

The pattern of graduate responses to CEQ 2000 items was similar to that identified in previous surveys. These responses can be summarised either as means (on a defined scale) or as percentage agreement measures, for both individual items and groups of items. Differences in summary statistics for each scale between course levels are in the expected direction, with higher scores for graduates of higher degrees than for bachelor degrees.

There are several trends evident in the CEQ data since 1993. On several scales there has been a small improvement in graduate satisfaction over the period of the survey. This improvement is evident in the Overall Satisfaction item and in the scales reflecting good teaching (albeit with some fluctuations in the last three surveys), clear goals and standards, and generic skills. On the other hand, there has been a decline in scores on the Appropriate Assessment Scale that suggests a shift in assessment in the higher education sector towards factual and knowledge content rather than thinking skills. There has been no change in scores on the Appropriate Workload Scale over the period of the surveys.

Graduate and Course Characteristics

The views that graduates register through the Course Experience Questionnaire (CEQ) differ among categories of graduates, types of courses, and universities. This chapter explores aspects of such variation for the Good Teaching Scale and the Overall Satisfaction item.

As has been reported in previous years, the results presented in this chapter show that there is relatively little variation in responses to the Good Teaching Scale and the Overall Satisfaction item that can be attributed to the characteristics of graduates: their sex, age, ethnic background, mode of attendance or employment status. There is, however, substantial variation in the responses of graduates who were enrolled in different fields of study: accounting, biology, nursing and so on. Further, within fields of study there are sometimes differences among the responses of graduates from different universities. These differences are larger for the Good Teaching Scale than for the Overall Satisfaction item. For both measures, however, the differences among universities are larger than the differences attributable to the characteristics of graduates.

Characteristics of graduates

Differences in CEQ scores among various categories of graduates are important for at least two reasons. First, any differences may have equity implications. To the extent that the differences in responses reflect real differences, rather than differences of perception, attitudes, expectations or standards, different categories of graduates may be experiencing different qualities of higher education. Second, the greater the differences among categories of graduates, the greater the potential for differences among courses to be confounded by differences in their enrolment profiles.

Table 3.1 shows the mean percentage agreement with the Good Teaching Scale and the Overall Satisfaction item for a number of characteristics of the graduates. The size of the differences among the various categories is usually rather small. A difference of one percentage point or less has been considered to represent no substantive difference. A difference of more than one but less than five percentage points has been described as a very small difference, a difference of more than five but less than ten percentage points has been described as a small difference, and a difference of ten percentage points or more has been described as a moderate difference.
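The classification just described can be expressed as a simple rule. The following sketch is illustrative only (the function name is invented, and a difference of exactly five percentage points, which the text leaves unassigned, is grouped here with 'small'):

```python
def describe_difference(points):
    """Classify an absolute difference in percentage agreement using the
    report's descriptive categories (illustrative sketch only)."""
    d = abs(points)
    if d <= 1:
        return "no substantive difference"
    if d < 5:
        return "very small"
    if d < 10:  # exactly 5 is unassigned in the report; grouped with "small" here
        return "small"
    return "moderate"
```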

One of the few large differences associated with graduate background concerns age. Older graduates (over age 40) record higher levels of agreement with items on the Good Teaching Scale (by 15 percentage points) and with the Overall Satisfaction item (by nine percentage points) than younger graduates. However, only about 10 per cent of graduates are over the age of 40. The question remains whether older graduates experience different teaching, or whether they are better able to appreciate the approaches adopted by their teachers.

Graduates of a non-English speaking background recorded slightly lower scores than other graduates, but the difference was very small (about three percentage points) on each of the CEQ measures.


Table 3.1a Percentage Agreement with the Good Teaching Scale and the Overall Satisfaction Item by Selected Graduate and Course Characteristics: Bachelor Degree Graduates, 2000

                                          % of          Good Teaching    Overall
Characteristic                            Respondents   Scale            Satisfaction Item

All persons (N = 50,455)                   100.0            42.5             68.1
Sex (N = 50,348)
  Male                                      37.7            42.0             68.7
  Female                                    62.3            42.9             67.8
Age (N = 50,273)
  24 & under                                61.2            40.8             68.2
  25-29                                     15.3            41.8             66.1
  30-39                                     13.1            43.7             67.1
  40-54                                      9.2            50.9             71.0
  55 & over                                  1.1            63.7             81.5
Non-English-speaking background (N = 50,218)
  NESB                                      21.3            40.2             65.6
  ESB                                       78.7            43.1             68.8
Disability (N = 48,612)
  Disability                                 4.9            45.0             68.9
  No disability                             95.1            42.4             67.9
Level of previous qualification (N = 45,450)
  Post-graduate                              3.2            44.9             67.2
  Bachelor                                  17.6            46.5             68.9
  Sub-bachelor                              10.5            42.4             66.3
  High school                               56.5            41.2             68.6
  Other                                      8.3            44.8             68.5
  No previous qualification                  4.0            42.4             64.7
Mobility (N = 23,496)
  No movement                               53.2            41.6             67.5
  Changed city                              31.0            42.9             67.8
  Changed state                             15.8            45.2             69.3
Fee payment (N = 50,168)
  HECS                                      77.7            42.9             68.6
  Australian fee-paying                      7.9            42.5             67.9
  Overseas fee-paying                       13.0            40.2             65.6
  Other                                      1.3            42.6             65.8
Attendance (N = 48,134)
  Full-time internal                        73.0            42.5             67.8
  Part-time internal                        13.4            44.0             68.6
  External                                  13.7            41.3             68.9


Table 3.1b Percentage Agreement with the Good Teaching Scale and the Overall Satisfaction Item by Selected Graduate and Course Characteristics: Bachelor Graduates, 2000

                                          % of          Good Teaching    Overall
Characteristic                            Respondents   Scale            Satisfaction Item

Employment in final year (n = 50,455)
  None                                      23.0            44.4             68.9
  Full-time                                 19.2            39.1             66.8
  Part-time                                 57.8            42.9             68.2
Field of study (n = 50,455)
  Agriculture                                1.6            45.3             73.9
  Architecture, building                     2.3            37.3             54.4
  Arts, Hum. & Soc. Sci.                    24.8            52.2             72.1
    Comm. & journalism                         -            47.4             66.4
    Psychology                                 -            39.5             67.7
  Business, admin., econ.                   23.8            36.6             68.8
    Accounting                                 -            32.2             67.7
    Business admin.                            -            40.3             70.7
    Marketing & distn                          -            40.2             74.2
  Education                                  9.2            43.4             62.6
    Teacher edn - primary                      -            40.0             60.4
  Engineering, surveying                     5.5            31.5             65.2
  Health                                    13.6            37.7             61.0
    Nursing - initial                          -            35.4             53.0
  Law, legal studies                         3.6            38.8             69.7
    Law                                        -            33.2             69.5
  Science                                   15.3            45.0             71.8
    Biology                                    -            51.0             76.7
    Computer science                           -            35.7             67.6
  Veterinary Science                         0.3            48.8             84.8
Activity in 2000 (n = 50,455)
  Work, full-time                           53.8            39.6             67.5
    seeking other job                        7.7            41.3             65.2
  Work, part-time                           13.9            48.7             71.9
    seeking full-time                        8.0            43.7             65.2
  Unemployed                                 8.7            44.4             66.9
  Not in labour force, studying              1.5            45.3             68.4
  Not in labour force                        6.3            50.3             73.9

Notes
1. Based on responses to the first major only, to avoid double counting of background variables.


There are no substantial differences between male and female graduates in their views of Good Teaching and Overall Satisfaction. Females are a little more likely than males to report having experienced aspects of good teaching, while males are a little more likely than females to report being satisfied overall with their course.

Graduates who reported having a disability of some sort (motor, sensory or other) had a marginally higher level of percentage agreement with the Good Teaching Scale than other graduates, but there was no real difference in terms of the level of Overall Satisfaction.

Graduates with a previous bachelor degree had higher scores on the Good Teaching Scale than those with only Year 12 entry (the difference was five percentage points), but there was no difference in terms of overall satisfaction.

Those who had moved between States during their course had very slightly higher scores on the Good Teaching Scale and the Overall Satisfaction item, but the difference was very small: less than three percentage points.

There was no real difference in either Good Teaching or Overall Satisfaction scores between graduates who paid for their course in different ways (HECS or fee-paying), although there was a very small difference between overseas fee-paying students and other students (less than three percentage points).

Graduates who had attended mainly as part-time on-campus students were slightly more likely to report having experienced good teaching, and slightly more likely to be satisfied overall, than full-time on-campus students. Graduates who had enrolled externally had the lowest scores on the Good Teaching Scale but the highest on the Overall Satisfaction item, although the differences across attendance modes were very small.

Graduates who had been employed full-time during the final year of their course had marginally lower results on the Good Teaching Scale and the Overall Satisfaction item. The difference between full-time work and no work was five percentage points on the Good Teaching Scale and two points on the Overall Satisfaction measure.

Higher levels of labour market participation after completion of a course did not appear to be related to more positive assessments of the course. Instead, graduates who were either not in the labour force (and not studying) or employed part-time but seeking full-time work had somewhat higher scores (by between five and ten percentage points on the Good Teaching Scale) than did other graduates.

Fields of Study

Field of study is related to CEQ scores for several possible reasons. First, disciplines may have their own cultures and approaches to teaching. Second, subject matter may lend itself to different forms of exposition. Third, different types of students may be attracted to different fields of study. In addition, there may be differences in the demands placed on students.

Table 3.1 shows that there are sometimes quite large differences in the mean percentage agreement with the Good Teaching Scale and the Overall Satisfaction item for graduates of courses in various fields of study. The mean percentage of graduates agreeing with items on the Good Teaching Scale differs substantially among the broad fields of study. For instance, there is a 15-percentage point difference between the value for Business courses (35.4%) and for Arts courses (50.5%). Interestingly, however, the corresponding values for Overall Satisfaction (68.2% and 70.1%) show a smaller difference.


Figure 3.1 Percentage Agreement with the Good Teaching Scale by Selected Fields of Study: Bachelor Graduates, CEQ 2000

Although there are sometimes large differences between broad fields of study, there are also often large differences between minor fields of study within those broad fields of study. For instance, within Arts, the mean agreement for Good Teaching for graduates of courses in Communication and Journalism is nearly seven percentage points higher than for graduates of Psychology courses, and within Science, the value for Good Teaching is nearly 11 percentage points higher for graduates of Biology courses than for graduates of Computer Science courses. Again, however, the differences in Overall Satisfaction are usually somewhat less marked.

Figures 3.1 and 3.2 present the values for the 10 minor fields of study from Table 2.1. These fields of study were selected because they contained substantial numbers of graduates, were taught in many universities, and covered a diversity of subject matter. The graphical presentation in Figure 3.1 highlights the extent of the variation in mean agreement with the Good Teaching Scale across minor fields of study. The difference between Biology and Accounting, for instance, is over 17 percentage points. Similarly, Figure 3.2 shows that there is a difference of nearly 20 percentage points in Overall Satisfaction between graduates of courses in Biology and Initial Nursing.

Comparisons of different courses within a particular university are unlikely to compare like with like: there are substantial differences in graduates' experiences of courses that appear to be linked to the subject matter of the course itself. Comparisons are more likely to be fair if they are made within similar courses between universities. The next section explores such differences for each of the 10 minor fields of study presented in Figures 3.1 and 3.2.

[Figure 3.1 chart: mean percentage agreement (Good Teaching Scale) by field of study; values range from 32 to 51 per cent across the ten fields.]


Graduate and Course Characteristics


Figure 3.2 Percentage Agreement with the Overall Satisfaction Item by Selected Fields of Study: Bachelor Graduates, CEQ 2000

Universities

The purpose of the CEQ is to capture graduates' experiences of their courses. For that reason it is important that variation in scores be explained by aspects of the course (e.g. field of study or institution) rather than by personal characteristics of the graduate. Certainly some of the variation in responses to the Good Teaching Scale and the Overall Satisfaction item could be explained by the background characteristics (sex, age etc.) of the respondents. The results in Table 3.1 indicated that the effects of most background characteristics (with the exception of age) are small compared with those associated with field of study. For 10 minor fields of study, Tables 3.2 and 3.3 show the percentage of the variance in the Good Teaching Scale and the Overall Satisfaction item explained by selected background variables, field of study and institution. It is not possible from the CEQ data to link scores to particular courses (although that is possible within some universities by the institutions themselves). The analyses in Tables 3.2 and 3.3 therefore refer to fields of study rather than courses.

The Good Teaching Scale

Table 3.2 records the percentage of the variance in individual scores on the Good Teaching Scale 'explained by' various factors for each of ten fields of study. Two values are shown for each field of study. The value in the first row for a given field of study shows the percentage of variation explained by the university a graduate attended. The value in the second row shows the percentage of variation explained by the university a graduate attended adjusted for the influence of the other background characteristics.

[Figure 3.2 chart: mean percentage agreement (Overall Satisfaction) by field of study; values range from 53 to 76 per cent across the ten fields.]


Table 3.2 Percentage of Variance in the Good Teaching Scale Explained by Selected Graduate and Course Characteristics: Ten Specific Fields of Study, Bachelor Graduates, CEQ 2000

Field of Study          Sex   Age  NESB  Attendance  Fees  Activity  University
                         %     %     %       %        %       %          %
Accounting              0.2   2.2   0.1     0.3      1.1     0.3        6.3
  (38/4115)             0.2   2.4   0.0     0.4      0.7     0.1        6.2
Biology                 0.2   5.1   0.0     0.1      0.2     1.6        3.7
  (25/949)              0.1   5.1   0.0     0.0      0.1     1.7        3.7
Business admin          0.0   2.6   0.1     0.0      0.2     0.5        8.2
  (26/1536)             0.0   2.5   0.0     0.0      0.2     0.5        7.8
Comm. & journalism      0.5   4.5   0.1     0.3      0.0     0.7        7.2
  (24/1353)             0.3   3.8   0.1     0.1      0.1     0.6        6.2
Computer science        0.2   2.5   0.2     0.1      0.1     0.9        6.3
  (35/1814)             0.3   2.7   0.3     0.3      0.1     0.9        6.2
Law                     0.5   2.6   0.0     0.0      0.3     1.3       10.4
  (24/1331)             0.4   2.5   0.0     0.0      0.3     1.3       10.4
Marketing & distribn    0.2   1.8   0.0     0.3      0.2     0.5        8.5
  (34/2271)             0.2   1.7   0.0     0.2      0.2     0.4        8.8
Nursing - initial       0.0   3.5   0.3     0.3      0.7     0.8        6.6
  (27/2280)             0.0   3.2   0.4     0.1      0.8     0.7        6.6
Psychology              0.3   3.0   0.1     0.0      0.3     0.8        6.2
  (33/2446)             0.3   3.0   0.1     0.1      0.3     0.7        5.7
Teacher edn - primary   0.1   4.8   0.2     0.1      1.3     1.2        9.5
  (29/1699)             0.1   4.4   0.1     0.0      1.1     1.3        8.8

Notes

1. The first row for each field of study contains the percentage of variation in percentage agreement for the Good Teaching Scale that can be attributed to the corresponding variable (sex, age, etc) when that variable is considered by itself. All values are corrected for degrees of freedom.

2. The second row for each field of study contains the unique percentage of variation in percentage agreement for the Good Teaching Scale that can be attributed to the corresponding variable after the variation associated with the other variables was removed. Nested Ordinary Least Squares equations were used. All values are corrected for degrees of freedom. Negative values were converted to zero.

3. The values in parentheses are first the number of universities and then the number of respondents for each field of study.

4. Responses to both first and second major are used as appropriate.
5. Universities with fewer than 10 responses for the relevant field of study were omitted.
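The two-row calculation described in these notes can be illustrated with a small sketch. This is not the report's actual analysis: the data are synthetic, only three predictors are used, and adjusted R-squared stands in for the 'corrected for degrees of freedom' values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200
uni = rng.integers(0, 8, n)                    # 8 universities, coded 0..7
age = rng.integers(20, 60, n).astype(float)
sex = rng.integers(0, 2, n).astype(float)
gts = rng.normal(40, 15, n) + 2.0 * uni        # give university a real effect

def adj_r2(X, y):
    """Adjusted R-squared of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    k = X.shape[1] - 1                         # number of predictors
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - k - 1)

dummies = (uni[:, None] == np.arange(1, 8)).astype(float)  # 7 dummy columns

# First row of the table: variance explained by university alone.
alone = adj_r2(dummies, gts)

# Second row: the *unique* contribution of university -- the increase in
# adjusted R-squared when it is added after the other variables; negative
# values are converted to zero, as in note 2.
base = adj_r2(np.column_stack([age, sex]), gts)
full = adj_r2(np.column_stack([age, sex, dummies]), gts)
unique = max(full - base, 0.0)

print(round(alone * 100, 1), round(unique * 100, 1))
```

Because age and sex are generated independently of university here, the two rows come out nearly equal; in the report they diverge when graduate characteristics differ between universities.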


Those results indicate that the university the graduate attended explains an average of 7% of the variance in scores. The effect of university attended is greatest for Law (10%) and least for Biology (4%). It is interesting that the percentage of variance explained is highest for Law and Primary Teacher Education, where the alignment between field of study and course is probably greatest.

Primary Teacher Education is a field where the alignment between field of study and course is high. For initial Primary Teacher Education, the university attended explains 9.5% of the variance in the Good Teaching Scale. Although this may not seem a large proportion, it corresponds to a correlation coefficient of 0.31. In most survey research this would be considered a moderately high correlation. In fact the value probably understates the strength of the association because of the measurement errors involved. The value in the second row is 8.8%. The difference from the value in the first row reflects the contribution of other factors, especially age.
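The correspondence noted above between variance explained and a correlation coefficient is simply the square-root relationship r = sqrt(R-squared), which can be checked directly:

```python
# 9.5% of variance explained corresponds to a correlation of about 0.31,
# since r = sqrt(R^2).
r = 0.095 ** 0.5
print(round(r, 2))   # 0.31
```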

Among the other factors in the analysis only age 'explains' a significant amount of variance (typically 3%). Factors such as gender, ethnic background, mode of attendance, fee status or post-graduation labour market participation had a relatively slight impact. Hence, for Primary Teacher Education the university attended by a graduate explains a substantial amount of the variation in responses to the Good Teaching Scale, graduate characteristics explain a relatively small amount of this variation, and the amount of variation explained is relatively unaffected by differences among universities in the characteristics of their graduates.

In general the university attended by a graduate explains a substantial amount of the variance in the Good Teaching Scale. Graduate characteristics explain a relatively small amount of this variance, and the amount of variance explained is relatively unaffected by differences among universities in the characteristics of their graduates.

The Overall Satisfaction item

Table 3.3 presents the corresponding results for the Overall Satisfaction item. The major difference between Tables 3.2 and 3.3 is the size of the values: in almost every case, the absolute sizes of the values in Table 3.3 are less than the corresponding values in Table 3.2. This means that responses to the Overall Satisfaction item are less likely to be explained by the individual characteristics of graduates or by the university they attended. The smaller values probably in part reflect the lower reliability of a single-item measure as well as the summative (and less focused) nature of that item.

The adjusted percentage of variation in responses to the Overall Satisfaction item explained by the university a graduate attended is trivial for at least two fields of study: Accounting (0.6%) and Computer Science (0.6%). The values for Biology (1.4%), Communication and Journalism (2.8%), Marketing and Distribution (2.3%) and Psychology (2.2%) are fairly small. Nevertheless, for the four remaining fields of study, the percentage of variation in course satisfaction explained by differences between universities remains substantial.


Table 3.3 Percentage of Variance in the Overall Satisfaction Item Explained by Selected Graduate and Course Characteristics: Ten Minor Fields of Study, Bachelor Graduates, CEQ 2000

Field of Study          Sex   Age  NESB  Attendance  Fees  Activity  University
                         %     %     %       %        %       %          %
Accounting              0.0   1.8   0.1     0.0      0.0     0.2        2.5
  (38/4037)             0.0   1.7   0.1     0.1      0.0     0.2        2.5
Biology                 0.0   4.8   0.3     0.0      0.3     2.4        3.2
  (25/925)              0.0   4.8   0.3     0.0      0.2     2.2        3.1
Business admin          0.1   3.4   0.9     0.0      1.0     0.4        4.6
  (26/1486)             0.0   3.2   0.6     0.0      0.5     0.2        4.3
Comm. & journalism      0.0   3.5   0.0     0.0      0.3     0.6        4.7
  (24/1323)             0.0   3.6   0.1     0.1      0.4     0.6        4.4
Computer science        0.1   2.9   0.2     0.0      0.1     0.7        3.9
  (34/1746)             0.1   2.9   0.2     0.0      0.1     0.6        3.9
Law                     0.0   2.4   0.0     0.0      0.1     0.5        5.2
  (24/1302)             0.0   2.4   0.1     0.1      0.1     0.5        5.3
Marketing & distribn    0.0   1.8   0.3     0.0      0.2     0.3        3.5
  (32/2204)             0.0   1.8   0.3     0.0      0.4     0.3        3.6
Nursing - initial       0.0   2.6   0.2     0.0      0.8     0.9        4.1
  (26/2189)             0.0   2.6   0.3     0.0      0.9     0.9        3.8
Psychology              0.0   2.1   0.0     0.0      0.2     0.9        3.9
  (33/2398)             0.0   2.1   0.0     0.0      0.2     0.9        3.9
Teacher edn - primary   0.0   3.2   0.0     0.0      0.3     0.7        4.7
  (29/1619)             0.0   3.4   0.0     0.1      0.3     0.7        4.8

Notes

1. The first row for each field of study contains the percentage of variation in percentage agreement for the Overall Satisfaction Item that can be attributed to the corresponding variable (sex, age, etc) when that variable is considered by itself. All values are corrected for degrees of freedom.

2. The second row for each field of study contains the unique percentage of variation in percentage agreement for the Overall Satisfaction Item that can be attributed to the corresponding variable after the variation associated with the other variables was removed. Nested Ordinary Least Squares equations were used. All values are corrected for degrees of freedom. Negative values were converted to zero.

3. The values in parentheses are first the number of universities and then the number of respondents for each field of study in the analysis.

4. Responses to both first and second major are used as appropriate.
5. Universities with fewer than 10 responses for the relevant field of study were omitted.


Figure 3.3 Mean Percentage Agreement with the Good Teaching Scale by University: Bachelor Psychology Graduates, CEQ 2000

Mean Percentage Agreement Scores For Universities

It is possible to represent the extent to which institutions vary by plotting the mean percentage agreement scores within designated fields of study. Psychology was a field of study in which the amount of variance attributable to the university attended was about average. Figure 3.3 shows the distribution among universities of the mean percentage agreement for Psychology graduates on the Good Teaching Scale. Leaving aside five institutions with 20 or fewer respondents, the institution means for the percentage agreement score range from 25.8 to 53.5. In other words, the percentage agreement for the top institution is twice that for the bottom institution. Figure 3.3 shows that there are quite large differences among universities in the extent to which their graduates agree with items on the Good Teaching Scale.
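The summarising step described here (institution means within a field of study, leaving aside institutions with few respondents) can be sketched with pandas. The records and column names below are invented for illustration:

```python
import pandas as pd

# Toy CEQ-style records: one row per graduate, with that graduate's
# percentage-agreement score on the Good Teaching Scale.
df = pd.DataFrame({
    "university": ["A"] * 25 + ["B"] * 30 + ["C"] * 5,
    "gts_agree":  [30] * 25 + [50] * 30 + [80] * 5,
})

counts = df.groupby("university")["gts_agree"].count()
means = df.groupby("university")["gts_agree"].mean()

# Leave aside institutions with 20 or fewer respondents, as in the report.
means = means[counts > 20]
print(means.to_dict())   # {'A': 30.0, 'B': 50.0}
```

Institution C, with only 5 respondents, is excluded before any comparison is made, which is why its extreme mean of 80 never appears in the summary.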

Figure 3.4 displays the distribution of mean percentage agreement scores for Psychology graduates on the Overall Satisfaction item, but in a different format: the data are grouped into score categories. Again the range of percentage agreement scores (leaving aside five universities with fewer than 20 respondents) is considerable: from 46.8 to 84.5.

Initial primary teacher education was a field in which institutional differences contributed a relatively large percentage of the variance in scores on the Good Teaching Scale. Figure 3.5 represents the distribution of institutional scores within that field of study.

[Figure 3.3 chart: institution means on the Good Teaching Scale for Psychology graduates, ranging from 26 to 55 per cent.]


Figure 3.4 Mean Percentage Agreement with the Overall Satisfaction Item by University: Bachelor Psychology Graduates, CEQ 2000

It can be seen in Figure 3.5 that there is a considerable range in institutional means on the Good Teaching Scale. The range (institutions with fewer than 20 graduates have been excluded) was from 22.8 to 67.2. The interquartile range was 16 percentage points (from 35 to 51 per cent).

Figure 3.5 Mean Percentage Agreement with the Good Teaching Scale by University: Bachelor Graduates in Initial Primary Teacher Education, CEQ 2000

[Figure 3.5 chart: institution means on the Good Teaching Scale for initial primary teacher education graduates, ranging from 23 to 67 per cent.]

[Figure 3.4 chart: histogram of the number of institutions in each Overall Satisfaction score category, from 45-49 through 75-84 per cent.]


Figure 3.6 Mean Percentage Agreement with the Good Teaching Scale by University: Bachelor History Graduates, CEQ 2000

Figure 3.6 displays the range of institutional means for History graduates on the Good Teaching Scale. In this case the spread is a little smaller than for initial primary teacher education. However, the range is 30 percentage points and the interquartile range is 11 percentage points (from 62 per cent to 73 per cent).

Summary

Three characteristics of graduates and their courses are associated with differences in CEQ scores for the Good Teaching Scale and the Overall Satisfaction item. First, older graduates rated their courses more favourably than did younger graduates. However, this was observed for graduates over the age of 40 years, and such graduates constitute only 10 per cent of the population of Bachelor degree graduates. The inference from this is that courses with substantial proportions of older graduates will tend to have higher levels of satisfaction.

Second, there were differences among fields of study. Generally, graduates from the humanities and social science fields of study recorded higher scores on the Good Teaching Scale and the Overall Satisfaction item than did graduates from science fields of study. However, there were substantial differences between specific fields of study within broad fields. This prompts the question as to whether there are differences in approaches to teaching among fields of study that could usefully be evaluated. It also indicates that comparisons between institutions need to make allowance for differences in the enrolment profile of the institutions.

Third, the analyses in this chapter indicate that within fields of study there are substantial differences between institutions in terms of Good Teaching scores and Overall Satisfaction. There could be a variety of explanations for these differences but they do invite further investigation.

[Figure 3.6 chart: institution means on the Good Teaching Scale for History graduates, ranging from 52 to 82 per cent.]


Using the CEQ as Stimulus to Course Improvement

In most conceptions of universities, high quality scholarship, research and teaching are central. Within these conceptions university teaching is considered to be concerned with supporting learning that promotes understanding of deeper structures and concepts in an area of knowledge. Over the past decades universities have been challenged to sustain high quality teaching and learning at a time of rapid expansion in student numbers, greater demand for places by a wider range of young people, constraints on resources and an emphasis on research productivity. In response, universities have made use of different types of specialist education units to provide advice and support for teaching staff. They have used information provided by students through course evaluation questionnaires to indicate where attention is most needed and to help staff identify aspects of teaching that need improvement. A number of institutions make use of detailed analysis of their own CEQ results as part of a range of strategies to improve undergraduate teaching. Broader reviews of teaching in various disciplines have also provided a stimulus to enhance university teaching.

In this chapter attention is given to the use of the CEQ as a stimulus to course improvement. That was, after all, the intention of those who first developed the questionnaire in the early 1980s. The focus of the chapter is on the Good Teaching Scale. It begins by considering evidence from other investigations and then examines data from the current and previous two years to show that, carefully considered, the instrument can point to areas where action is needed.

The CEQ and Other Aspects of Learning

The CEQ is best regarded as a broad indicator of the quality of learning environments. It can point to areas where further exploration is needed but it does not of itself suggest what should be done in response to those results. Its interpretation requires professional judgement that explores the range of possible influences that might result in the observed pattern. The Good Teaching Scale is the focus of the discussion in this chapter. Its psychometric properties are very sound and it is at the heart of the process of teaching and learning.

Approaches to Learning

From the results of other research studies it appears that scores on the Good Teaching Scale are associated with approaches to learning. Wilson et al (1997) report a correlation coefficient of 0.24 between a Deep Approach to Learning measure (of the Approaches to Study Inventory) and the Good Teaching Scale. This is what would be expected on the basis of the theory around which the CEQ was developed. Trigwell and Prosser (1991) report similar results from a different sample of students. A next step would be to extend these analyses to incorporate a multilevel conception, so that the CEQ scales were seen as reflecting the institutional environment in a discipline and the study approaches were seen as operating at the individual level. Wilson et al (1997) also reported a correlation coefficient of 0.47 between the Good Teaching Scale and student academic achievement. This provokes a question as to whether those with higher scores on the CEQ learn more effectively and therefore achieve higher grades, or whether those who receive higher grades feel a higher level of satisfaction with their course. It is possible that the relationship is reciprocal.


Course Orientation

Wilson et al (1997) report two analyses using data from the 1993 and 1994 national CEQ surveys. In each case the focus was on comparing CEQ results for programs known to have adopted new approaches to teaching and learning with other programs in the same field. One of the analyses was based on graduates of medical programs and the other involved psychology. The first of these investigations considered the graduates from ten medical programs. One of these programs had adopted a problem-based approach to its teaching whereas the others had followed a more traditional approach. Data from the 1993 and 1994 surveys of graduates were analysed. There was no significant difference on the Good Teaching Scale between the approaches, but graduates from the problem-based program had significantly higher scores than other programs on the Clear Goals and Standards Scale and the Generic Skills Scale. Graduates from that program also scored higher than most other programs on the Appropriate Assessment Scale. A second investigation compared the graduates of an experiential and action learning psychology program with other psychology graduates in the 1993 and 1994 surveys. Graduates from the experiential program recorded the highest scores on Good Teaching in both 1993 and 1994, and the highest and third highest scores on Generic Skills in the 1993 and 1994 surveys respectively. There is a need to build a wider body of case knowledge through further investigations such as these, but these examples suggest that the instrument is sensitive to differences in approach.

Innovative Practice

A national seminar in 1996 involved key staff from courses that had scored at or near the top on CEQ scales outlining what they did to support good teaching and learning practices (Griffith University, 1997). Those presentations indicated approaches and strategies that had been part of programs that were regarded as successful by graduates. CEQ data had resulted in the selection of programs in which it was possible to identify various strategies and approaches that had been effective. It would be constructive for a continuing dialogue among institutions to be established, and for regular descriptions of practice to be provided for courses where the CEQ indicates that the quality of teaching and learning is well regarded by graduates.

Inferring reasons for differences in scores requires judgement and a will to follow through with other forms of data analysis in particular departments and faculties, as in the examples described by Wilson et al (1997). Determining and implementing improvement processes that will be appropriate in particular contexts requires a different set of judgements and inferences; but those judgements are more likely to result in success if they are based on information about what appears to work in comparable programs elsewhere. The value of the CEQ is enhanced by its use as a national indicator of teaching-learning environments in higher education over a number of years. A national perspective provides comparative data as a basis for consideration of what is done in courses and programs where high CEQ scores are recorded (perhaps invoking the term 'best practice'). A perspective such as this is facilitated by the full publication of results for institutions within each field of study.

Institutional Patterns for Initial Primary Teacher Education

In a previous section of the report it has been noted that there are differences between institutions within fields of study. It has also been noted that characteristics of graduates other than their field of study and institution have little influence on their CEQ responses. In this


section institutional differences on the Good Teaching Scale are examined in greater detail. Information from the Good Teaching Scale could provide a basis for individual institutions to respond to areas of concern by gathering additional information at local level and introducing changes. At a broader level, analysis of Good Teaching scores by institution could also be used to identify the location of good practice. Both of these can be thought of as formative purposes directed towards the improvement of undergraduate teaching.

In this analysis the mean percentage agreement for the scale has been used as the summary statistic for each institution within a field of study (since it is more easily interpreted and understood). In addition to examining the mean percentage agreement, a confidence interval based on the standard error was estimated (see the note on standard errors below). The confidence interval for the comparison of means was computed as 1.39 times the standard error, and a difference between means that was greater than the sum of these intervals was considered statistically significant at the five per cent level. This provides a guide to the certainty that the difference is a real difference. However, it does not mean that instances where there is slight overlap should be dismissed as of no consequence, or that where there is no overlap one can be completely certain that the difference is real. In terms of implications for course improvement there is no reason to assume that differences that just fail to reach significance should be ignored and those that do reach significance should prompt an automatic response. One reason for this caution is that the data do not constitute a properly constructed sample but rather the set of respondents from a population. In these circumstances, and more generally, replication will provide an additional guide as to whether observed differences represent underlying real differences or have arisen by chance or through bias. Of course the significance level reflects the probability that the difference arose by chance; it does not necessarily reflect the magnitude of the difference. A large difference between two means based on small numbers of respondents from institutions with wide internal variability might not be significant at the 5 per cent level. In contrast a smaller difference between two means with less internal variability, large numbers of respondents and a high response rate might be significant.
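The comparison rule described above (a difference is treated as significant at the five per cent level when it exceeds the sum of the two 1.39 x SE intervals) can be sketched as follows. The means, standard deviations and respondent counts are invented for illustration:

```python
import math

def half_interval(sd, n):
    """1.39 times the standard error of an institution's mean."""
    return 1.39 * sd / math.sqrt(n)

def significantly_different(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """True when the gap between two institution means exceeds the sum of
    the two intervals, i.e. the +/- 1.39 SE error bars do not overlap
    (approximately the 5 per cent level)."""
    gap = abs(mean_a - mean_b)
    return gap > half_interval(sd_a, n_a) + half_interval(sd_b, n_b)

# Invented example: two institutions of average size.  With sd = 25 and
# n = 60 each interval is about 4.5 points, so a 14-point gap is
# significant while a 4-point gap is not.
print(significantly_different(52.0, 25.0, 60, 38.0, 25.0, 60))   # True
print(significantly_different(52.0, 25.0, 60, 48.0, 25.0, 60))   # False
```

This illustrates the point made in the text: the verdict depends on the respondent counts and internal variability as much as on the size of the gap itself.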

Good Teaching Scores from CEQ 2000

Figure 4.1 shows the mean percentage agreement on the Good Teaching Scale from CEQ 2000 for initial primary teacher education graduates. The analysis has been restricted to institutions where there were at least 20 respondents to this scale of CEQ 2000. For each mean it also shows an error bar of ± 1.39 times the standard error. Although few of the differences between institutions are statistically significant, there are some differences between the ends of the distribution that are substantial. Institutions B, AA and N could be considered on the basis of these data to represent 'good practice' (they represent the top three of 29 institutions). For these there can be some confidence that more than 50 per cent of graduates are in agreement with the items of the Good Teaching Scale. Based on Figure 4.1 there are 15 institutions that are significantly lower than this level.

5 The standard error for an institution depends upon the dispersion (or variability) of the scores within the institution and the number of respondents from the institution. The finite population correction was not applied although there could be an arguable case for this. The effect would be to reduce the standard errors somewhat. For other reasons also the error calculations used in these analyses may overstate their magnitude and result in a conservative view of the differences between pairs of institutions.


Figure 4.1 Percentage Agreement with the Good Teaching Scale for Initial Primary Teacher Education Graduates: CEQ 2000

There are other comparisons that could be made, perhaps of institutions that have similar characteristics in other respects. Moreover the differences could arise from factors other than teaching practices in the institutions. The important point is that the patterns highlight instances where further investigation appears warranted, followed by reflection and review.

Whether a difference between two institutions is significant depends on the numbers of respondents from each and the variability of the responses within each. However, for institutions of average size it would appear that, in this field of study, a difference in mean percentage agreement of 10 to 15 percentage points is often statistically significant and worthy of further investigation.

Good Teaching Scores Over Three Years

The data in Figure 4.1 refer to CEQ 2000. Time series data for the CEQ provide additional perspectives that can identify areas where course improvement might be warranted. The responses of graduates can vary from year to year for a variety of reasons. For the past three years of the CEQ (2000, 1999 and 1998) the between-cohort correlation coefficients for the Good Teaching Scale for initial primary teacher education ranged from 0.68 to 0.75 (using only data where there were 20 or more respondents). This indicates that there is a moderately high level of stability in these scores.
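A between-cohort correlation of this kind is simply the Pearson correlation of institution means across two successive survey years. A sketch with invented institution means:

```python
import numpy as np

# Invented institution means on the Good Teaching Scale in two successive
# years (only institutions with 20 or more respondents in both years
# would enter the real calculation).
gts_1999 = np.array([30.0, 35.0, 41.0, 44.0, 50.0, 55.0, 62.0])
gts_2000 = np.array([33.0, 33.0, 45.0, 40.0, 52.0, 58.0, 60.0])

# Pearson correlation of the two sets of institution means.
r = np.corrcoef(gts_1999, gts_2000)[0, 1]
print(round(r, 2))
```

A value in the region of 0.7, as reported above for the real data, indicates that institutions tend to keep their relative positions from one graduating cohort to the next.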

Figure 4.2 records the mean percentage agreement scores on the Good Teaching Scale over the past three years. From these data it can be seen that institutions M, E and K have been consistently at the bottom of the distribution, and on that basis some checking against other data and review of policies and practices would be warranted. Institutions Y, G and D are also consistently in the lower quarter of the distribution. Correspondingly the institution designated as '…' has been accorded high scores by its graduates, as has institution N for 1999 and 2000 (there were too few respondents in 1998 for those data to be included).

[Figure 4.1 chart: mean percentage agreement (Good Teaching Scale) with error bars, by institution, ordered M, E, K, L, D, U, Y, G, I, T, C, BB, H, S, J, A, R, Q, P, V, W, BA, O, X, Z, F, N, AA, B.]


Figure 4.2 Mean Percentage Agreements for the Good Teaching Scale: Initial Primary Teacher Education, CEQ 1998, CEQ 1999, CEQ 2000

Figure 4.2 also provides evidence of fluctuations between years. Institution F has shown a steady improvement over these three years. In contrast institution R has shown a steady decline. There was a substantial improvement from 1999 to 2000 for institution Z, a remarkable drop in 1999 (recovered in 2000) for institution W, and a drop between 1999 and 2000 for institutions L and H. Whether these represent different cohorts progressing through the programs, changes in program organisation and practice, staff changes or changes in the representativeness of the respondents to the CEQ requires much more detailed local knowledge. However, the data do suggest some consideration be given to the factors that might have resulted in the changes.

Responding to CEQ Data

It has been argued through this chapter, and through the report as a whole, that results from the CEQ need to be interpreted carefully in the light of local knowledge to be of greatest value in the process of course improvement. Moreover, the CEQ is an indicator: it suggests where one might begin but it does not provide a route to course improvement. Because the CEQ does not reference absolute levels of performance or standards, insights that can stimulate course improvement are based on comparative data.

Three types of comparison are possible. The first is comparisons among institutions from one survey. In the analysis of data for initial primary teacher education it was suggested that there was a set of institutions that appeared to represent good practice. In principle it should be possible to examine the essential features of those courses to determine whether others could adapt practices from those venues. A more focused version of this comparison would involve a sharing of information about good practices among institutions that were similar in other ways.

A second type of comparison can take place within the same institution over time. In this way information from the CEQ (as well as locally developed instruments) could be used to monitor the effects of planned and unplanned changes in a course or its context (e.g. the nature of its intake). This type of comparison requires maintaining data over time and reviewing those data each year. Within-institution analyses of this type can be informed by linking CEQ data with other information about programs, structures and even graduate characteristics.

[Figure 4.2 chart appears here: mean percentage agreement (plotted on a scale from 20 to 70) on the Good Teaching Scale for each institution in initial primary teacher education, with separate series for GTS 1998, GTS 1999 and GTS 2000.]

A third type of comparison combines the first two approaches: information from the CEQ is used across institutions and over time. In the case of initial primary teacher education discussed in this chapter, this two-dimensional comparison can provide more extensive insights than either approach on its own. In both cross-sectional and time-series analyses the uncertainties in survey data need to be recognised, so that the possibility that a result may have arisen by chance or by an extraneous circumstance is considered.

Information from the CEQ does not by itself provide a direct indication of possible actions. It can provide a stimulus to course improvement rather than a recipe. Bruck, Hallett, Hood, McDonald and Moore (2001) have described a process of using both survey data about student satisfaction (not the CEQ, but the principle is similar) and qualitative data from structured focus groups to build a semester-long staff development program. This program was based on building staff teaching communities through which there were opportunities for staff to explore aspects of their teaching practice. This exploration included action research projects designed to explore issues that had arisen from the data. Bruck et al. (2001) report that this initiative resulted in significant increases in students' levels of satisfaction with their course experience. There is probably a range of different ways in which data such as these can be used in course improvement, provided there is an orientation to continuous course improvement as a central issue in higher education.


Properties of the CEQ

The Scales

Five scales have been presumed to underlie responses to 23 of the items in the CEQ. The 25th item has always been treated separately as an overall measure of satisfaction, and the 16th item has never been included in any of the scales; its content has changed over time.

The five scales are:

• The Good Teaching Scale (GTS);
• The Clear Goals and Standards Scale (CGS);
• The Appropriate Workload Scale (AWS);
• The Appropriate Assessment Scale (AAS); and
• The Generic Skills Scale (GSS).

The allocation of the items to the scales is shown in Figure 5.1.

The use of scales instead of individual items is intended both to simplify the presentation of results and to improve the robustness of the measures. Simplification is achieved by combining results for several closely related items in a single statistic. Robustness is enhanced because what is being measured does not depend on the particular wording of one item but draws strength from the group of items.
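As a sketch of how several items combine into a single scale statistic, consider the Appropriate Workload Scale, whose negatively worded items are reverse-scored before aggregation. The responses, the recoding rule (6 minus the response) and the simple averaging shown here are illustrative assumptions rather than the report's exact scoring procedure:

```python
# Hypothetical 1-5 responses to the four Appropriate Workload Scale items,
# keyed by item number. Items 4, 21 and 23 are negatively worded.
responses = {4: 2, 14: 4, 21: 1, 23: 2}
reversed_items = {4, 21, 23}

# Assumed recoding: a reversed item's response r becomes 6 - r,
# so that a higher score always means a more appropriate workload.
recoded = [6 - r if item in reversed_items else r
           for item, r in responses.items()]
scale_score = sum(recoded) / len(recoded)
print(scale_score)  # 4.25
```

Whatever the exact aggregation used, the point stands: a wording quirk in any one item is diluted across the group.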

Reliabilities of the Scales

The extent to which scales reliably measure the construct behind the items is indicated by a reliability coefficient. There are many forms of reliability coefficient, but they all can have values ranging from 0 (unreliable) to 1 (completely reliable). Reliability coefficients indicate the extent to which one could expect to obtain the same score on several different administrations of the scale, and the consistency between the items making up the scale. Table 5.1 records two reliability coefficients for each of the CEQ scales.6

Table 5.1 Reliability of the CEQ Scales: Bachelor Degree Graduates

CEQ Scale                        Cronbach Alpha    Composite Scale Reliability
Good Teaching Scale                   0.88                 0.91
Clear Goals and Standards             0.78                 0.82
Appropriate Workload Scale            0.71                 0.76
Appropriate Assessment Scale          0.72                 0.77
Generic Skills Scale                  0.78                 0.84

6 These calculations were performed by Dr Gerald Elsworth. The Composite Scale Reliability is calculated using the Flieshman formula from the one-factor polychoric correlation model in LISREL.
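To make the internal-consistency idea concrete, a minimal sketch of coefficient alpha on hypothetical Likert data (not the CEQ dataset) follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 responses from six graduates to a four-item scale.
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 2, 3],
                   [5, 5, 5, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [1, 2, 1, 2]])
print(round(cronbach_alpha(scores), 2))  # 0.96
```

When the items move together, the variance of the total greatly exceeds the sum of the item variances and alpha approaches 1; uncorrelated items drive it towards 0.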


Good Teaching Scale (six items)
3. The teaching staff of this course motivated me to do my best work.
7. The staff put a lot of time into commenting on my work.
15. The staff made a real effort to understand difficulties I might be having with my work.
17. The teaching staff normally gave me helpful feedback on how I was going.
18. My lecturers were extremely good at explaining things.
20. The teaching staff worked hard to make their subjects interesting.

Clear Goals and Standards Scale (four items)
1. It was always easy to know the standard of work expected.
6. I usually had a clear idea of where I was going and what was expected of me in this course.
13. It was often hard to discover what was expected of me in this course.
24. The staff made it clear right from the start what they expected from students.

Appropriate Workload Scale (four items)
4. The workload was too heavy.
14. I was generally given enough time to understand the things I had to learn.
21. There was a lot of pressure on me to do well in this course.
23. The sheer volume of work to be got through in this course meant it couldn't all be thoroughly comprehended.

Appropriate Assessment Scale (three items)
8. To do well in this course all you really needed was a good memory.
12. The staff seemed more interested in testing what I had memorised than what I had understood.
19. Too many staff asked me questions just about facts.

Generic Skills Scale (six items)
2. The course developed my problem-solving skills.
5. The course sharpened my analytic skills.
9. The course helped me develop my ability to work as a team member.
10. As a result of my course, I feel confident about tackling unfamiliar problems.
11. The course improved my skills in written communication.
22. My course helped me to develop the ability to plan my own work.

Figure 5.1 Scales and Items of the Course Experience Questionnaire

One of these is coefficient alpha, possibly the most commonly used measure of reliability in social research. Although it has been argued that values of alpha can be inflated when there are large numbers of items in a scale, the scales in this case are rather short. Other things being equal, scales with more items are more reliable. In fact, coefficient alpha provides a lower-bound estimate of the reliability of a scale. The data in Table 5.1 are consistent with that view. An index that makes full allowance for the fact that responses might not represent equal intervals is the Composite Scale Reliability. It will typically yield slightly higher reliability estimates. Of course, measures of internal consistency cannot be applied to a single item such as the Overall Satisfaction item.
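One common formula for the composite reliability of a one-factor congeneric model is (Σλ)² / ((Σλ)² + Σ(1 − λ²)) over the standardised loadings λ. Whether this is exactly the computation used for Table 5.1 is an assumption, but applying it to the Good Teaching Scale loadings from Model 2 of Table 5.3 reproduces the tabled 0.91:

```python
def composite_reliability(loadings):
    """Composite reliability from standardised loadings, assuming each
    item's error variance is 1 - loading**2."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

# Good Teaching Scale standardised loadings, Model 2 of Table 5.3.
gts = [0.78, 0.81, 0.77, 0.85, 0.80, 0.75]
print(round(composite_reliability(gts), 2))  # 0.91
```

This also illustrates why composite reliability typically exceeds alpha: it weights items by their actual loadings rather than treating them as interchangeable.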

Structure of the CEQ

The existence of groups of items relating to common underlying dimensions in the CEQ, and the composition of those groups, had been established through successive exploratory factor analyses reported in previous reports of the CEQ (Johnson, 1999). This structure was confirmed by analyses of the CEQ 1999 data (Long & Hillman, 2000). In their analysis of the CEQ data for 1999, confirmatory factor analysis was used to establish that the data provided a good fit to the model underlying the CEQ.

Exploratory Factor Analysis

Table 5.2 shows the pattern of factor loadings from which the structure of the instrument can be inferred.

Table 5.2 Factor Loadings derived from Analysis of CEQ Items: Bachelor Degree Graduates

No.   CEQ Item                                                                                          Scale  Loading  Cross
17    The teaching staff normally gave me helpful feedback on how I was going.                          GTS    0.78
7     The staff put a lot of time into commenting on my work.                                           GTS    0.77
15    The staff made a real effort to understand difficulties I might be having with my work.           GTS    0.76
18    My lecturers were extremely good at explaining things.                                            GTS    0.70
20    The teaching staff worked hard to make their subjects interesting.                                GTS    0.67
3     The teaching staff of this course motivated me to do my best work.                                GTS    0.66     0.30
16 n  The assessment methods employed in this course required an in-depth understanding of the
      course content.                                                                                   GTS    0.38
10    As a result of my course, I feel confident about tackling unfamiliar problems.                    GSS    0.75
2     The course developed my problem-solving skills.                                                   GSS    0.73
5     The course sharpened my analytic skills.                                                          GSS    0.71
22    My course helped me to develop the ability to plan my own work.                                   GSS    0.65
11    The course improved my skills in written communication.                                           GSS    0.59
9     The course helped me develop my ability to work as a team member.                                 GSS    0.52
1     It was always easy to know the standard of work expected.                                         CGS    0.76
13 r  It was often hard to discover what was expected of me in this course.                             CGS    0.71
6     I usually had a clear idea of where I was going and what was expected of me in this course.       CGS    0.71
24    The staff made it clear right from the start what they expected from students.                    CGS    0.60     0.44
21 r  There was a lot of pressure on me to do well in this course.                                      AWS    0.77
4 r   The workload was too heavy.                                                                       AWS    0.75
23 r  The sheer volume of work to be got through in this course meant it couldn't all be thoroughly
      comprehended.                                                                                     AWS    0.74
14    I was generally given enough time to understand the things I had to learn.                        AWS    0.55     0.35
8 r   To do well in this course all you really needed was a good memory.                                AAS    0.77
12 r  The staff seemed more interested in testing what I had memorised than what I had understood.      AAS    0.76
19 r  Too many staff asked me questions just about facts.                                               AAS    0.71

r = a reversed item; n = not used in subsequent analyses. Loadings below 0.30 are omitted. "Loading" is the item's loading on its own scale's factor; a value in the "Cross" column is a loading of at least 0.30 on another factor.
GTS = Good Teaching Scale; GSS = Generic Skills Scale; CGS = Clear Goals and Standards Scale; AWS = Appropriate Workload Scale; AAS = Appropriate Assessment Scale


Factor analysis investigates the pattern of correlations between item responses and seeks to establish the structure of underlying factors that could explain the patterns of variation in items. Factor loadings are the correlations between the item score and the underlying factor.

The CEQ Scale column of Table 5.2 shows the group or scale to which the item was assigned based on the results of factor analyses of the CEQ 2000 survey data. Five factors had eigenvalues greater than one and, in accordance with convention, factor loadings less than 0.30 have been omitted. The five factors extracted accounted for 58 per cent of the variance in item responses. The variance explained and the factor loadings closely matched the corresponding statistics in previous analyses.
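The "eigenvalues greater than one" convention (the Kaiser criterion) can be sketched on synthetic data; the two-factor structure, loadings and sample size below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: two latent factors, four items each, plus noise.
n = 500
latent = rng.standard_normal((n, 2))
loadings = np.zeros((2, 8))
loadings[0, :4] = 0.8   # items 1-4 load on factor 1
loadings[1, 4:] = 0.8   # items 5-8 load on factor 2
items = latent @ loadings + 0.5 * rng.standard_normal((n, 8))

# Eigenvalues of the inter-item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigenvalues > 1).sum())
print(n_factors)  # the criterion recovers the 2 factors built in
```

With items this strongly clustered, two eigenvalues sit well above one and the rest well below, which is the pattern the CEQ analyses rely on when retaining five factors.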

The factor analyses confirmed previous findings that the items could be grouped into five scales for discussion purposes. The Overall Satisfaction item (question 25) was kept separate. In its current version item 16 was designed to strengthen the Appropriate Assessment Scale. However, it is apparent from Table 5.2 that item 16 groups with the Good Teaching Scale items. It is excluded from the current CEQ analyses of scales. In summary, analyses of data from the CEQ survey of 2000 replicated the exploratory factor analyses of previous years.

Confirmatory Factor Analysis

Confirmatory factor analysis7 was conducted in order to investigate the extent to which measures based on the five clusters of items identified separately identifiable constructs. Confirmatory factor analysis (CFA) differs from exploratory factor analysis (EFA) in that CFA leads to a single 'identified' solution that can be tested for 'goodness-of-fit' against the data. A CFA thus tests a 'measurement theory' against an available data set and assesses the goodness of fit of the measurement theory to the observed data. In conducting these analyses the procedures used by Long and Hillman (2000) were followed.

The first analysis specified that each item was associated with one and only one latent variable or factor. All other factor loadings were fixed to zero, as were all correlations among the 'errors' of the items. All factors were allowed to be correlated. A variety of 'goodness-of-fit' measures8 and the coefficients associated with each of the items are recorded in Table 5.3. In addition to the five-factor theoretical model, two other models were tested. Model 2 tested a separate one-factor model for each of the presumed scales. In Model 3 it was assumed that a single factor would best explain the patterns in the data.

7 Dr Gerald Elsworth of the University of Melbourne conducted this part of the analysis. The confirmatory factor analyses were carried out with the structural equation modelling (SEM) program LISREL.

8 In analyses with large numbers of factors and items, and a moderately large sample such as this, chi-square is regarded as an index that is excessively sensitive to lack of fit. A number of 'comparative fit' (or 'lack-of-fit') indices have accordingly been developed which, in various ways, compare the chi-square of the fitted model to that of a baseline or 'null' model. For the present analysis, the 'comparative fit index' (CFI) was 0.97, the 'root mean square residual' (RMSR) was 0.101 and the 'root mean square error of approximation' (RMSEA) was 0.038. The CFI can be thought of as a proportional measure of goodness of fit (maximum 1.0), while the RMSR and RMSEA can be thought of as measures of 'lack of fit'. Values of standard goodness-of-fit indices over 0.9 are frequently regarded as satisfactory, as are values of RMSEA below 0.05. Hence there was a close fit of the model to the observed data.
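The relationship between RMSEA and the model chi-square can be illustrated with the usual point formula, √(max(χ² − df, 0) / (df·(N − 1))); the formula is the standard one, assumed here rather than quoted from the report. Applying it to the Model 1 figures in the notes to Table 5.3 recovers the tabled value of 0.045:

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Point estimate of the root mean square error of approximation."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Model 1 in Table 5.3: ChiSq = 23,068.9 on 220 d.f., N = 51,840.
print(round(rmsea(23_068.9, 220, 51_840), 3))  # 0.045
```

Note how the sample size enters the denominator: this is why RMSEA, unlike raw chi-square, stays interpretable with more than fifty thousand cases.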


Table 5.3 Confirmatory Factor Analyses, Bachelor Degree Graduates, CEQ 2000

No.  CEQ Items                                                                    Factor Loading     Sq. Multiple Corr.
     Model . . .                                                                  (1)   (2)   (3)    (1)   (2)   (3)

Good Teaching Scale
3.   The teaching staff of this course motivated me to do my best work.           .78   .78   .74    .61   .61   .55
7.   The staff put a lot of time into commenting on my work.                      .77   .81   .73    .60   .66   .54
15.  The staff made a real effort to understand difficulties I might be having... .73   .77   .72    .54   .59   .51
17.  The teaching staff normally gave me helpful feedback on how I was going.     .82   .85   .79    .67   .72   .62
18.  My lecturers were extremely good at explaining things.                       .76   .80   .73    .58   .64   .54
20.  The teaching staff worked hard to make their subjects interesting.           .71   .75   .69    .50   .56   .48

Clear Goals and Standards Scale
1.   It was always easy to know the standard of work expected.                    .71   .75   .64    .50   .57   .41
6.   I usually had a clear idea of . . . what was expected of me in this course.  .75   .79   .69    .57   .62   .47
13.* It was often hard to discover what was expected of me in this course.        .67   .67   .59    .45   .45   .35
24.  The staff made it clear right from the start what they expected from
     students.                                                                    .71   .69   .65    .50   .48   .42

Appropriate Workload Scale
4.*  The workload was too heavy.                                                  .63   .70   .47    .39   .49   .22
14.  I was generally given enough time to understand the things I had to learn.   .70   .54   .61    .48   .29   .37
21.* There was a lot of pressure on me to do well in this course.                 .54   .66   .28    .29   .44   .08
23.* The sheer volume of work . . . couldn't all be thoroughly comprehended.      .66   .73   .51    .44   .53   .26

Appropriate Assessment Scale
8.*  To do well in this course all you really needed was a good memory.           .61   .69   .50    .38   .48   .25
12.* The staff seemed more interested in testing what I had memorised . . .       .79   .83   .64    .62   .69   .41
19.* Too many staff asked me questions just about facts.                          .62   .66   .55    .38   .43   .30

Generic Skills Scale
2.   The course developed my problem-solving skills.                              .77   .80   .71    .59   .64   .50
5.   The course sharpened my analytic skills.                                     .76   .80   .70    .58   .64   .51
9.   The course helped me develop my ability to work as a team member.            .40   .43   .35    .16   .19   .12
10.  As a result of my course, I feel confident about tackling unfamiliar
     problems.                                                                    .71   .76   .66    .50   .58   .44
11.  The course improved my skills in written communication.                      .61   .61   .60    .38   .37   .36
22.  My course helped me to develop the ability to plan my own work.              .61   .66   .60    .38   .44   .36

Notes
1. Models: (1) is a first-order confirmatory factor analysis (CFA) model with five factors, each item loading on only one factor, and no additional correlations between the unexplained variance in the items. (2) comprises five separate one-factor 'congeneric' measurement models. (3) is a first-order CFA model with one factor only and no additional correlations between the unexplained variance in the items.
2. All models are calculated from a polychoric correlation matrix and a corresponding asymptotic matrix with weighted least squares.
3. N of cases: 51,840.
4. Fit statistics:
   Model 1: ChiSq = 23,068.9 (d.f. 220); RMSEA = 0.045; SRMR = 0.076; AGFI = 0.98; CFI = 0.97.
   Model 3: ChiSq = 45,867.1 (d.f. 230); RMSEA = 0.062; SRMR = 0.150; AGFI = 0.96; CFI = 0.95.
   Model 2: Good Teaching Scale: ChiSq = 2,777.7 (d.f. 9); RMSEA = 0.077; SRMR = 0.046; AGFI = 0.98; CFI = 0.99. Clear Goals and Standards: ChiSq = 33.8 (d.f. 2); RMSEA = 0.018; SRMR = 0.005; AGFI = 1.00; CFI = 1.00. Appropriate Workload Scale: ChiSq = 667.7 (d.f. 2); RMSEA = 0.080; SRMR = 0.029; AGFI = 0.99; CFI = 0.99. Appropriate Assessment Scale: saturated model, no fit statistics calculated. Generic Skills Scale: ChiSq = 3,828.5 (d.f. 9); RMSEA = 0.090; SRMR = 0.055; AGFI = 0.98; CFI = 0.99.
5. Correlations between the factors in Model 1: GT/G&S = .75; GT/AW = .50; GT/Ass = .56; GT/GSk = .63; G&S/AW = .56; G&S/Ass = .47; G&S/GSk = .57; AW/Ass = .45; AW/GSk = .31; Ass/GSk = .50.
* = a reversed item.


Table 5.3 presents the results of three sets of confirmatory factor analyses of responses to the CEQ by graduates of bachelor-level courses. The first model is the theoretical set of five factors with each item loading on only one factor. The second model is really a set of models consisting of separate analyses: first the six items of the Good Teaching Scale predicted by one latent variable, then the four items of the Clear Goals and Standards Scale, and so on. The third model assumed that all items reflected a single underlying trait, perhaps 'satisfaction with course'. Values for two statistics are presented for each item in each model. The factor loading reflects the effect the underlying dimension has on responses to the item, and the squared multiple correlations show the extent to which the model explains variance in the item. A number of goodness-of-fit statistics are presented for each model in the notes.

In the first model the factor loadings are generally fairly high. As noted by Long and Hillman, Item 9 (The course helped me develop my ability to work as a team member) has a lower factor loading than desirable. Item 21 from the Appropriate Workload Scale (There was a lot of pressure on me to do well in this course) also had a low factor loading. The squared multiple correlations are similarly fairly high overall, except for these items. The measures of fit that are unaffected by sample size show very good levels of fit. It can be concluded that the CEQ model fits the pattern of responses to the items well. Moreover, the results are similar to those reported by Long and Hillman (2000) for the previous year's CEQ data. In terms of the structure of each separate scale, the results (Model 2) show acceptable factor loadings and squared multiple correlations for most items.

The third model tests the result of assuming that a single factor (perhaps overall satisfaction) could explain the variation in CEQ responses. As would be expected, the fit of most items is worse than for either of the other two models. However, a single-trait model does fit the responses to the CEQ reasonably well.

Possible Modifications to the CEQ Scales

It was possible to investigate whether changes to the model would improve its fit. Such an investigation allows particular items to load on more than one scale. Two items from the Appropriate Workload Scale were linked to other scales: I was generally given enough time to understand the things I had to learn and There was a lot of pressure on me to do well in this course (reverse scored). The first of these was associated with the Good Teaching Scale and the second was associated with the Generic Skills Scale. These patterns were also shown in the analysis of the CEQ 1999 data.

In addition, large associations were found between several Generic Skills indicators: The course developed my problem-solving skills with The course sharpened my analytic skills, and The course helped me develop my ability to work as a team member with As a result of my course, I feel confident about tackling unfamiliar problems. The resulting correlations were both positive (0.15, 0.13), suggesting that these two pairs of items share something in common over and above their associations with the Generic Skills Scale. As noted in the previous report, the similar wording of items 2 and 5 may be associated with this lack of independence, as might the proximity of items 9 and 10 in the questionnaire.


An Alternative Approach

An alternative approach based on item response theory (specifically a generalised form of the partial credit model)9 was also applied. The computer program Quest (Adams & Khoo, 1993) was used to carry out an alternative analysis of each of the five CEQ scales. Quest assesses the fit of graduates' responses to a slightly generalised form of Masters' (1982) partial credit model. With the possible exception of item 9 in the Generic Skills Scale, the Quest analyses confirm the composition of the five scales identified by the factor analyses. Detailed results have been reported in the Interim Report for CEQ 2000 (Ainley & Johnson, 2001).
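Masters' partial credit model gives, for an item with ordered categories 0…m, the probability of each category as a function of a respondent's latent trait θ and the item's step difficulties. A minimal sketch follows; the θ and step-difficulty values are invented for illustration and are not estimates from the CEQ data:

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities under Masters' partial credit model.
    deltas are the step difficulties; categories run 0..len(deltas)."""
    # Cumulative numerator exponents: sum over completed steps of (theta - delta).
    cumulative = [0.0]
    for delta in deltas:
        cumulative.append(cumulative[-1] + theta - delta)
    numerators = [math.exp(c) for c in cumulative]
    total = sum(numerators)
    return [x / total for x in numerators]

# A five-category Likert item (responses 1-5 recoded to 0-4),
# with four hypothetical step difficulties.
probs = pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.0, 2.0])
print([round(p, 3) for p in probs])
```

Fit statistics of the kind Quest reports compare observed category frequencies with the probabilities this model predicts at each estimated θ.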

Summary

Analysis of data from CEQ 2000 confirms the conclusions from previous analyses. The scales have satisfactory reliability and the structure of the measurement model in the CEQ fits the pattern of responses in the survey data. The results also suggest that it may be possible to identify a dimension concerned with general course satisfaction that influences many of the separate scale scores. That suggests that there may be a common element of general satisfaction underpinning graduate responses. However, for use in exploring features of good practice so as to improve the quality of courses, it is probably more fruitful to use scores related to the separate scales.

9 Using the program Quest developed by Adams and Khoo (1993).


References

Ainley, J. & Johnson, T. (2001). Course Experience Questionnaire 2000: An Interim Report. Melbourne: GCCA.

Ainley, J. & Long, M. (1994). The Course Experience Survey 1992 Graduates. Canberra: AGPS.

AVCC (1995). The AVCC Code of Practice. Canberra: Australian Vice-Chancellors' Committee.

Bruck, D., Hallett, R., Hood, B., MacDonald, I. & Moore, S. (2001). Enhancing student satisfaction in higher education: The creation of staff teaching communities. Australian Educational Researcher, 28 (2), 79-98.

Eley, M.G. & Thomson, M. (1993). A System for Student Evaluation and Teaching. Canberra: AGPS.

Entwistle, N.J. & Ramsden, P. (1983). Understanding Student Learning. London: Croom Helm.

Guthrie, B. & Johnson, T.G. (1997). Study of Non-Response to the 1996 Graduate Destination Survey. Canberra: AGPS.

Linke, R. (1991). Performance Indicators in Higher Education, Vols 1 and 2. Canberra: AGPS.

Long, M. & Hillman, K. (2000). 1999 Course Experience Questionnaire. Melbourne: GCCA.

Long, M. & Johnson, T.G. (1997). Influences on the Course Experience Questionnaire Scales. Canberra: AGPS.

Marsh, H.W. (1987). Students' evaluations of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, 11, 253-288.

Marsh, H.W. & Hocevar, D. (1991). Students' evaluations of teaching effectiveness: the stability of mean ratings of the same teachers over a 13-year period. Teaching and Teacher Education, 7 (4), 303-314.

Marsh, H.W. & Overall, J.U. (1981). The relative influence of course level, course type, and instructor on students' evaluations of university teaching. American Educational Research Journal, 18, 103-112.

Marsh, H.W. & Roche, L.A. (1994). The Use of Students' Evaluations of University Teaching to Improve Teaching Effectiveness. Final Project Report. Canberra: AGPS.

Ramsden, P. (1991a). Report on the Course Experience Questionnaire trial. In R. Linke (Ed.), Performance Indicators in Higher Education, Vol 2. Canberra: AGPS.

Ramsden, P. (1991b). A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education (UK), 16 (2), 129-150.

Ramsden, P. & Entwistle, N.J. (1981). Effects of academic departments on students' approaches to studying. The British Journal of Educational Psychology, 51, 368-383.

Ramsden, P., Martin, E. & Bowden, J. (1989). Social environment and sixth form pupils' approaches to learning. The British Journal of Educational Psychology, 59 (2), 129-142.

Sheridan, B. (1995). The Course Experience Questionnaire as a Measure for Evaluating Courses in Higher Education. Perth: Edith Cowan University, Measurement, Assessment and Evaluation Laboratory.

Trigwell, K. & Prosser, M. (1991). Improving the quality of student learning: the influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22, 251-266.

Wilson, K., Lizzio, A. & Ramsden, P. (1997). The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education, 22 (1), 33-53.


Appendix A: The Course Experience Questionnaire


COURSE EXPERIENCE QUESTIONNAIRE

The purpose of these questions is to collect graduates' perceptions of their courses. Please complete the following questions on the basis of your most recent course of study (as listed in Question 1 of the attached Graduate Destination Survey).

If you have completed a single major only (for example, medicine, engineering, architecture, pharmacy, education, law, physiotherapy), please use the left hand column of numbers only (headed "Major 1"). Name the major to which your responses apply (please write on the dotted line) and circle your responses to the statements below.

If you completed a double degree (such as arts/law or commerce/law) or a double major (for example, English and history, computer science and mathematics, psychology and sociology, biology and zoology), please use both columns of numbers. Name one major at the top of each column (please write on the dotted line) and circle your relevant response to the statements below.

1. It was always easy to know the standard of work expected .............................................................

2. The course developed my problem-solving skills ............................................................................

3. The teaching staff of this course motivated me to do my best work ................................................

4. The work-load was too heavy .........................................................................................................

5. The course sharpened my analytic skills ........................................................................................

6. I usually had a clear idea of where I was going and what was expected of me in this course ..........

7. The staff put a lot of time into commenting on my work ..................................................................

8. To do well in this course all you really needed was a good memory ................................................

9. The course helped me develop my ability to work as a team member .............................................

10. As a result of my course, I feel confident about tackling unfamiliar problems ................................

11. The course improved my skills in written communication ..............................................................

12. The staff seemed more interested in testing what I had memorised than what I had understood ..

13. It was often hard to discover what was expected of me in this course ...........................................

14. I was generally given enough time to understand the things I had to learn ....................................

15. The staff made a real effort to understand difficulties I might be having with my work ...................

16. The assessment methods employed in this course required an in-depth understanding of the course content .............................................................

17. The teaching staff normally gave me helpful feedback on how I was going ...................................

18. My lecturers were extremely good at explaining things ..................................................................

19. Too many staff asked me questions just about facts ....................................................................

20. The teaching staff worked hard to make their subjects interesting ................................................

21. There was a lot of pressure on me as a student in this course .....................................................

22. My course helped me to develop the ability to plan my own work ..................................................

23. The sheer volume of work to be got through in this course meant that it couldn’t all be thoroughly comprehended .............................................................

24. The staff made it clear right from the start what they expected from students ...............................

25. Overall, I was satisfied with the quality of this course ...................................................................

Major 1 [please write on dotted line below]: ...........................
Major 2 [please write on dotted line below]: ...........................

Each of the 25 statements above is rated on a five-point scale, from 1 (Strongly disagree) to 5 (Strongly agree), once in each major's column.

CEQ major 1 code [ ][ ][ ][ ][ ][ ] (office use, 147-152)
CEQ major 2 code [ ][ ][ ][ ][ ][ ] (office use, 178-183)

What were the best aspects of your course? Please write below:

What aspects of your course are most in need of improvement? Please write below:

..........................................................................................................................................................................................

Optional course/faculty variable [ ][ ][ ][ ][ ][ ][ ][ ][ ][ ] - Other variables: ...............................................................................Office use only

[Office use only: data-entry columns 153-218]


Appendix B: The AVCC Code of Practice


Australian Vice-Chancellors' Committee (ACN 008 502 930)

and

Graduate Careers Council of Australia (ACN 008 615 012)

CODE OF PRACTICE

for the public disclosure of data from the

Graduate Careers Council of Australia’s

Graduate Destination Survey, Course Experience Questionnaire

and Postgraduate Research Experience Questionnaire

January 2001, Canberra


Policy Statement

One of the primary functions of the Graduate Careers Council of Australia's Graduate Destination Survey (GDS), Course Experience Questionnaire (CEQ) and Postgraduate Research Experience Questionnaire (PREQ) is to provide feedback to institutions, which, in conjunction with other indicators, may assist planning and the development of quality improvement initiatives. The GDS, CEQ and PREQ are also important in providing information for current and prospective students, to university careers services and to others in the education field (including the Department of Education, Training and Youth Affairs).

The Australian Vice-Chancellors' Committee (AVCC) supports the public disclosure of institutional data derived from the GDS and CEQ under the conditions and guidelines specified in this document. Support for the public disclosure of institutional data derived from the PREQ is contingent on further successful development of the instrument.

General Conditions

• The use of GDS, CEQ and PREQ data in public statements, advertisements or promotional activities should be only for the purpose of assisting the public to develop informed judgements, opinions and choices.

• It follows that the data should not be used in false, deceptive or misleading ways, either because of what is stated, conveyed or suggested, or because of what is omitted.

• Institutions are at liberty to make whatever declarations they feel are appropriate about their own statistical data, provided disclosure accords with the principles above and the guidelines on the interpretation of survey data contained in this Code of Practice.

• Institutions and non-institutional users of survey data must not utilise GDS, CEQ or PREQ data to knowingly undermine the reputation and standing of institutions.

• The use of, or referral to, institutions' data beyond that which is in the public domain requires the prior consent of the institution(s), and prior consultation to ensure accuracy.


• Public comment on the GDS, CEQ and PREQ data must be supported by appropriate interpretation of the data, with any necessary qualifications (e.g. cell size, response rate, special local issues) to be spelled out explicitly.

Optimal Use of Survey Data

Although some institutions and users of the GCCA data may wish to compare survey data across institutions, or to compare individual institution data against national means, such comparisons should only be made after taking into consideration the following guidelines and qualifications concerning the appropriate use of the data and its interpretation.

If comparison between GDS results is made, the most effective level is between like fields of study amongst institutions with similar survey response rates, with like student demographics, and in like labour markets. Even in this context it has to be appreciated that the differing missions of institutions can result in a situation where judgements about, for example, differing percentages of graduates moving into postgraduate study rather than employment, can be invalid. As such, users are advised that in many cases it is inappropriate to make inter-institutional comparisons. The greatest value of the GDS data is likely to be derived when the data are considered over a period of years. The GCCA now has a time-series of over twenty-five years of GDS data.

The most effective CEQ and PREQ comparisons are within an institution, for the same field of study, across several years. Where comparisons are to be made across institutions, the optimal use of the CEQ and PREQ data is in evaluating an institution's courses against comparable courses elsewhere to identify best practice. It is therefore critical that comparisons are made between like courses, in like institutions with similar survey response rates.

Release of Data

Release of data will be at the discretion of GCCA based on advice from the Survey Reference Group.

In principle, GDS and CEQ datasets will be released for bona fide purposes either directly or via the Social Sciences Data Archive (SSDA) in Canberra. Requests for data will need to be accompanied by documentation describing the aims of the research, and users will be required to lodge a copy of any published results with the GCCA.

In instances where institution-level data has not previously been published by GCCA, users of survey data will not be permitted to name institutions in their analyses without the prior consent of the institutions themselves.

Fees (if any) for non-commercial access to data or related documentation will be limited to cost-recovery. Fees for commercial applications of data will be determined at the discretion of GCCA.

Users of survey data, including institutions, are not to pass on the raw national survey data to any third party, other than to those within their own organisation.


Research Ethics

The GDS is conducted within the ethical guidelines laid out in the National Statement on Ethical Guidelines in Research Involving Humans.¹

The rights of the respondent must be respected. In terms of use of data, information should not be used in a manner which identifies individual subjects.

Guidelines for the Interpretation of Survey Data

Some specific points to be taken into consideration when interpreting the survey data are outlined below.

• The GDS, CEQ and PREQ data are not suitable for making simplistic (i.e. unqualified) inter-institutional comparisons. Institutions can have vastly different histories, missions, geographic/socio-economic situations, enrolment profiles (including high percentages of mature-aged, part-time or pre-employed graduates) and course mixes. If comparisons are made across apparently comparable institutions, care should be exercised. Aggregations beyond the field of study level (for example, to total university level) need to be interpreted with caution.

• A total institutional response rate of at least 70% is desirable and achievable for the GDS and the CEQ. Any data which are disclosed publicly should be accompanied by information on the number surveyed and the response rate.² Any GDS or CEQ survey data with an overall institutional response rate below 50% should not be disclosed publicly.

• While individual institutions can generally calculate response rates for the majority of their individual fields of study, for reasons including the incidence of graduates with double majors and/or double degrees, it is not currently possible for GCCA to provide accurate response rate data by individual fields of study. If comparisons are made across institutions at the field of study level, caution should be exercised because of variations in response rates across institutions.

• Due to the variation in cell sizes in particular fields of study, it is not possible to be prescriptive about the interpretation of data where very small numbers of graduates are involved.³ Caution should be exercised when drawing conclusions from small cells with low response rates, and this point should be noted clearly in any reference to such data. CEQ and PREQ data are even more sensitive to cell size than GDS data.

• In interpreting survey data, it is recognised that due to the timing of the surveys, there can be different results for different fields of study, in terms of their graduates' likelihood of being in employment, further study, various modes of compulsory postgraduate training/internship, meeting requirements for professional registration, and so on.

• The GDS is a 'snapshot' survey, producing information on the proportion of graduates involved in various activities, including those in full-time employment, those looking for full- or part-time employment, those going on to further full-time study (including honours year students), and those not actively seeking work, on 30 April (or 31 October for mid-year completers). It is a pertinent source of information regarding the employment experience of graduates but is subject to many influences and should be used as an indicator only. Many graduates classified as 'seeking full-time work' may be waiting for work appropriate to the level of their qualifications, rather than accepting other, less challenging employment. It does not, for instance, examine labour market experience in the period since completion of requirements for a degree, and therefore cannot indicate how long a graduate has been looking for work.

¹ National Health and Medical Research Council, 1999, National Statement on Ethical Guidelines in Research Involving Humans, AGPS.
² The PREQ remains in a development phase during which aggregated national data will be published and institutional response rates will be monitored to determine desirable levels.
³ The GCCA urges caution when dealing with cell sizes smaller than ten.

• Differences in CEQ scores which can be considered worthy of note are those that exceed one-third of the relevant standard deviation.

• Disparities in GDS outcomes by sex are often explained by differences in the mix of enrolments for males and females (e.g. females make up about 10% of engineering graduates but about 70% of humanities graduates). This does not, however, explain some disparities in starting salaries within fields of study, and it indicates the need for considered analysis of sex-based differences in GDS survey outcomes.

• Requirements for professional registration that affect employment practice and starting salaries immediately following graduation mean that it is difficult to devise meaningful national GDS statistics for some fields of study. In the case of law, for example, some graduates attend a postgraduate legal institution for practical professional training, whilst others are employed as articled clerks. Architecture and pharmacy graduates have to undertake a year of supervised employment before registration, which affects starting salary levels more than employment status.

For further information:

Executive Director, Australian Vice-Chancellors' Committee (AVCC)
GPO Box 1142, Canberra, ACT 2601
Tel: (02) 6285 8200

Executive Director, Graduate Careers Council of Australia
PO Box 28, Parkville, Vic, 3052
Tel: (03) 8344 9333


Appendix C: Response Rates of Institutions Participating in GDS 2000

Institution                               Npopln   GDS Nresps   GDS RR (%)   CEQ Nresps   CEQ RR (%)
Australian Catholic University              2725         1874         68.8         1874         68.8
Australian Maritime College                  149           84         56.4           81         54.4
Australian National University              2281         1299         57.4         1296         56.8
Avondale College                             189          140         74.1          136         72.0
Bond University                              746          411         55.1          411         55.1
Central Queensland University               2353          883         37.5          883         37.5
Charles Sturt University                    5421         3096         57.1         3096         57.1
Curtin University of Technology             4705         2366         50.3         2339         49.7
Deakin University                           5816         3307         56.9         3307         56.9
Edith Cowan University                      4531         2653         58.6         2516         55.5
Flinders University of South Australia      2817         1839         65.3         1839         65.3
Griffith University                         5408         3623         67.0         3623         67.0
James Cook University                       1493          750         50.2          747         50.0
La Trobe University                         5878         4185         71.2         4185         71.2
Macquarie University                        4265         2337         54.8         2337         54.8
Marcus Oldham College                         41           27         65.9           25         61.0
Monash University                           7223         4245         58.8         4241         58.7
Murdoch University                          2213         1284         58.0         1283         58.0
Northern Territory University                823          335         40.7          328         39.9
Notre Dame                                   234           66         28.2           65         27.8
Queensland University of Technology         7869         5679         72.2         5679         72.2
RMIT                                        6371         3671         57.6         3655         57.4
Southern Cross University                   2032         1125         55.4         1125         55.4
Swinburne University of Technology          2435         1277         52.4         1275         52.4
University of Adelaide                      3383         1914         56.6         1873         55.4
University of Ballarat                      1344          801         59.6          791         58.9
University of Canberra                      2131         1082         50.8         1067         50.1
University of Melbourne                     7963         4550         57.1         4534         56.9
University of New England                   3131         2184         69.8         2183         69.7
University of New South Wales               7635         4251         55.7         4249         55.7
University of Newcastle                     3340         2387         71.5         2360         70.7
University of Queensland                    6564         3345         51.0         3344         50.9
University of South Australia               5363         3199         59.6         3182         59.3
University of Southern Queensland           2862         1686         58.9         1686         58.9
University of Sydney                        8247         4346         52.7         4241         51.4
University of Tasmania                      2710         1582         58.4         1573         58.0
University of Technology, Sydney            6628         3607         54.4         3591         54.2
University of the Sunshine Coast             282          202         71.6          202         71.6
University of Western Australia             3463         2023         58.4         2014         58.2
University of Western Sydney                7252         3739         51.6         3701         51.0
University of Wollongong                    2740         1516         55.3         1516         55.3
Victoria University of Technology           3217         1615         50.2         1603         49.8
Total                                     156273        90585         58.0        90056         57.6

Note: Response rate calculations are based on the number of survey forms returned. Nvalid = the number of survey forms containing sufficient GDS background information to process. A return to the CEQ is defined as a graduate who has a valid score for at least one of the CEQ scales or the Overall Satisfaction item for either the first or second course on the questionnaire.


Appendix D: Comparison of Characteristics of CEQ 2000 Respondents and the Population of Bachelor Degree Graduates from 1999

Percentage Distribution

CEQ 2000 Respondents Bachelor Degree Graduates 1999

Sex

Male 37.5 41.6

Female 62.5 58.4

Age

24 and younger 62.6 68.8

25-29 years 14.8 13.3

30-39 years 12.5 10.7

40 and older 10.1 7.2

Field of Study

Agriculture 1.4 1.1

Architecture 2.0 2.3

Humanities 26.8 25.0

Business 25.1 27.1

Education 8.5 8.3

Engineering 4.9 6.1

Health 11.8 12.7

Law 3.9 3.8

Science 15.3 16.7

Vet Science 0.2 0.3

Residency

Permanent resident 91.6 84.2

Overseas resident 8.4 15.8

Level of Course

Bachelor honours 87.0 90.1

Bachelor pass 11.8 8.6

Undergraduate diploma 1.2 1.3

Note: Population values are derived from DETYA (2001) Higher Education Statistics Collection. Canberra: DETYA.