The Development of an Extended Course Experience Questionnaire

<ul><li><p>This article was downloaded by: [Memorial University of Newfoundland]. On: 18 July 2014, at: 02:37. Publisher: Routledge. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.</p><p>Quality in Higher Education. Publication details, including instructions for authors and subscription information:</p><p>The Development of an Extended Course Experience Questionnaire. PATRICK GRIFFIN, HAMISH COATES, CRAIG MCINNIS &amp; RICHARD JAMES, The University of Melbourne, Australia. Published online: 03 Jun 2010.</p><p>To cite this article: PATRICK GRIFFIN, HAMISH COATES, CRAIG MCINNIS &amp; RICHARD JAMES (2003) The Development of an Extended Course Experience Questionnaire, Quality in Higher Education, 9:3, 259-266, DOI: 10.1080/135383203200015111</p><p>To link to this article:</p><p>PLEASE SCROLL DOWN FOR ARTICLE</p><p>Taylor &amp; Francis makes every effort to ensure the accuracy of all the information (the Content) contained in the publications on our platform. However, Taylor &amp; Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor &amp; Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.</p><p>This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms &amp; Conditions of access and use can be found at</p></li><li><p>Quality in Higher Education, Vol. 9, No. 3, November 2003</p><p>The Development of an Extended Course Experience Questionnaire</p><p>PATRICK GRIFFIN, HAMISH COATES, CRAIG MCINNIS &amp; RICHARD JAMES, The University of Melbourne, Australia</p><p>ABSTRACT Universities in Australia have been using the course experience questionnaire (CEQ) for a number of years to measure the quality of teaching. Recently, there have been increasing calls for a broader perspective that takes into account the potential of non-classroom context influences on the student learning experience. This project undertook to develop a broader instrument that added a range of scales that could be linked to the existing instrument. A sample of almost 4000 students responded to the trial instrument. The trials suggest that the existing good teaching, generic skills, clear goals, appropriate workload and appropriate assessment scales can be supplemented by the following additional scales: student support, learning resources, learning community, graduate qualities and intellectual motivation. These new scales were shown through Rasch analyses to be psychometrically reliable and accurate.</p><p>The Development of an Extended Course Experience Questionnaire</p><p>There has been a range of methods proposed for evaluating the quality of undergraduate programmes. Research by Marsh (1987) and Ramsden (1991a) suggests that student evaluations are among the most valid and reliable. Despite their shortcomings and potential for misuse, surveys of student perceptions now play a significant role in the higher education sector. An instance of this is the use of graduate evaluation data for comparative purposes in the Good Universities Guide (Ashenden &amp; Milligan, 2000). 
Given the increasing and widespread use of student evaluations, there is a need to monitor and evaluate their effectiveness as tools in the quality management process and, where possible, take action to ensure they provide valid and reliable information about institutional performance.</p><p>Since 1993 the course experience questionnaire (CEQ) has been mailed to every person completing an undergraduate qualification in Australia in the year following completion of their course. The CEQ is intended to probe key elements of the university learning process and, in doing so, obtain data on the quality of teaching and courses. Rather than measuring the multitude of factors that combine to form student experience, the CEQ was developed with an assumption of a strong association between the quality of student learning and student perceptions of teaching. The items and scales are specifically tuned to obtain information on what were considered by Ramsden (1991a, 1991b) to be the defining elements of teaching and its organisation. By considering the extent to which instruction encourages deep, rather than surface, understanding of concepts and materials, the CEQ attempts to provide a domain-neutral indicator of university course quality (McInnis, 1997).</p><p>ISSN 1353-8322 print; 1470-1081 online/03/030259-08 © 2003 Taylor &amp; Francis Ltd. DOI: 10.1080/135383203200015111</p></li><li><p>The CEQ was developed for two principal reasons. First, the instrument was intended to facilitate quality assurance and accountability. The primary and motivating purpose of the instrument is to use student ratings to derive performance indicators of teaching effectiveness in higher education institutions. 
As a direct measure of consumer satisfaction with higher education, the CEQ was developed to produce as economically as possible an ordinal ranking of academic organisational units in different institutions (Ramsden, 1991a, p. 130; 1991b). Second, by internal and comparative evaluation of results, the CEQ outputs are intended to assist institutions with their quality enhancement and improvement processes. Performance indicators have the potential to drive performance, as well as measure it. Analysis of information from ex-students initiates a feedback loop capable of providing valuable information to institutions on the benefits and constraints of particular courses. A market-driven perspective further suggests that comparative evaluation will promote competition and lead to institutional improvements. Although distribution of sensitive performance data may seem to contradict internal commercial objectives, in an increasingly competitive national system there may be mutual benefits if benchmarking activities can objectively reveal areas of weakness and point to mechanisms for improvement. Co-operation and collaboration may motivate more effective competition and consequent quality improvement. A third, largely ex post facto series of functions of the instrument is more student-oriented. In addition to the standardised quantitative results of the evaluation process being used by institutions for marketing purposes, they have also been incorporated into more commercial texts such as the Good Universities Guide (Ashenden &amp; Milligan, 2000). Additionally, and importantly, the survey process provides graduates with a framework and formal means of reflecting on their courses.</p><p>It is clear, given these purposes, that the CEQ must provide information that is both nationally generalisable and locally sensitive. 
The instrument needs to possess qualities that enable national benchmarking across fields of study, be sufficiently detailed to facilitate accurate management of each institution's quality-improvement processes, and provide students with a response framework relevant to their course. The items and scales need to mediate between abstraction and sensitivity and, without fostering an institutional relativism, the instrument needs to provide data that adequately represent the lifeworld of each institution. If quality assurance or management processes are built upon unrepresentative information, innovation and diversity could be hindered. As McInnis (1997, p. 65) argues, universities are inevitably tempted to comply with, or at least drift towards, the parameters set by performance indicators.</p><p>The original CEQ was based on a theory of learning that emphasises the primary forces in the undergraduate experience as located within the classroom setting. However, concentrating analysis on what happens in the classroom fails to account for a significant part of the undergraduate experience. As delivery modes expand and universities increasingly search for improved ways of providing a quality higher education experience, an instrument limited to classroom interactions is increasingly inadequate. If the standard instrument for course assessment is not able to truly measure the student experience across diverse settings, it may be a potential impediment to innovation and distinctiveness.</p><p>Two alternatives for modifying the CEQ provide a possible means of addressing this constraint. The first involves emphasising the broad quality assurance function of the existing instrument, but allows institutions to use or add their own more context-specific instruments to gather institutionally unique quality improvement information. Splitting the quality management process in this way may have a number of limitations. 
For instance, both the power and generality of the current instrument could be undermined, and the lack of broader contextualisation of the institutionally specific additional items may distort findings. Promoting the use of institutionally based instruments also magnifies the threat of relativism, because innovative or distinctive approaches in teaching may not be readily measurable against conventional criteria. Further, it may encourage a limited understanding of the salient characteristics of the undergraduate experience and promote a narrowing of focus for policy and planning.</p></li><li><p>A second alternative for change guided the current extension process and involved extending the CEQ to address a more holistic range of factors in the contemporary undergraduate experience. This perspective recognised that students' learning experiences are embedded in a number of contextual situations that provide a dynamic web of influences, implying that the quality and outcomes of undergraduate study are composed of more than the quality of teaching alone (Pascarella, 1991, p. 458). It is supported by the assumption that a significant part of university experiences and learning outcomes is affected by what happens outside the classroom. A large body of research indicates that students' out-of-class experiences may have as much influence on the development of higher-order cognitive skills as do their more formal, classroom-based instructional experiences (Pascarella &amp; Terenzini, 1991, 1998). Researchers such as Astin (1993), Pace (1979), Pascarella (1985), Terenzini and Wright (1987) and Tinto (1987) stress that engagement is fostered by a supportive learning environment. 
McInnis (1997) and McInnis and James (1995) also showed that these findings were relevant to the contemporary Australian context.</p><p>It became clear that the existing CEQ instrument needed to be extended to incorporate the social, technological, interpersonal and resource aspects of the university undergraduate experience. If this were achieved, universities would have a broader range of items and sub-scales with which to reflect how input characteristics affect the undergraduate experience. With this motive, an exercise to extend the current CEQ was undertaken. Psychometrically modelled to resemble the original CEQ in a number of key respects, the new instrument was intended to connect current trends in educational theory with an understanding of the relevant aspects of current undergraduates' experiences, and to provide stable and reliable measurement of constructs not measured by the current instrument. A number of areas salient in the contemporary undergraduate experience but not probed by the current CEQ were identified. These included:</p><ul><li>learning climate and intellectual environment;</li><li>social dimensions of learning;</li><li>provision and utilisation of resources to encourage and support independent learning;</li><li>guidance and support for students to encourage increasing academic independence;</li><li>analysis of higher-order graduate outcomes beyond generic skills;</li><li>enhancement of graduates' intellectual stimulus and challenge;</li><li>recognition of the growing importance of information technology;</li><li>acknowledgement of the international context of learning.</li></ul><p>Instrument Development Methodology</p><p>Extending the CEQ meant that a new version of the questionnaire was needed. Like its predecessor, the new instrument was required to be accurate, reliable and to have known tolerance or error limits. Wright and Masters (1982) defined the characteristics that measurement should have. They listed four requirements: direction, order, magnitude and replicable units. 
Each sub-scale must measure a specifically defined trait or domain of interest.</p></li><li><p>Just as the initial CEQ focused on specific areas of the learning experience of the undergraduate, the extended instrument would be required to measure additional domains or traits, and to do so with at least the same level of accuracy. Estimating these characteristics is called calibration. As Thurstone (1927) demanded, the trait being measured should not affect the measurement instrument, and the measurement instrument should not affect the trait being measured. Furthermore, the measure obtained of the trait should not be affected by which instrument is used, given that we know the error and accuracy levels of the instrument. Any one of a set of equivalent instruments should give a measure of the trait consistent with measures obtained with any other equivalent instrument. If a scale is affected by the people who use it, or who are assessed by it, its validity is threatened. Within the range of objects for which the measuring instrument is intended, its function must be independent of the object of measurement (Thorndike, 1971). This property is known as specific objectivity. For specific objectivity to hold, a measurement must be independent of the measuring instrument and the instrument must function independently of the traits measured.</p><p>Developing the Scales</p><p>Developing the items and the sub-scales involved several stages. The first stage involved a series of stakeholder meetings, student focus group...</p></li></ul>
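The calibration and specific-objectivity requirements discussed in the methodology section are those of the Rasch family of models used in the trials. As an illustrative sketch only (the article does not specify which Rasch variant or software was used), the Andrich rating-scale model commonly applied to Likert-style questionnaire items expresses the probability of a respondent choosing each response category in terms of a person measure, an item difficulty, and a set of category thresholds shared across a sub-scale, all on a common logit scale. The function name and example values below are hypothetical:

```python
import math

def rating_scale_probs(theta, delta, thresholds):
    """Response-category probabilities under the Andrich rating-scale
    Rasch model. `theta` is the person measure, `delta` the item
    difficulty, and `thresholds` the category thresholds shared by all
    items on a sub-scale; all values are in logits."""
    # Cumulative sums of (theta - delta - tau_k); category 0 scores 0.
    sums = [0.0]
    for tau in thresholds:
        sums.append(sums[-1] + (theta - delta - tau))
    # Normalise the exponentiated sums so the probabilities sum to one.
    exps = [math.exp(s) for s in sums]
    total = sum(exps)
    return [e / total for e in exps]

# A respondent 1 logit above an item of average difficulty, responding
# on a five-point scale (four thresholds => five categories):
probs = rating_scale_probs(theta=1.0, delta=0.0,
                           thresholds=[-2.0, -0.5, 0.5, 2.0])
assert abs(sum(probs) - 1.0) < 1e-9  # probabilities sum to one
```

Because the person and item parameters enter only through their difference, comparisons between two persons do not depend on which items are administered; this is the specific objectivity property that the authors require of each sub-scale.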