Page 1: Conceptualising and measuring student engagement through the Australasian Survey of Student Engagement (AUSSE): a critique

This article was downloaded by: [Northeastern University] on 15 November 2014, at 20:48. Publisher: Routledge. Informa Ltd, registered in England and Wales, registered number 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Assessment & Evaluation in Higher Education. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/caeh20

Conceptualising and measuring student engagement through the Australasian Survey of Student Engagement (AUSSE): a critique
Pauline Hagel a, Rodney Carr a & Marcia Devlin b
a Faculty of Business and Law, Deakin University, 221 Burwood Highway, Burwood, Melbourne 3125, Australia
b Higher Education Research Group, Deakin University, 221 Burwood Highway, Burwood, Melbourne 3125, Australia
Published online: 25 Mar 2011.

To cite this article: Pauline Hagel, Rodney Carr & Marcia Devlin (2012) Conceptualising and measuring student engagement through the Australasian Survey of Student Engagement (AUSSE): a critique, Assessment & Evaluation in Higher Education, 37:4, 475-486, DOI: 10.1080/02602938.2010.545870

To link to this article: http://dx.doi.org/10.1080/02602938.2010.545870




Assessment & Evaluation in Higher Education, 2011, 1–12, iFirst Article

ISSN 0260-2938 print/ISSN 1469-297X online
© 2012 Taylor & Francis

http://www.tandfonline.com

Conceptualising and measuring student engagement through the Australasian Survey of Student Engagement (AUSSE): a critique

Pauline Hagel a*, Rodney Carr a and Marcia Devlin b

a Faculty of Business and Law, Deakin University, 221 Burwood Highway, Burwood, Melbourne 3125, Australia; b Higher Education Research Group, Deakin University, 221 Burwood Highway, Burwood, Melbourne 3125, Australia

Student engagement has rapidly developed a central place in the quality agenda of Australian universities since the introduction of the Australasian Survey of Student Engagement (AUSSE). The AUSSE is based on one developed in the USA. The main arguments given for adopting this survey in Australia are that it provides a valid instrument for measuring engagement and that it enables international comparisons. However, the survey instrument and scales have been adopted with little scrutiny of these arguments. This paper examines these arguments by considering different perspectives of engagement, examining the importance of contextual differences and evaluating the AUSSE engagement scales in the light of both. The paper concludes that the AUSSE results should be used by universities and policy-makers with caution.

Keywords: student engagement; AUSSE; NSSE; learner engagement; time on task; post-secondary education

Introduction

Student engagement has quickly developed a central place in the quality agenda of Australian universities since the introduction of the Australasian Survey of Student Engagement (AUSSE) in 2007. Forty-five institutions in Australasia are expected to participate in the 2010 survey (ACER 2010). While the speed of its acceptance has been remarkable, the AUSSE has pedigree. It is based on a survey developed over a decade ago in the USA: the National Survey of Student Engagement (NSSE).

The AUSSE is one of several surveys used to assess student outcomes in Australian higher education. The use and reporting of the AUSSE is not mandatory for Australian universities. However, its use was recommended by the Australian government’s major review of higher education completed in 2008. The government subsequently signalled its intention to instigate a survey to investigate the engagement and satisfaction of first-year students and to include measures of student engagement in funding arrangements for Australian public universities (DEEWR 2009a). Consequently, Australian universities are participating in the AUSSE in the expectation that they will be held accountable by the government for their performance in relation to student engagement. The Australian Council for Educational Research (ACER) has led and promoted the adoption of the AUSSE to measure student engagement in Australasia.

*Corresponding author. Email: [email protected]

Vol. 37, No. 4, June 2012, 475–486

http://dx.doi.org/10.1080/02602938.2010.545870


The main arguments given for using the AUSSE are that it provides a valid instrument for measuring engagement (Coates 2010) and that it enables international comparisons (ACER 2008a). However, this paper contends that there are limitations in the way the AUSSE conceptualises and measures engagement that warrant caution in the use of its results. After providing a brief overview of the AUSSE instrument, the argument of the paper is developed in two main stages. First, the paper identifies and discusses alternative conceptions of engagement and considers the implications of differences between the US and Australian contexts for conceptualising and measuring student engagement. Second, the paper evaluates the AUSSE scales in relation to conceptions of engagement and contextual differences. The paper concludes by considering some implications for research and practice.

Overview of the AUSSE

The AUSSE surveys undergraduates to determine the extent to which they are involved ‘with activities and conditions likely to generate learning’ (Coates 2010, 3). It targets onshore, first- or final-year students who have not previously completed or been enrolled in a university degree (Coates 2010). The survey uses a questionnaire (the ‘Student Experience Questionnaire’) that asks students about how often and/or the extent to which they have experienced certain activities and conditions while at university. The AUSSE also includes some ‘outcome’ scales: higher-order thinking, general learning outcomes, general development outcomes, average overall grades, departure intentions and overall satisfaction. However, this paper is concerned only with the ‘engagement’ scales of the AUSSE. (A copy of the full questionnaire can be found in ACER 2010.)

There are 48 questions in the AUSSE that are grouped into six engagement scales:

● academic challenge;
● active learning;
● student and staff interaction;
● enriching educational experiences;
● supportive learning environments; and
● work-integrated learning.

The first five of these scales were adopted from the NSSE with some minor changes in wording. The ‘work-integrated learning’ scale was developed by the ACER for the AUSSE. These scales are proposed as ‘benchmarks’ of effective educational practice (Kuh 2009). Combined, the six scales provide a conception of student engagement, but which theory of engagement do they reflect and whose conception is it?

Alternative conceptions of engagement

In discussing engagement in pre-tertiary education, Vibert and Shields (2003) highlight three different ideological perspectives. First, from the rational/technical perspective, the role of education is to prepare students for life after formal education. Therefore, student engagement is a matter of involving students in useful and productive activities determined by educators and guided by government policy or societal expectations. From this perspective, engagement should be and is measurable using ‘objective’ survey instruments such as the AUSSE. Second, the interpretive/student-centred perspective suggests that engagement means more than just dutiful, busy and compliant behaviour on the part of students; rather, students must have autonomy, choice and control in order to be genuinely engaged. Finally, the critical/transformative perspective focuses on how students engage to transform themselves and society. The critical perspective asks hard questions about engagement such as ‘for what?’ and ‘for whom?’ (Vibert and Shields 2003).

Those adopting a critical/transformative perspective interrogate the meaning of engagement in higher education in relation to its opposite condition – alienation (e.g. Case 2008; Mann 2001). Students who comply and engage from a functional perspective may, in fact, be alienated to the extent that they ‘play’ the assessment game, perform to standards set by others and/or feel forced to attend university in the first place due to societal expectations (Mann 2001) or labour market conditions. Students may also be alienated by credentialism and classroom pedagogy (London, Downey, and Mace 2007). From the critical perspective, failure to consider the ideological basis of approaches to engagement may lead to impoverished conceptions of engagement and measurement approaches that reflect the concerns of the dominant elite.

In addition to the ideological perspective, engagement can be conceptualised as occurring in several dimensions, typically behavioural, emotional and cognitive dimensions (Fredricks, Blumenfeld, and Paris 2004). Bryson and Hand (2007) make a similar distinction between engagement that is active, relational or oriented towards learning. A behavioural focus is concerned with the extent to which students exhibit positive behaviour towards academic, social, institutional and extra-curricular activities. The emotional perspective is concerned with the affective response and connection of students to their teachers, peers and institutions. Finally, the cognitive perspective is concerned with the mental investment that students make in learning (Fredricks, Blumenfeld, and Paris 2004). The value of conceiving engagement as multidimensional is that it recognises that a student’s behaviour, affective response and cognition are linked (Fredricks, Blumenfeld, and Paris 2004).

The conceptions of engagement discussed above are consistent with a definition of student engagement as a persistent and general condition that characterises how students relate to their university experience. This view of engagement underlies much of the research into student engagement at university (Bryson and Hand 2007; Steele and Fullagar 2009). However, engagement may also be defined in terms of a short-term absorption in a specific activity. This phenomenon has also been termed ‘flow’ (Steele and Fullagar 2009).

Flow theory derives from positive psychology and the work of Csikszentmihalyi (1990). Csikszentmihalyi’s (1990) concept of ‘flow’ describes what happens when people are involved in activities that command their total involvement, concentration and absorption. Conditions that support flow include a balance between the challenge of the task and the skills of the student, clear goals on the part of the student and individual autonomy (Steele and Fullagar 2009). Consequently, flow theory is consistent with a student-centred perspective on engagement (Vibert and Shields 2003) and encompasses its cognitive, behavioural and affective dimensions. Flow theory suggests an alternative means of operationalising engagement. It suggests that engagement is not merely the sum of the useful and productive activities that students experience. Rather, engagement derives from the quality, nature and depth of these activities.

In summary, student engagement is a complex, multifaceted construct (Fredricks, Blumenfeld, and Paris 2004). It is not value-free; it encompasses behavioural, emotional and cognitive elements. Further, engagement can be conceived as a deep, flow experience or as involvement in a broad range of activities. The contested nature and complexity of the concept provide considerable challenges for researchers in investigating student engagement. Further challenges arise when conceptions of student engagement are transplanted from one national context to another.

Origins of the AUSSE: whose theory of student engagement?

The NSSE was first piloted in 1999 in the USA but was based on much earlier questionnaires that assessed various aspects of college experience (Kuh 2009). The NSSE was developed as a counterpoint to traditional concepts of institutional quality based on prestige, staff qualifications or academic selectivity (Astin 1985; Carini, Kuh, and Klein 2006; LaNasa, Cabrera, and Trangsrud 2009) and to measure the contribution that colleges made to students (Kuh 2009). The NSSE scales were designed to focus attention on good teaching in undergraduate education (Kuh 2009) and did so by operationalising ‘the seven principles’, which include student–faculty contact; cooperation among students; active learning; prompt feedback; time-on-task; high expectations; and respect for diverse talents and ways of learning (Chickering and Gamson 1999).

The college experience assessed through these early instruments and, currently, the NSSE is informed by particular conceptions of the role of undergraduate education and its significance in students’ lives. Historically, the focus of college education has been to ‘promote student growth and development of multiple and distinctive abilities and interest domains’ (Feldman, Smart, and Ethington 2004, 531). In addition to the formal learning experience, ‘college life’ has been conceived as central to the experience of going to college in the USA (Moffatt 1991).

While the USA has very diverse arrangements and institutions for post-secondary education, the NSSE was designed originally to evaluate a traditional college experience. This is evidenced by the fact that in 2009, the NSSE was administered in the USA in eight of the 33 categories into which US post-secondary institutions are classified by the Carnegie Foundation for the Advancement of Teaching (i.e. the Basic2005 Classification). These eight categories comprise institutions that offer four-year undergraduate baccalaureate programmes and range from ‘research universities’ with high to very high research activity, through ‘Master’s colleges and universities’ with ‘smaller’ to ‘larger’ programmes, to ‘baccalaureate colleges’ that focus on either ‘arts and sciences’ or ‘diverse fields’ (NSSE 2009).

A total of 617 institutions participated in the 2008–2009 NSSE survey (NSSE 2009). These 617 institutions comprised approximately 38% of all institutions included in the eight Basic2005 Classifications referred to above. The Carnegie Foundation for the Advancement of Teaching (2010) reports demographic and other data about all post-secondary institutions in the USA. According to this source, in 2010, the average enrolment of the colleges and universities in the above-mentioned eight classifications was 6200 students, while 40% of these had total enrolments of only 1700 students. Thirty-five per cent were in public control. Approximately 62% of the institutions had at least 25% of their students living in residences on campus and, at half of these residential colleges, at least 50% of students lived in residences (Carnegie Foundation for the Advancement of Teaching 2010).

As already noted, the NSSE instrument is not administered in all post-secondary institutions in the USA. Fourteen of the Basic2005 Classification categories encompass the ‘community colleges’. These colleges use an adaptation of the NSSE – the Community College Survey of Student Engagement (CCSSE). The CCSSE contains many of the existing NSSE items. However, it retains only three of the five NSSE engagement scales. Notably, the CCSSE includes items that relate to technical education and student support and excludes items that assume on-campus residency (CCSSE 2010). That is, the CCSSE has been considerably adapted to reflect the differences between the community and four-year colleges in their missions, student populations and resources (CCSSE 2010).

In contrast to the CCSSE, the AUSSE retains all five engagement scales from the NSSE. Apart from minor changes in wording, the items of each scale are also the same as those in the NSSE. In adopting the same NSSE scales and item sets, the implicit assumption is that these are as appropriate for investigating student engagement at Australian universities as they are for the four-year colleges and universities in the USA. However, there are considerable differences between Australian universities and the four-year colleges in the USA. These differences are demonstrated by examining the nature of the institutions that participate in or comprise the target population of the AUSSE.

The AUSSE was administered in 30 Australian universities in 2009 (AUSSE 2010), which represented about 80% of the universities in Australia. The average enrolment in an Australian public university is approximately 24,000 and 95% of Australian universities are public institutions (DEEWR 2009a). Australian universities are internally diverse: all have a relatively broad mix of disciplines, course levels and types (i.e. undergraduate, postgraduate, professional degrees and research programmes).

Further differences exist between the NSSE and the AUSSE targeted institutions. The four-year undergraduate programmes offered by the colleges that participate in the NSSE typically provide two years of liberal arts studies followed by two years of specialisation. In Australia, by comparison, students most commonly enrol in three-year degrees and specialise in their chosen discipline from their first year. Approximately 90% of Australian undergraduates commute to university and live at home or in shared accommodation (ACER 2008b). Australian students are much more likely to have paid work, to work longer hours and to work off campus than their NSSE counterparts (ACER 2008b). Table 1 provides a summary of the differences between the institutions in which the NSSE and AUSSE are administered.

Table 1. Characteristics of participating institutions in the NSSE and AUSSE, 2009.

Characteristic                                  NSSE                              AUSSE
Number of participating institutions (a,b)      617                               30
Average student enrolment per institution (c)   6200                              24,000
Per cent in public control (d)                  35                                95
Nature of course                                Four-year degree; liberal arts    Three-year degree; specialist discipline
Breadth of degrees and course levels            Narrow in degree type and         Broad in degree type and
                                                course level                      course level
Students (e)                                    Predominantly resident on         Predominantly commuter students;
                                                campus; paid work on campus       paid work off campus

Notes: (a) These figures do not include the non-US institutions that participate in the NSSE or those from New Zealand that participate in the AUSSE. (b) The figure reported for the NSSE excludes the 17% of community colleges that participated in the CCSSE in 2009. (c) The US data are from the Carnegie Foundation for the Advancement of Teaching (2010); Australian data are from DEEWR (2009a). (d) The figure for the AUSSE is from DEEWR (2009a) and includes only institutions designated as universities. (e) Data for the US institutions are based on information and data from the Carnegie Foundation for the Advancement of Teaching (2010) and, for Australian institutions, the ACER (2008a).

Empirical studies of the NSSE show that the residential nature of an institution has a positive effect on engagement, while factors such as institutional size, coexistence of both undergraduate and postgraduate students and the strength of a research focus have a negative effect (McCormick et al. 2009). These findings, in addition to the differences outlined between the national contexts, make both the use of the same scales and the legitimacy of the international comparison questionable. Arguably, the AUSSE should have been adapted more extensively to reflect the Australian context, in the same way that the CCSSE was adapted to reflect the distinctive role and nature of the community colleges in the USA. In particular, the large size, commuter nature and internal diversity of Australian universities demanded more consideration in designing an instrument to evaluate the engagement of their undergraduate students.

Evaluation of the AUSSE/NSSE scales

So far this paper has discussed different concepts of student engagement and has questioned the assumption that the conception and measurement of engagement developed in the USA for the NSSE translate easily to the Australian context. In this section of the paper, each of the six AUSSE scales is examined in relation to both different conceptions of engagement and the contextual differences highlighted in the preceding discussion.

The first scale, academic challenge, is defined as the ‘extent to which expectations and assessments challenge students to learn’ (ACER 2010, ix). It comprises 11 items which take two different forms. One form distinguishes ‘how much’ students have experienced certain learning activities, including analysing, synthesising, making judgements and applying theories. This scale seeks to capture ‘quality of effort’ as an important facet of engagement (Kuh 2009). Few would argue against the notion that such activities are important in higher education.

The second form of items focuses on written papers, length of papers, repetition and time-on-task, and reveals some gaps and issues of interpretation. First, while written tasks remain critical to student learning, some legitimate and valuable forms of assessment, such as the oral or performance-based, are omitted from this scale. Higher education has diversified to encompass a wide variety of disciplines, students and types of learning. Such diversification necessitates forms of assessment that are authentic for each discipline and course. Second, the ‘academic challenge’ scale emphasises repetition and time-on-task. These have a legitimate purpose in learning but in some circumstances may represent ‘busyness’ rather than engagement, and surface rather than deep learning (Vibert and Shields 2004). Third, among the set of items in the ‘academic challenge’ scale there are none that reflect alternative theories of engagement, such as flow theory or an interpretive/student-centred perspective, which suggest that student control and autonomy are necessary conditions for genuine engagement (Case 2008; Steele and Fullagar 2009; Vibert and Shields 2004). Rather, the scale contains one item that asks ‘How often have you worked harder than you thought you could to meet a teacher’s/tutor’s standards or expectations?’ This question suggests engagement on the teacher’s terms and may represent an unequal power relationship (Case 2008) that positions students as passive recipients of learning ‘products’ (Rochford 2008). Finally, it cannot be assumed that students who answer ‘very often’ to some of the questions in this scale are necessarily highly engaged. Rather, a response of ‘very often’ may indicate that students are compliant and/or instrumental as they pursue academic approval, grades or a return on their financial investment (Case 2008; London, Downey, and Mace 2007; Mann 2001; Rochford 2008).

Active learning refers to ‘students’ efforts to actively construct their knowledge’ (ACER 2010, ix). This scale reflects a social constructivist theory of learning. It comprises seven items that ask students about how often, for example, they work with others or participate in voluntary activities. The scale captures some important aspects of engagement, but it also exhibits some gaps and presents some issues in interpreting the behaviours it measures. For example, one item asks about how often students ask questions or contribute to discussions. Asking questions and discussing ideas with others can be critical for students in clarifying their knowledge and extending their understanding. However, students also construct their knowledge through processes of reflection (Chi 2009). This is particularly true for adult learners and those studying online or by distance education, who often study independently.

Further, while question-asking is a useful behaviour for students to clarify their understanding, it can also be indicative of other motives that may not be about constructing knowledge. For example, a study of law students by London, Downey, and Mace (2007) revealed that students may ask questions or contribute to discussions to validate their intellectual abilities, to demonstrate superiority and/or to impress peers and teachers. Additionally, disciplines vary in the extent to which they encourage and reward questioning and discussion. For example, in the humanities, where knowledge is more contestable, question-asking and discussion will be promoted by the pedagogy and may well be a sign of active learning (Parpala et al. 2010), but this is not necessarily the case for areas of science, engineering and business (Brint, Cantwell, and Hanneman 2008).

In summary, if asking questions does not always signal engagement, then the opposite is also true: failure to ask questions may not signal disengagement. Rather, the absence of questioning could indicate preferences for independence, self-regulation and/or reflection. The absence of control and autonomy in the AUSSE scales has been noted already; the concept of reflection is also missing. Arguably, reflection is integral to students’ attempts to be active learners.

Student and staff interaction is defined as the ‘level and nature of students’ contact with teaching staff’ (ACER 2010, ix). The items in this scale focus on instrumental exchanges between staff and students about assignments and careers. This scale includes some of Chickering and Gamson’s (1999) principles of good teaching: student–teacher contact and prompt feedback from teachers. Feedback is important in supporting student engagement. However, alternative theories of engagement suggest that this scale could overlook other important aspects of engagement. First, flow theory suggests that feedback that enhances engagement can take other forms or derive from sources other than teachers; for example, clarity of instructions can substitute for feedback (Steele and Fullagar 2009). Second, feedback can come from the activity itself, from peers and/or from technology. In particular, the widespread and significant use of online technologies and learning management systems in higher education has democratised and extended the sources of feedback. Possibly, the influence of peers in providing supportive feedback is captured in the ‘supportive learning environments’ scale. However, the influence of feedback generated by the task or by technology is not.


Supportive learning environment is defined as ‘feelings of legitimacy within the university community’ (ACER 2010, ix). This scale seeks to capture the nature of the relationships that students have with academic staff, other employees of the university and fellow students that are not directly related to their learning programme. These relationships are important in providing emotional support to students (Fredricks, Blumenfeld, and Paris 2004) and, as noted earlier in the discussion, may also be important additional sources of feedback to students. The inclusion of this scale provides a valuable counterbalance to the more instrumental focus of the other engagement scales.

To complement the academic activities of the university, the enriching educational experiences scale investigates students’ ‘participation in broadening educational activities’ (ACER 2010, ix). The 12 items of this scale reflect a view that engagement should encompass a variety of extra-curricular activities, such as practicums, study abroad schemes, foreign language studies and interactions with people who are ‘different’ from the respondent. This scale captures important aspects of engagement; however, there are some contextual issues to consider in using the scale and interpreting item scores. First, while some items capture aspects of engagement that are important for residential students who live and work on campus, fewer relate to students who commute to university and work off campus. Second, given the widespread use of learning management systems in Australian universities, having ‘used an online learning system to discuss or complete an assignment’ may not necessarily represent an enriching experience. However, this question could be extended to capture other aspects of online engagement, including using the technology for student-organised activities and study groups. Finally, from a critical perspective, there may be difficulties in interpreting the meaning of high or low scores on some items. For example, some students may experience or perceive barriers to engaging in particular activities due to economic or social constraints, their personality and/or their minority status. Low scores may suggest these students are not engaged. An alternative interpretation is that students may be highly engaged with their study but fail to take up the opportunity of foreign study or extra-curricular activities because they lack the economic and/or social capital to do so.

The final engagement scale in the AUSSE is work-integrated learning, which refers to the ‘integration of employment-focused work experiences into study’ (ACER 2010, ix). This is a scale specifically developed for the AUSSE. For many students, the integration of work and study is highly engaging. However, the questions in this scale are predominantly framed to investigate how students engage with work rather than how working adults engage with learning. Clearly, an assumption is made about the demographics of the ‘normal’ student. Further, by emphasising the skills and knowledge acquired, the scale also underplays the full meaning that work may have for some students (Cheng and Alcantara 2007; Muldoon 2009).

In summary, the AUSSE engagement scales reflect a predominantly functional ideology and are concerned with the general behaviour of students at university and how this behaviour is influenced by institutional factors. These scales capture many important aspects of the academic, extra-curricular and social activities experienced by students. They provide a useful means for promoting a discussion about student engagement within the higher education sector (Kuh 2009). They also serve to focus the attention of academics and universities on aspects of their performance that can be improved to enhance the engagement of their students (Pike 2006). However, these scales have some limitations, which take three main forms. First, there is the omission of some key concepts such as autonomy and reflection and features such as feedback from non-teacher sources and non-written assessment forms. Second, some of the items require further adjustment for the Australian context. Finally, from a critical perspective, the scales may be limited in the extent to which they can illuminate the condition of engagement (or alienation) of all students. Students who appear to be engaged may be merely compliant and busy; others may be disengaged through lack of choice or power.

Further issues: findings from the empirical literature on the validity of the engagement scales

As mentioned previously, the Australian Federal government has indicated that it plans to include measures of student engagement in funding arrangements for Australian public universities. Underlying this is an aim to increase the number of Australians holding a higher degree: the government has stated that by 2025, 40% of all 25–34-year-olds will hold a qualification at bachelor level or above. The government has recognised that achieving this target requires an emphasis on improving retention, progress and completion rates (DEEWR 2009a). However, links between student engagement as measured by the AUSSE engagement scales and outcomes such as retention, progress and completion may be tenuous at best: studies of the predictive validity of the NSSE engagement scales reveal weak to modest results (Carini, Kuh, and Klein 2006; Gordon, Ludlum, and Hoey 2008; Kuh 2004).

Small but significant relationships of between .02 and .17 have been found between the scales and measures of academic performance including grade point average (GPA) (Carini, Kuh, and Klein 2006; Kuh 2004). Pascarella, Seifert, and Blaich (2010) found some evidence that the NSSE scales were related to outcomes such as critical thinking, moral reasoning and intercultural effectiveness. However, for the most part the scales have been found to have ‘minimal explanatory power’ (Gordon, Ludlum, and Hoey 2008, 19). Further, some studies have found relationships that are the reverse of what is predicted by the concept of engagement that underlies the NSSE. For example, Gordon, Ludlum, and Hoey (2008) found an inverse relationship between ‘enriching educational experiences’ and the GPA for first-year students and a negative relationship between senior students’ GPA and ‘faculty-staff relationships’. Pike (2006) found a negative relationship between ‘enriching educational experiences’ and gains in practical skills. Additionally, single items within scales have been found to have the reverse relationship to outcomes. For example, Gordon, Ludlum, and Hoey (2008) found that ‘discussion of readings with staff’ was negatively related to first-year GPA, and student–staff discussion outside of class was negatively related to first-year retention. Most recently, Pascarella, Seifert, and Blaich (2010, 19) stated that ‘the bottom line is that we have, at present, very little internally valid evidence with respect to the predictive validity of the NSSE’. While similar studies of the predictive validity of the AUSSE scales have yet to be published, there is little reason to assume that the performance of the AUSSE scales would differ markedly from that of the NSSE, given the close similarities between them.

To summarise, with the exception of ‘supportive learning environments’ and, to a lesser extent, ‘academic challenge’, the empirical evidence does not provide strong support for the predictive validity of the NSSE scales. There is only weak evidence that these scales are linked to outcomes of importance for the Australian government.


Conclusion and implications

It seems that by borrowing its student engagement scales from the USA, Australia has adopted a conception of student engagement and a measurement instrument that fail to capture some important aspects of engagement. There are contextual differences between the higher education systems of the two countries that raise questions about how well the scales apply to undergraduate students currently attending Australian universities.

Further, given the complexity (and contrariness) of the relationships between engagement and outcome measures, it is critical that data from the AUSSE survey are not misused by policy-makers and university management. While universities should be accountable for the quality of the student experience they provide, their different contexts and missions must be acknowledged in interpreting their performance as measured by the AUSSE. Notwithstanding the measurement issues, performance data from the AUSSE are best used internally by universities to develop initiatives and drive change in ways that are consistent with their individual contexts and missions (Carle et al. 2009). However, in doing so, the evidence suggests that the scales may mask areas that need improvement. In these circumstances, universities could consider selecting relevant single items to monitor improvements (Carle et al. 2009). This is particularly true for the ‘enriching educational experiences’ scale, which contains a range of practices, not all of which may be relevant to the institution’s mission or context. Additionally, universities need to be careful in making internal, cross-disciplinary comparisons – it is not at all clear that the nature of engagement is, or ought to be, the same across disciplines.

The findings of this study point to gaps in our understanding of engagement and how it is measured. Empirical research is required to examine whether refinements to the scales and the inclusion of additional aspects of student engagement improve their predictive validity. Such research may also contribute to the development of a new instrument to investigate the first-year experience, as recently proposed by the Australian government (DEEWR 2009b). However, without conducting research from different ideological perspectives, it is unlikely that student engagement and the contribution it makes to important outcomes such as learning, progression and retention can be fully understood.

Notes on contributors

Dr Pauline Hagel is an independent education consultant. She was formerly a senior lecturer in management and an associate head of School Teaching and Learning at Deakin University. Her research and expertise include student university choice, student engagement, assessment, curriculum development and online teaching and learning.

Dr Rodney Carr is a senior lecturer in the Deakin Graduate School of Business, Deakin University, Australia. His research is about ‘engagement’ in general. His areas of particular interest include aspects of student engagement in learning and engagement of community members in collective actions.

Prof. Marcia Devlin is the inaugural chair in Higher Education Research at Deakin University, Australia. Her research interests and expertise span higher education policy, equity, standards, leadership, interdisciplinarity, teaching and learning, and student engagement and learning.

References

ACER. 2008a. Attracting, engaging and retaining: New conversations about learning. Australasian student engagement report. Melbourne: ACER.

ACER. 2008b. Respondent characteristics reports – Deakin University. Melbourne: ACER.

ACER. 2010. Doing more for learning: Enhancing engagement and outcomes. Australasian student engagement report. Melbourne: ACER.

Astin, A.W. 1985. Achieving educational excellence: A critical analysis of priorities and practices in higher education. San Francisco, CA: Jossey-Bass.

Brint, S., A.M. Cantwell, and R.A. Hanneman. 2008. The two cultures of undergraduate academic engagement. Research in Higher Education 49, no. 5: 383–402.

Bryson, C., and L. Hand. 2007. The role of engagement in inspiring teaching and learning. Innovations in Education and Teaching International 44, no. 4: 349–62.

Carini, R., G.D. Kuh, and S. Klein. 2006. Student engagement and student learning: Testing the linkages. Research in Higher Education 47, no. 1: 1–32.

Carle, A.C., D. Jaffe, N.W. Vaughan, and D. Eder. 2009. Psychometric properties of three new National Survey of Student Engagement based engagement scales: An item response theory analysis. Research in Higher Education 50, no. 8: 775–94.

Carnegie Foundation for the Advancement of Teaching. 2010. Carnegie Classifications Data File. http://classifications.carnegiefoundation.org/resources/.

Case, J.M. 2008. Alienation and engagement: Development of an alternative theoretical framework for understanding student learning. Higher Education 55: 321–32.

CCSSE. 2010. The relationship of the Community College Survey of Student Engagement (CCSSE) and the National Survey of Student Engagement (NSSE). http://www.ccsse.org/aboutccsse/relate.cfm (accessed June 14, 2010).

Cheng, D.X., and L. Alcantara. 2007. Assessing working students’ college experiences: A grounded theory approach. Assessment & Evaluation in Higher Education 32, no. 2: 301–11.

Chi, M.T.H. 2009. Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science 1, no. 1: 73–107.

Chickering, A.W., and Z.F. Gamson. 1999. Development and adaptations of the seven principles for good practice in undergraduate education. New Directions for Teaching & Learning 80: 75–81.

Coates, H. 2010. Development of the Australasian Survey of Student Engagement (AUSSE). Higher Education 60, no. 1: 1–17.

Csikszentmihalyi, M. 1990. Flow: The psychology of optimal experience. New York: Harper & Row.

DEEWR. 2009a. Students: Selected higher education statistics. http://www.deewr.gov.au/HigherEducation/Publications/HEStatistics/Publications/Pages/Students.aspx (accessed June 13, 2010).

DEEWR. 2009b. An indicator framework for higher education performance funding. Discussion paper. http://www.deewr.gov.au/HigherEducation/Pages/IndicatorFramework.aspx (accessed December 20, 2009).

Feldman, K.A., J.C. Smart, and C.A. Ethington. 2004. What do college students have to lose? Exploring the outcomes of differences in person-environment fit. Journal of Higher Education 75, no. 5: 529–55.

Fredricks, J.A., P.C. Blumenfeld, and A.H. Paris. 2004. School engagement: Potential of the concept, state of the evidence. Review of Educational Research 74, no. 1: 59–109.

Gordon, J., J. Ludlum, and J. Hoey. 2008. Validating NSSE against student outcomes: Are they related? Research in Higher Education 49, no. 1: 19–39.

Kuh, G.D. 2004. The National Survey of Student Engagement: Conceptual framework and overview of psychometric properties. Bloomington, IN: Indiana University Center for Postsecondary Research and Planning. http://nsse.iub.edu/pdf/conceptual_framework_2003.pdf (accessed April 1, 2008).

Kuh, G.D. 2009. The National Survey of Student Engagement: Conceptual and empirical foundations. New Directions for Institutional Research, Spring, no. 141: 5–20.

LaNasa, S., A. Cabrera, and H. Trangsrud. 2009. The construct validity of student engagement: A confirmatory factor analysis approach. Research in Higher Education 50, no. 4: 315–32.

London, B., G. Downey, and S. Mace. 2007. Psychological theories of educational engagement: A multi-method approach to studying individual engagement and institutional change. Vanderbilt Law Review 60, no. 2: 455–81.

Mann, S. 2001. Alternative perspectives on the student experience: Alienation and engagement. Studies in Higher Education 26, no. 1: 7–19.

McCormick, A.C., G.R. Pike, G.D. Kuh, and P.D. Chen. 2009. Comparing the utility of the 2000 and 2005 Carnegie Classification systems in research on students’ college experience and outcomes. Research in Higher Education 50, no. 3: 144–67.

Moffatt, M. 1991. College life: Undergraduate culture and higher education. Journal of Higher Education 62, no. 1: 44–61.

Muldoon, R. 2009. Recognizing the enhancement of graduate attributes and employability through part-time work while at university. Active Learning in Higher Education 10: 237–52.

NSSE. 2009. Assessment for improvement: Tracking student engagement over time. http://nsse.iub.edu/NSSE_2009_Results/pdf/NSSE_AR_2009.pdf#page=4.

Parpala, A., S. Lindblom-Ylanne, E. Komulainen, T. Litmanen, and L. Hirsto. 2010. Students’ approaches to learning and their experience of the teaching-learning environment in different disciplines. British Journal of Educational Psychology 80, no. 2: 269–82.

Pascarella, E.T., T. Seifert, and C. Blaich. 2010. Validation of the NSSE benchmarks and deep approaches to learning against liberal arts outcomes. Paper presented at the annual meeting of the Association for the Study of Higher Education, November, Jacksonville, FL.

Pike, G. 2006. The dependability of NSSE scalelets for college- and department-level assessment. Research in Higher Education 47, no. 2: 177–95.

Rochford, F. 2008. The contested product of university education. Journal of Higher Education Policy and Management 30, no. 1: 41–52.

Steele, J.P., and C.J. Fullagar. 2009. Facilitators and outcomes of student engagement in a college setting. Journal of Psychology 143, no. 1: 5–27.

Vibert, A.B., and C. Shields. 2003. Approaches to student engagement: Does ideology matter? McGill Journal of Education 38, no. 2: 221–40.
