
Authentically assessing graduate teaching: outside and beyond neo-liberal constructs

Andrea C. Allard • Diane Mayer • Julianne Moss

Received: 4 June 2013 / Accepted: 11 December 2013

© The Australian Association for Research in Education, Inc. 2013

Abstract In this paper, we challenge the current focus on ‘best practice’, graduate teacher tests, and student test scores as the panacea for ensuring teaching quality and argue for ways of thinking about evidence of quality beginning teaching outside and beyond the current neoliberal accountability discourses circulating in Australia and other countries. We suggest that teacher educators need to reinsert themselves as key players in the debates around quality beginning teaching, rather than being viewed as a source of the problem. To enable teacher educators to assume accountability for quality beginning teachers, we propose the framework of a capstone teacher performance assessment—a structured portfolio called the Authentic Teacher Assessment (ATA)—and examine examples of these assessments through the lens of critical discourse analysis. As a measure of ‘readiness to teach’, the ATA is compared with supervising teachers’ assessments of preservice teachers. We argue that structured portfolios that include artefacts derived from preservice teachers’ practice in classrooms along with graduate teacher self-assessments provide a stronger accountability measure of effective beginning teaching and demonstrably address the current anxiety regarding ‘evidence’. We suggest that such an approach should be reliable enough to be ‘read’ by external assessors (and moderated across other teacher education institutions). Rigorous research on a national basis is called for in order to develop and implement a structured portfolio as rich evidence of graduates’ quality and readiness to teach.

Keywords Authentic teacher assessment · Capstone assessment · Graduate teaching professional standards · Initial teacher education · Quality teaching

A. C. Allard · J. Moss (✉)

Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia

e-mail: [email protected]

D. Mayer

Victoria University, PO Box 14428, Melbourne, VIC 8001, Australia

Aust. Educ. Res. DOI 10.1007/s13384-013-0140-x

Introduction

In the past decade, the quality of teachers has become the focus of much policy

debate (Organisation for Economic Co-operation and Development (OECD) 2005;

Townsend and Bates 2007). In Australia, concern has been raised about the quality

of teacher preparation and the teaching profession often because of what is seen as

poor competitive scores on international assessment programs such as the

Programme for International Student Assessment (PISA) (e.g. Department of

Education and Early Childhood Development 2012). This has been accompanied by

a view that there is one best practice model of teaching that all teachers in

preparation should learn to implement so that global comparative measures will

show higher results for Australia; this is what Bullough cautions against as the ‘seductive pursuit of what we now call “best practice”: namely, single, best solutions, to complex problems’ (Bullough 2012, p. 344).

This has been accompanied by claims about the ‘ineffectiveness’ of initial

teacher education (ITE), which regularly emerge from surveys of first-year teachers conducted by employers, teacher registration authorities and some researchers, in which beginning teachers highlight the ‘reality shock’ of beginning teaching and attribute this to poor preparation that is not practical enough (e.g. calls for more time in schools and less ‘theory’) rather than to the demands of the job (Louden 2008). As a result, teacher

education has been positioned as a ‘policy problem’ (Cochran-Smith and Fries

2005) often with an accompanying and increasingly complex ‘apparatus of

certification and regulation’ (Connell 2009, p. 214) designed to keep teachers and

ITE courses under close surveillance.

In the Australian context, neoliberal discourses have emphasised teacher

professional standards as a means of guaranteeing quality and holding teachers

and teacher educators accountable. Federal government investment in quality

teaching and teacher education effectiveness via its Smarter Schools—Improving Teacher Quality National Partnership (TQNP) program has so far resulted in the

introduction of alternative pathways into teaching (Teach for Australia and Teach

Next), the establishment of School Centres for Teaching Excellence designed to

enhance the practicum experience for preservice teachers, and the development of

national professional standards for teachers and nationally consistent accreditation

of teacher education programs and teacher registration. This is a particular example

of the way in which governments have sought to manage and reshape the teaching

profession, all in the pursuit of quality teaching.

Standards have been critiqued widely as inappropriate determinants of quality

teaching for a range of reasons including the failure to account for contexts and the

complexities of teachers’ work (Connell 2009; Tuinamuana 2011). Graduates are

judged as meeting the standards using a range of not always reliable approaches, e.g.,

a tick-a-box approach to a list of competencies; proxies such as passing university

assignments; supervising teachers’ subjective comments. We argue that an alternative

and more meaningful way of judging the quality of graduating teachers instead of the

pass/fail summative assessments (e.g. practicum supervisors’ reports) and graded

assessments (e.g. university assignments) is a structured portfolio as a capstone

assessment wherein graduating teachers demonstrate their capacities to plan, teach


and assess in ways that take account of the particular context and the students with

whom they are working. Thus, teacher educators can provide evidence of the

effectiveness of the teachers being prepared with authentic assessment of beginning

teaching that captures teaching in all its complexity. Quality teaching, that is, teaching

that addresses the academic, social and emotional learning needs of all students and in

a diversity of contexts, can be demonstrated through authentic assessment. In this way,

we argue that teacher educators can contribute to professionalizing teacher education and to shaping the teacher education system for the 21st century.

Judging effective beginning teaching

Notions of the effective teacher have changed over time. The early 20th century saw

a turn to psychology to provide a unique knowledge base for teaching. The resultant

technical-professional model of teaching gradually led to teacher education being

moved to universities and an underpinning teacher-scholar model of teaching with a

focus on ‘the reflective practitioner’ in the late 20th century (Connell 2009). More

recently, we have seen the move back towards a technical view of effective teaching

connected with the growth of a market-oriented and cultural order, and resulting in a

neoliberal governance of teaching itself built on a distrust of teachers and viewing

the profession as an anti-competitive monopoly (Connell 2009). This neoliberal

governance is characterised by lists of auditable performances such as those often

seen in professional standards for teachers. Part of the TQNP program noted above

included the development of national professional standards for teachers and

principals (Australian Institute of Teaching and School Leadership 2011b, c) as well

as national program standards for the accreditation of teacher education programs

(Australian Institute of Teaching and School Leadership 2011a).

However, at the moment, entry to the teaching profession in Australia is regulated by

state agencies that still use input models to make decisions about teacher registration and

readiness to teach. Judgments are made about the quality of a teacher education program

usually by paper review involving a panel of stakeholders deciding on the likelihood that

the program will prepare a competent beginning teacher. Then, employers and teacher

registration authorities use proxies like completion of the accredited teacher education

program, grades in university subjects or practicum evaluation forms and observations

of teaching to make a judgment about a graduating teacher’s level of professional

knowledge and practice—about their readiness to teach. However, authentic assessments of the actual professional practice of teachers in the workplace, incorporating

multiple measures, and focussing on judging the impact of teachers on student learning,

are seldom used as means to assess graduate readiness to teach.

As Connell’s (2009) analysis shows, the lists of current standards do not appear

to come from a systematic view of Education as a field of knowledge, and

‘teaching’s daily reality is an improvised assemblage of a very wide range of

activities’ (p. 219), so the issue is how to capture the complex reality of teaching in

an authentic way. In Australia and elsewhere there is growing interest amongst educators, evaluators, policy makers and school systems in developing other forms of assessment that are both trustworthy and reflective of the nature of teachers’ work


(Darling-Hammond and Snyder 2000; Darling-Hammond 2013; The State of

Queensland, Queensland College of Teachers 2012). It is particularly important to

foreground teachers’ judgment and challenge a ‘standards framework that embeds

the neoliberal distrust of teachers’ judgment’ (Connell 2009, p. 220).

Teacher educators in Australia have begun exploring, implementing and investigating various approaches to authentic assessment of teaching to inform this direction

(e.g. Dixon et al. 2011; Sim et al. 2012). A recent report for the Queensland College of

Teachers, the teacher regulatory authority in that state, suggests that:

Authentic assessment requires preservice teachers to deploy combinations of

knowledge, skills, and dispositions in their professional life. Authentic

assessment makes the core aspects of teaching visible and measurable against

a set of agreed standards. Authentic tasks engage preservice teachers in

processes that are necessary to act professionally in planning curriculum units

for a specific group of students, designing episodes of teaching, teaching, and

evaluating the effectiveness of their teaching. Authentic assessment, therefore,

requires preservice teachers to be explicit about their thinking and decision-making in designing teaching episodes, to reference the sources and rationale

for their ideas, and to reflect upon the actual teaching experience and plans for

revising and redesigning the teaching episodes. This dissolves the division

between theory and practice and creates a system of reflective practice that

adds to the professional knowledge of teaching. (The State of Queensland

(Queensland College of Teachers) 2012, p. 25)

Portfolio assessments (both structured and unstructured) are often used in teacher

preparation programs, usually as a capstone assessment (St. Maurice and Shaw

2004). An example of a structured portfolio that has been used for high stakes

credentialing decisions is the Performance Assessment for California Teachers

(PACT). PACT represents a multiple measures assessment used for initial teacher

registration in California. It is designed to collect evidence of preservice teachers’

content and pedagogical knowledge as well as higher-order thinking skills

(Pecheone and Chung 2006) and assesses ‘the planning, instruction, assessment,

and reflection skills of student teachers against professional standards of practice’

(Darling-Hammond 2006, p. 121). The tasks ‘are designed to measure and promote

candidates’ abilities to integrate their knowledge of content, students and

instructional context in making instructional decisions and to stimulate teacher

reflection on practice’ (Pecheone and Chung 2006, p. 24).

Redesigning teacher education assessment

At Deakin University, we drew on both the structure and the content of PACT to

inform the design, implementation and evaluation of what is known as the Deakin

Authentic Teacher Assessment (ATA) where graduates of the teacher education

programs demonstrate their effectiveness in relation to the work of teachers in the

workplace as framed by the Standards of Professional Practice for Graduating

Teachers (Victorian Institute of Teaching 2007). Like PACT, the ATA is designed


to include ‘multiple measures that allow a comprehensive view of what candidates

learn and what a program contributes to their performance’ (Darling-Hammond

2006, p. 135). It recognises that teaching involves four interconnected stages:

(i) planning and preparation; (ii) classroom teaching; (iii) assessment and feedback; and (iv) reflection and professional dialogue and decisions linked to future teaching

sessions. The ATA requires candidates to submit a structured portfolio including

teaching plans, teaching artefacts, student work samples, video clips of teaching,

and personal reflections and commentaries, which are organized in four categories

to reflect the regular ongoing work of teachers in the classroom over time, in cycles

of planning, teaching, assessment, and reflection. The ATA is assessed using rubrics

aligned with the Standards of Professional Practice for Graduating Teachers

(Victorian Institute of Teaching 2007).

The Deakin ATA was first implemented in 2010 as a compulsory capstone

summative assessment in the new Master of Teaching postgraduate teacher education

program. Similar to the PACT in California, the ATA has five components designed

as activities that reflect components of the teaching experience.

1. Context for learning: Preservice teachers are required to write about the learning

context within which they are working, describing the school and the classes

they teach and factors impacting on the learning environment.

2. Planning teaching and assessment: Preservice teachers describe, explain, and

justify their teaching and assessment plan for a sequence of 5–8 lessons.

3. Teaching students and supporting learning: Preservice teachers videotape

themselves teaching, submit a 10-min segment of the video, and contextualise

and reflect on the video segment in an accompanying written statement.

4. Assessing student learning: Preservice teachers report on their assessment tasks,

providing samples of students’ work and describe how the assessment outcomes

inform ongoing planning and teaching.

5. Reflecting on teaching and learning: Preservice teachers provide an analysis of

their teaching practice and students’ learning and how they have used this to

improve their teaching practice.

(Deakin University 2012)

While linked clearly to the work of teachers, this structured approach provides space

for graduating teachers to demonstrate their professional knowledge and skills,

while also allowing for their personal creativity and reflexivity. Capstone

assessments may be accessed by three key stakeholders of teacher education—the

graduating teacher, the school mentor (with whom the preservice teacher typically works at least 50 % of the school week) and the university academic. At

Deakin, the capstone task is assessed by the university academic. Elsewhere,

supervising teachers are also involved in assessing the capstone.

Methodology

Standards are part of globalizing neoliberal agendas, where ‘accountability’,

‘quality teaching’ and the related notions of ‘evidence’ are examples of the


institutional and cultural constructions of teaching circulating in Australian policy

environments. In this paper we use Critical Discourse Analysis (Fairclough 2000;

Kennedy and Doherty 2012; MacLure 2003) to examine how ‘readiness to teach’ is

discursively produced in two different types of assessments: Supervisor Practicum

Reports and the Deakin Authentic Teacher Assessment.

Both are texts aligned to the professional standards for graduating teachers as

specified in the state of Victoria (Victorian Institute of Teaching 2007) and each can be

understood as not only reflecting but also constructing understandings of ‘graduate

teacher’ and ‘readiness to teach’. We found Graham and Luke’s (2013) definition of

discourse as ‘institutionally and culturally structured patterns of meaning making’ (p.

105) helpful in understanding how texts such as the graduate standards are part of

governing discourses. As Fairclough and Wodak (1997) suggest, ‘A useful working

assumption is that any part of any language text, spoken or written, is simultaneously

constituting representations, relations, and identities’ (p. 275). How are notions of

quality teaching and graduate teacher represented in these texts? How are teacher

practices/relationships constructed in these texts? CDA enables texts to be interrogated at a number of differing levels. By putting CDA to work, a critical perspective is

brought to how constructs of ‘quality teacher’ and ‘readiness to teach’ are potentially

produced and reproduced through key forms of assessment.

Analysis of the language used provides a way of examining meaning and

significance embedded in texts. Conversely, what is not said also matters because

this may signify taken-for-granted beliefs that need to be interrogated or an absence

of alternative meanings that might work to challenge the status quo, or to speak

about relations of power—e.g., who has the power to define ‘quality teaching’ and

teacher readiness, and who doesn’t. ‘The silences around subjects, the repetition of

images and phrases, or contradictory statements, together with the declared

positions can suggest underlying values and beliefs and relations of power’ (Allard

and Santoro 2008, p. 207). Kennedy and Doherty (2012) caution that ‘in adopting a

CDA approach, it is not enough simply to examine the words themselves, it is also

important to consider the wider context within which the [text] has been produced’

(p. 3). With this in mind, we have offered an analysis of the current policy context of

teacher education. Next we examine two different kinds of everyday texts to

consider how each endorses or challenges particular understandings of ‘graduate

teacher’ and what they should be able to do. Fairclough (2000) suggests, ‘CDA can

constitute a resource for struggle as it does not isolate language but addresses the

shifting network of practices in a way which produces both clearer understanding of

how language figures in hegemonic struggles around neo-liberalism, and how

struggles against neo-liberalism can be partly pursued in language’ (p. 148).

Textual analyses

The following documents form our data sources:

(1) School experience reports completed by the school-based supervisor/mentor

and/or school coordinator


(2) The ATA texts developed by six graduating teachers

In the teacher education course, candidates complete three separate periods of

professional experience in either primary or secondary schools.1 The ATA is

completed during the final professional experience unit and work towards this is

undertaken in the final five-week practicum. While the overall project that formed

the basis for this paper examined the final submitted Authentic Teacher

Assessments of 60 Master of Teaching preservice teachers who completed their course in the

2011 Australian academic calendar year, for this paper purposeful sampling has

been used to select examples of the ATAs prepared and submitted by six graduating

teachers. The six examples were chosen as representative of gender (three males,

three females), the type of qualification they were gaining (primary, secondary or

P-12), and with consideration of the spread of marks achieved: two with a pass mark

(50–59), one with a credit (60–69), one with a distinction (70–79) and two with a high

distinction (80–100). The decision to analyse ATAs across a spread of graded marks

was made in order to consider whether the texts produced by the six preservice

teachers constructed ‘readiness to teach’ in identifiably different ways. Along with a

close reading of the six samples of ATA work, the assessors’ scoring rubrics and the

school experience reports completed by the school based supervisor for each of the

preservice teachers selected, were also examined. The professional experience

reports (n = 18) were scrutinised by the authors, first independently to identify key

comments in each of these documents. Next, a series of dialogical conversations

among the authors were scheduled and a matrix for the analysis of the textual data

from the supervisors’ reports and the ATAs was developed to record the emergent

findings for review, comparison and subsequent final analysis.

Supervising teachers’ assessments

The assessment reports of supervisors count significantly towards candidates’ completion of the teacher education course; that is, candidates cannot successfully graduate without passing the practicum, which is assessed solely by the supervising teacher.

This positions supervising teachers powerfully as gatekeepers to the profession and

indirectly holds them accountable for the success or otherwise of the preservice

teacher. Critiques of this approach to making judgements about readiness to teach

have been made. Some research suggests that candidates who comply most closely

with their supervisors’ teaching processes may be assessed more favourably. For

example, Courneya et al. (2008) in discussing the ‘observer’s search for self’ found

that ‘Practices concordant with what the reviewer “did” or “would do” were evaluated positively. Practices which reviewers would not do were typically

evaluated negatively…’ (p. 70). Darling-Hammond and Snyder (2000) also note

that assessments such as the practicum report do ‘not address important differences

in context and content…’ (p. 525). Larson (2010) concurs, suggesting ‘Assuming

that effective teaching can be guaranteed or even measured by isolating sets of skills

1 Master of Teaching—Early Childhood has been offered as part of the course from 2012 onwards.

However, because the data assessed here is from 2011, no ATAs completed by candidates in the Early

Childhood strand are analysed as part of this paper.


or competencies ignores the highly contextualised, complex and adaptive nature of

teaching’ (pp. 16–17).

Yet, assessment of preservice teachers’ practicum performances against the

criteria of graduate standards, often a set of competencies, is nonetheless carried out and commonly

uses a generic report form that does not allow for differences in contexts. This is

certainly true of the report form used for professional experience assessments in the

Master of Teaching course at Deakin University. The form sent to all supervising

teachers consists of four pages and is closely tied to the Standards for Graduating

Teachers (Victorian Institute of Teaching 2007). The three themes, ‘professional

knowledge’, ‘professional practice’ and ‘professional engagement’, are prominently

displayed on the right side of pages in the report; a middle column allows the

supervisor to indicate the progress of the preservice teacher against these areas by

using a Response Code. The Response Code consists of letters that indicate not yet

evident, beginning, consolidated or established understandings or practice. As a

judgement call, these are vague at best. Absent are the details, for example, of what

constitutes the evidence for determining ‘established’ practices. The codes used are

the same ones that many teachers use to report on their primary or secondary

students’ progress to parents. These work to position the preservice teachers

primarily as students—not as teachers-in-development. Through the language and

structure of the report, ‘readiness to teach’ is constituted as being able to demonstrate the Graduating Teacher Standards, but what counts as evidence of these Standards is not

specified, nor is any evidence required.

When assessment and reporting are framed solely by the language of professional teaching standards, supervising teachers have limited choices at their disposal as to how to judge. Standards by their nature require the assessor/writer to produce language that is competency-based and impervious to substantive depth. The text

below is composed of quotes from practicum assessments of the six selected graduating

teachers. Through the language chosen and used in these texts, a preservice teacher

[shows] a wonderful level of professionalism

plans thoroughly

reflects on her lessons

[has] a professional manner

has a very bright future ahead as a teacher

is extremely thorough

is to be commended on completing an excellent first placement

[needs] a large focus on developing skills in the areas of classroom management.

Restructuring the comments in this way helps to highlight how such phrases carry

taken-for-granted assumptions about ‘quality’ teaching and preservice teachers’

professional practice. Examining the 18 teacher reports from three practicum

placements, including the final placement, we found that language about emerging

graduate teachers was consistently shaped by the often-vague rhetoric of the

standards. For example, Erin,2 a female studying to become a primary teacher, was

2 All names are pseudonyms.


considered by her supervising teachers to be ‘intelligent’, ‘hard working’, ‘willing

to learn’. She ‘put in many hours into her planning’, ‘works hard’, was ‘confident’,

‘takes feedback on board’, and ‘involves herself in all school activities’. To what

extent do such descriptors work to adequately convey the specificities and the

quality of the teaching work done by Erin? Might such terms be read as simply

generic descriptors that could be applied to any beginning worker—e.g., a clerk or a

bureaucrat?

Another example is Jared, a male qualified as a P-12 teacher. In his supervisors’

reports, Jared was named as being able to build ‘rapport’, and establish ‘positive

relationships with students’, critical skills needed by a beginning teacher. He also

showed ‘professional commitment’, and ‘enthusiasm’. However, explicit examples

that evidence his knowledge of student learning were largely absent. A single

comment by his supervising teacher that Jared ‘found that inquiry focused pedagogy motivates students to learn’ indicates that his mentor knew of Jared’s capacity to

ensure that his pedagogy aligns with student learning, that is, he could demonstrate

the work of a teacher.

In our scrutiny of the school practicum documents we found that across all

sampled reports, idiomatic, persuasive, and primarily descriptive language dominated. Personal attributes of the preservice teachers were often commented on,

rather than examples given of classroom-based curriculum and assessment or

pedagogical practices. Ultimately, we are concerned with the ‘consequences of this

discourse and its effects on teachers, schools and educational reform efforts’ (Larsen

2010, p. 209). (How) do the discursive responses evoked by such standards-based

assessment forms constitute graduate teachers as ready to teach? Accountability, on

the part of supervising teachers, seems to be enacted in their judgements

about the students, but how ‘practice’ is exemplified in the work of these soon-to-be

graduates is largely absent from the reports. Next, we consider examples from the

selected ATAs with a view to analysing how the structured portfolio constitutes the

work of graduating teachers and how discourses of ‘evidence’ and ‘readiness to

teach’ operate, or are absent, in these texts.

The Authentic Teacher Assessment (ATA)

From a total of 60 ATAs completed in 2011, we purposefully sampled six judged by

the assessors as having achieved results ranging from 56 (pass) to 98 (high

distinction) overall out of 100. Individually and collectively, we then reviewed each

of the selected samples, with the question in mind: (how) does the ATA work to

demonstrate readiness to teach? From close readings of each of the selected

preservice teachers’ work, a type of continuum was identified. This continuum

ranged from work that superficially paraphrased the ‘Rhetoric’ associated with

theories encountered in academic readings or statements from the Graduate

Standards (but with no informed observations or support to demonstrate how these

look in practice) to ‘Evidential’ where the graduating teacher explained clearly what

they did in regards to student learning, why they did this, why something did or

didn’t work, and what they would do differently on reflection. To ‘count’ as

evidential, the text needed to reference artefacts or explicit examples as support.


The actual structure of the ATA scaffolds the preservice teachers through the

learning and assessment process. For example, in Activity 1: Context for learning,

the school location, socio-economic factors, cultural and language backgrounds of

the students, gender ratio, available resources and other factors that impact on

teaching and learning had to be considered. At its most basic, the requirement is

addressed by accessing a school’s website and replicating the demographic data

there. Requiring preservice teachers to know the community context/background of

their students foregrounds the importance of designing learning that will engage

specifically with the needs of those they are teaching. However, such links are not

always made by preservice teachers and this becomes evident in the ATA. For

example, Cara, who taught at a Catholic primary school in the south-eastern suburbs

of Melbourne, noted that ‘LBOTE students = 82 %; Main languages: Vietnamese,

Cantonese, Singhalese, Italian, French and Spanish’. However, while specifying the

high percentage of ESL students in her school, the unit of work and the

accompanying commentary that Cara provides in her ATA show no evidence that

she has differentiated the curriculum to cater for these second language learners, nor

does she comment on why she did not do this in her post-practicum reflections. We

suggest that she has rhetorically replicated the available demographic data without

demonstrating how she uses the knowledge of student diversity to help her plan and

teach.

So while each ATA section includes ‘prompts’ to scaffold the preservice teacher

to address the particular requirements, the variability and range of responses (i.e.,

from ‘rhetorical’ to ‘evidential’), together with the ‘evidence’ provided (or not) by

the preservice teacher, provide a clear and persuasive means of assessing the extent to which the preservice teacher is ‘ready to teach’. The following range of examples from each section in the ATA aims to demonstrate the continuum of responses and how

evidence is cited (or not) to support claims.

Activity 2: Planning teaching and assessment

The partial commentary provided by Terry of a Year 11 Biology unit on ecosystems

is an example of a response to Activity 2 ‘Planning Teaching and Assessment’. This

can be read alongside the actual lesson plans that are attached in the ATA. Terry

says:

The teaching strategies used across the lesson plans were designed to be

progressive towards the learning goal of the practical report. The first lesson

was designed to be a building block for the students, providing them with the

basis of the unit, and the knowledge they would have to develop to

successfully complete the unit and assessment tasks. As the students were

keen learners as noted through the journal and in section 1, the lessons did not

need to overlap, as the students seemed to absorb the information. The

materials used in the lessons were great help to the students, in particular the

power point presentations because they were put on the student intranet after

every class. This meant that if a student missed something in class, or was not

at the class at all, they could go on the system and review the class, or see the


homework they had to complete. […] The lessons allowed the students

academic development through review questions for every lesson, and how the

lesson linked to the one before, and how all topics are related. […]

Analysis of this commentary suggests a somewhat superficial approach to

planning for teaching. For example, Terry states that because the students were

‘keen learners’, ‘the lessons did not need to overlap’; yet this statement is

contradicted by his later assertion that ‘review questions for every lesson’ linking

the lesson to the previous one and relating each topic to the others contributed to

‘academic progression’. His claim that putting the power points on the intranet

aided student ‘academic development’ appears to be rhetoric-only as no evidence to

support this is provided. His metaphor that ‘students absorb the information’

suggests that Terry understands knowledge as transmitted rather than as socially

constructed.

While lesson plans and units of work have long been a requirement of initial

teacher education programs, the ATA commentary allows candidates to demonstrate

their thinking behind the processes of planning for teaching and to explicitly

reference the evidence they provide. Their own words, rather than the judgments of

supervising teachers, present (or not) persuasive evidence of their capabilities.

Activity 3: Teaching students and supporting learning

In the ATA, preservice teachers are required to videotape themselves teaching, submit

a 10-min segment of the video, and to write a commentary on the video, reflecting on

their teaching practice and how they facilitated student learning. Jared, who taught a

Year 4 class in a state primary school, uses his video of a maths class, ‘the third lesson

in a series of six, examining the four functions/operations (addition, subtraction,

multiplication and division)’ as a means to do a close analysis of both his pedagogical

approach (cooperative learning) and of student learning. He adapts the ‘genre’ of an

academic essay when he justifies his choice of pedagogies by saying:

One key teaching strategy employed in this lesson, and in many of the math

lessons I taught, was the use of co-operative learning. […] This pedagogical

approach hands over control over the learning to students, and has been shown

to have numerous learning benefits, including socialization and encouraging

divergent thinking in problem solving activities (Bobis, Mulligan and Lowrie

2009). Less able students benefit from the support of more able students as

together they work toward a common goal (Churchill et al. 2011).

This use of key literature demonstrates his capacity to read and process pedagogical

knowledge, but this could be, and often is, demonstrated through university

assignments alone. To determine whether the candidate can enact theory in the

classroom, the question needs to be asked: What does such an approach look like in

practice? Jared cites specific times in his classroom video to provide evidence and

analysis of his teaching:

Co-operative learning is evident in the video-taped activity at several points,

including: when student 1 explains his method for solving the equation [DVD


2:15] and corrects an error, stating ‘… equals 645 then take away the 6’. A

number of students correct him, stating, ‘That would be 649’. Student 3 then

adds, ‘and then you can take away another 5.’ Student four says with surprise,

‘Oh yeah. You can take away another 5’. Another student adds, ‘At least

you’re getting closer’. These students are not in competition, rather supporting

each other to evaluate the process.

Thus, in the above example and in a subsequent section of this ATA, Jared enacts

his understandings about how his primary-age students learn, his discipline

knowledge about mathematical processes, and the purposes of cooperative learning,

and backs his claims that students are learning with evidence from the video. We

use this lengthy example to demonstrate what we see as an exemplar of the far end

of the continuum of learning, what we have called ‘evidential’; that is, how praxis can be demonstrated through the use of a structured portfolio.

In contrast, Max, a Health and Physical Education preservice teacher, who

completed his final five-week professional experience at a state secondary school in

the northern suburbs of Melbourne, describes the lesson that he videoed:

My video consisted of a class I had which were year 8 students in which I had

to teach a hockey unit to. It was a mixed gender class, and for this particular

session it was an indoor hockey lesson, therefore, the gymnasium was needed.

[…] The video shows a warm up and warm up game called ‘flags’ which the

students had requested in their previous lesson. The aim of this activity is to

increase the blood flow and prevent injuries whilst challenging the students to

move and think at the same time. It then moves into a station skill-set-up with

5 stations so the students can practice their skills in 5 different settings. The

aim of this activity is to allow students to practice their passing, dribbling and

shooting skills with minimal pressure. […]

Max states the learning intentions that informed his planned lesson activities,

describing what preceded and followed the 10 min of the videotaped lesson.

However, his comments centre more on the difficulties he encountered with his

students than on the skills taught and the success, or not, of the learning. Max goes

on to say:

I attempted to engage students that seemed to not be confident with particular

skills or drills with positive reinforcement and encouragement. Furthermore, I

decided to wave [move?] away from the stereotypical “drill instructor” with

students who weren’t motivated and tried to adopt a more accommodating

approach such as telling them that I knew they didn’t particularly enjoy some

aspects of PE but that I knew they could do better with their effort, rather than

punishing them or yelling at them when they lacked motivation. Evidence of

this can be seen at the beginning of my film clip with the 2 young female

students, who were very nice girls, however, were difficult to engage

particularly if they didn’t enjoy the sport. I found that encouraging them to do

better, worked to some extent as I had seen the reverse situation on my

observation when they were yelled at.


The video excerpt visually portrays the problems that he encountered in trying to teach young female students something in which they have little interest. While his

commentary here doesn’t demonstrate much insight into how he could design a

lesson that would be more engaging, he does recognize what doesn’t work, i.e.,

‘yelling’. He attempts to take a more personalized approach with the girls by

encouraging them to at least try. He comments on his mistakes:

The video indicated that I raised my voice far too often and tried to talk over

the students. […] Another observation that was derived from both my video

and the rest of the class was safety issues such as the correct use of hockey

sticks…

The video works as both a source of evidence (to support what the candidate has done/said they have done) and as a catalyst for reflection: e.g., ‘this is what went

right (or wrong), this is what I will do next time’. What Max ‘will do next time’ is

suggested in his comment: ‘Explicitly stating the rules and expectations at the

beginning of the lesson and indicating that there will be consequences if they aren’t

followed may have prevented the chatter that was occurring.’ While this might be

interpreted as rhetorical, via the ATA’s commentary and video evidence Max

provides an accurate picture of himself as a soon-to-be teacher in the process of

developing his professional perspectives.

Activity 4: Assessing student learning

To demonstrate how their plans for assessing their students link to their learning

objectives, the ATA requires candidates to make the connections explicitly using

the lesson plans that accompany Activity 2 and the assessment rubrics and analysis

that must be included in Activity 4. For this section, in assessing the understandings

of percentages that her primary students developed through her unit of work, Cara

makes the following connections and cites evidence to support her claims:

The criteria used to measure student learning appear as competencies in the

assessment tools (see Tables 3.1 and 3.2, overleaf). These competencies are

linked, explicitly or implicitly, to the learning objectives in the lesson plans.

Lesson 1’s objectives, for instance, state that students will “explain that fractions are used to deal with parts of things” and “define percentages as special cases of fractions.” These learning objectives are satisfied jointly if the

student demonstrates competence in identifying and representing percentages

on a 10 × 10 grid.

She then provides a summary table of how the whole class progressed against five

criteria. Having designed the assessment tool, Cara also comments on what she has

learned through this:

…By measuring [student] competence at the start of the sequence (Table 3.1)

and at the end (Table 3.2), the tool creates a more detailed picture of each

student’s progress. Another advantage is that the tool allows for a more

nuanced assessment of students’ abilities. Each table has a key linked to a set


of descriptors. In Table 3.2, for instance, a tick (✓) indicates that the student is

competent at the task, ‘D’ indicates developing competence, and so forth. The

use of these symbols provides the teacher with more detailed “at a glance”

information. This, in turn, allows for the design of further learning activities

and assessments to promote student learning.

Having outlined its advantages, I should note that I also encountered several

difficulties in using the tool. The first was that I did not leave myself enough

time to accommodate the unexpected findings yielded by the pre-test. […]

Another problem was that I found myself with a significant number of low-

achieving students who had been absent from one or more lessons. I felt it

might be unfair to have them sit the test and that the time might be better spent

giving these students additional coaching. The net result was that not all

students were assessed in the same way and further assessment was required

before a final picture of students’ progress could be given.

Activity 5: Reflecting on teaching and learning

The Graduate Standards require that ‘Teachers reflect on, evaluate and improve

professional knowledge and practice’ (Victorian Institute of Teaching 2007). In the

sampled ATAs, the depth and breadth of individual reflections, as required by

Activity 5, again varied, although the decision to be honest about their own mistakes

was evident in many. For example, Jared, in revisiting the journal he kept during the

5-week practicum, uses his own coding of recurrent comments, e.g. (#plan)

(#assess) as the basis for commenting on how he deals with the continual pressures

of teaching. He says:

Another area of planning (#plan), assessment (#assess) and mathematics

(#math) that was evident in my entries was a tendency to fall back to default

position in times of stress. It is clear for many of my #plan entries that I

consistently had trouble planning mathematics when I was ‘under the pump’.

[…]

‘I started my planning with the learning activities and justified the activity by

‘putting in’ some relevant objectives and curriculum links. Even though I am

trying to be aware of the failures of this approach as soon as I was under

pressure I fell into my old ways. How can I make sure I start with outcomes?’

[Journal, Day 3]

In my first year of teaching I will be constantly under pressure and short of

time. There is a risk that I may revert to my default position and become the

‘perpetual wayfarer’ that Dewey talks of (Churchill et al. 2011). I could fall

into line, simply ticking the boxes and justifying my way through a year. In

order to stay effective I believe I will have to implement some sort of

journaling or constant reflexive practice into my everyday teaching. Even

though I hated writing a journal at the time, I know that the only way I will

notice that I am ‘defaulting’ is to take some time off the hamster wheel and

examining my practice.


‘Some time off the hamster wheel’: Jared is able to recognize the continual demands

faced by teachers as they work to address the complex social, emotional and

academic learning requirements of their students. He also, in his choice of

metaphor, conveys recognition that the ceaseless round of activity that makes up

day-to-day teaching can only be disrupted by a conscious determination to ‘notice’

what is or is not working and to take responsibility by changing his practices. We

suggest that such comments are insightful and serve as a clearer indicator of Jared’s

readiness to teach than comments made on his supervisor’s practicum report alone.

Perhaps the value and strength of the ATA as a more meaningful assessment of teacher readiness is best demonstrated when it is viewed as a ‘whole text’: a 6,000-word written and visual explication of what the preservice teacher does and does not know and think, and of what they have learned. For example, Erin describes how she designed a

lesson that was much too challenging for many students. She says of this:

This was a great learning curve for me—it forced me to think about the current

abilities and ways of learning of each of my students and how I could provide

them with effective learning experiences. It hit home that I couldn’t expect to

provide the same task to 24 children and for it to work for all of them.

Another realisation was that behaviour problems were often a result of students not being able to engage with the lesson activities because her planning and choice of activities were not pitched at a level, or presented in such a way, that made them accessible to various students:

Through working one-on-one with individual students, I discovered that those

‘at the bottom’ were playing up to get attention, because they didn’t

understand. The realization struck me that students behave the way they do for

a reason—they don’t misbehave for the sake of it, there is always some

underlying issue… I was so concerned about ‘classroom management’ but

realized that you can’t manage people. It’s about getting to really know my

students, building relationships based on trust that will allow me to understand

my student’s needs.’

By reviewing her daily journals, she pinpoints the time when she was starting to see:

‘…why they do the things that they do. I see this as a real turning point—a

shift in my thinking about how to create a classroom of learners that works,

what I need to do to design learning experiences that result in learning for

everyone, and therefore whole class engagement. … I notice at the start of my

journal that I am constantly discussing and worrying about behaviour issues. It

is positive to notice these worries diminish as I move through my placement,

and journal musings became more focused on meeting the individual needs of

my students as a way to enhance learning and engagement, thereby

minimizing behavior issues. I have started to look at this problem from a

different angle’.

In her choice of language, we glimpse the learning that is taking place as Erin

makes sense of her experiences; the metaphors she uses (‘a great learning curve’, ‘I

discovered’, ‘the realization struck me’, ‘a real turning point’, ‘a shift in my


thinking’, ‘I started to look…from a different angle’) indicate the dynamic changes

that occurred in her knowledge and skills as she prepared for, taught, assessed and

reflected on her students’ learning. She uses the language of movement to capture

the gains she believes she has made, the insights she has now developed. Such

language, together with supporting evidence, offers, we suggest, a better exemplar of the

act of becoming a quality teacher, of the journey towards developing a professional

identity, than can be found in a list of Graduate Standards or in the informed (but

summary-only) judgments of supervising teachers.

Conclusion

In this paper, we have argued that by completing the Authentic Teacher Assessment,

the preservice teachers were positioned differently and more powerfully in relation

to demonstrating their knowledge, skills and readiness to teach. Instead of relying

on others’ judgments to determine their success or failure, preservice teachers

can demonstrate their capabilities to do the work of teaching and to honestly assess

themselves. They provided artefacts, reflections and commentaries to support their

claims. In doing so, they had to ‘own’ their achievements as well as their failures.

The contingencies of total reliance on another (e.g., supervising teacher or teacher

educator) to determine readiness, someone who may or may not ‘approve’ of the

approach they take, are not part of the ATA. We have aimed to demonstrate how such

an authentic assessment allows the preservice teachers to position themselves in

alternative discourses to that of the neoliberal ‘accountability’ discourse. Such

positioning acknowledges the notion of teacher as ‘life-long learner’ where mistakes

are understood as a means of improving practice. It also foregrounds the teacher as

‘reflective practitioner’ where teaching is viewed as inextricably connected to

thinking about students’ learning in specific contexts instead of as a technicist operation drawing on a generic, one-size-fits-all ‘best practice’ approach.

Moreover, in this framing, ‘ready to teach’ is demonstrated by doing the actual work

of teachers over time in the workplace, and is backed-up with evidence. ‘The

greatest benefits will be secured where multiple measures of learning are combined

with evidence of practice’ (Darling-Hammond 2013, p. 149).

We argue that it is necessary, and well past time, for teacher educators to reclaim

the accountability space, to reassert the right to determine when and how the

preservice teachers enrolled in teacher education courses are ready to teach. We, not

bureaucrats, politicians or registration authorities, are best placed to design capstone

assessment tasks to provide the opportunity for soon-to-be graduates to demonstrate

their professional knowledge and skills, and the associated professional judgment

needed for teaching linked to effective student learning. Such a capstone assessment

task can better capture and convey the knowledge, skills and dispositions required to establish relationships with children who enter classrooms with a wide

range of economic, cultural, gendered and linguistic needs. There is no ‘best

practice’ but there is ‘better practice’. Moreover, in this way, we provide evidence

of our effectiveness to prepare teachers for teaching in the complex classrooms of

the 21st century. As Pecheone and Chung (2006) suggest, ‘A well conceptualized


teacher assessment system that incorporates multiple sources of data, including an

assessment of teaching performance, has the potential to provide the evidence

needed to demonstrate the significant contribution of teacher education on teaching

performance and ultimately on student learning’ (p. 34).

However, we are not arguing that the Deakin version of the ATA is the only or

the best version of an authentic teacher assessment, or even that it is, as yet, rigorous

enough to meet validity and reliability claims for broader adoption. Rather, we have

used the ATA to illustrate both the strength and potential of such a type of authentic

teacher assessment. We believe that there is important work to do by teacher

educators on a national basis, to further develop and rigorously trial an authentic

capstone assessment as a legitimate and far more reliable assessment alternative to

those that are currently being proposed to make judgments about graduate teacher

capability and the value of teacher education, some of which are ‘not particularly

helpful and can be harmful’ (Darling-Hammond 2013, p. 148). An effective teacher

evaluation system should be ‘based on professional teaching standards’ and ‘include

multifaceted evidence of teacher practice, student learning, and professional

contributions that are considered in an integrated way’ (Darling-Hammond 2013,

p. 153). We argue that a national research project is needed urgently, a project led

by teacher education researchers from across the country to develop, trial and

evaluate authentic teacher assessments as legitimate means by which the

effectiveness of graduating teachers and therefore the value of teacher education

is recognized. This work would occur in newly created hybrid spaces for teacher

education ‘that bring together school and university-based teacher educators and

practitioner and academic knowledge in new ways to enhance the learning of

prospective teachers’ (Zeichner 2010, p. 92).

As the neo-liberal discourses circulate through teacher preparation, it is vital that

educators assert their knowledge and skills to ensure that our profession continues to

educate graduates who are knowledgeable, skilful, compassionate, articulate,

creative and thoughtful. As Donna Wiseman reminds us:

The public and political rhetoric will continue, and it is safe to say that during

the coming years, teacher educators must be prepared to participate in the

debates in an informed and reasoned manner. It will be up to us to contribute

scholarly solutions to the policy questions and issues.

(Wiseman 2012, p. 90)

We invite other teacher educators to join us in speaking back to the

dominant and dominating neo-liberal discourses of standards and standardization.

References

Allard, A., & Santoro, N. (2008). Experienced teachers’ perspectives on cultural and social class

diversity: Which differences matter? Equity and Excellence in Education, 41(2), 200–214.

Australian Institute of Teaching and School Leadership. (2011a). Accreditation of initial teacher

education programs in Australia: Standards and Procedures. Carlton, VIC: Ministerial Council

for Education, Early Childhood Development and Youth Affairs (MCEECDYA).


Australian Institute of Teaching and School Leadership. (2011b). National Professional Standard for

Principals. Carlton, VIC: Ministerial Council for Education, Early Childhood Development and

Youth Affairs (MCEECDYA).

Australian Institute of Teaching and School Leadership. (2011c). National Professional Standards for

Teachers. Carlton, VIC: Ministerial Council for Education, Early Childhood Development and

Youth Affairs (MCEECDYA).

Bullough, R. V., Jr. (2012). Against best practice: uncertainty, outliers and local studies in educational

research. Journal of Education for Teaching, 38(3), 343–357.

Cochran-Smith, M., & Fries, M. (2005). Researching teacher education in changing times: Politics and

paradigms. In M. Cochran-Smith & K. Zeichner (Eds.), Studying teacher education: The report of

the AERA panel on research and teacher education. Mahwah, NJ: Lawrence Erlbaum.

Connell, R. W. (2009). Good teachers on dangerous ground: towards a new view of teacher quality and

professionalism. Critical Studies in Education, 50(3), 213–229.

Courneya, C., Pratt, D., & Collins, J. (2008). Through what perspective do we judge the teaching of

peers? Perspectives on teaching. Teaching and Teacher Education, 24, 69–79.

Darling-Hammond, L. (2006). Assessing teacher education: The usefulness of multiple measures for

assessing teacher outcomes. Journal of Teacher Education, 57(2), 120–138.

Darling-Hammond, L. (2013). Getting teacher evaluation right: What really matters for effectiveness and

improvement. New York: Teachers College Press.

Darling-Hammond, L., & Snyder, J. (2000). Authentic assessment of teaching in context. Teaching and

Teacher Education, 16(5–6), 523–545.

Deakin University. (2012). Deakin authentic teacher assessment (ATA) handbook: Master of Teaching

EPR 703 reflecting on practice in professional experience. Burwood: Faculty of Arts and Education,

School of Education.

Department of Education and Early Childhood Development. (2012). New directions for school

leadership and the teaching profession. Melbourne VIC: Victorian Department of Education and

Early Childhood Development (DEECD).

Dixon, M., Mayer, D., Gallant, A., & Allard, A. (2011). Authentically assessing beginning teaching:

Professional standards and teacher performance assessment. A project funded by the Victorian

Department of Education and Early Childhood Development and the Victorian Institute of Teaching.

Fairclough, N. (2000). Language and neo-liberalism. Discourse and Society, 11(7), 147–148.

Fairclough, N., & Wodak, R. (1997). Critical discourse analysis. In T. A. van Dijk (Ed.), Discourse

studies: A multidisciplinary introduction. Discourse as social interaction (Vol. 2). London: Sage

Publications.

Graham, P., & Luke, A. (2013). Critical discourse analysis and political economy of communication:

Understanding the new corporate order. In R. Wodak (Ed.), Critical discourse analysis: Concepts,

history, theory (pp. 103–130). London: Sage Publications.

Kennedy, A., & Doherty, R. (2012). Professionalism and partnership: Panaceas for teacher education in

Scotland? Journal of Education Policy. doi:10.1080/02680939.2012.682609.

Larsen, M. (2010). Troubling the discourse of teacher centrality: a comparative perspective. Journal of

Education Policy, 25(2), 207–231.

Louden, W. (2008). 101 Damnations: the persistence of criticism and the absence of evidence about

teacher education in Australia. Teachers and Teaching: Theory and Practice, 14(4), 357–368.

MacLure, M. (2003). Discourse in educational and social research. Buckingham: Open University Press.

Organization for Economic Cooperation & Development (OECD). (2005). Attracting, developing and

retaining effective teachers. Final report—Teachers matter. Paris: OECD Publishing.

Pecheone, R., & Chung, R. (2006). Evidence in teacher education. Journal of Teacher Education, 57(1),

22–36.

Sim, C., Freiberg, J., White, S., Allard, A., Le Cornu, R., & Carter, B. (2012). Using professional

standards: Assessing work integrated learning in initial teacher education. Retrieved from

http://www.teacherevidence.net. Accessed 29 Jan 2013.

St. Maurice, H., & Shaw, P. (2004). Teacher portfolios come of age: A preliminary study. NASSP

Bulletin, 88, 15–25.

The State of Queensland (Queensland College of Teachers). (2012). An investigation of best practice in

evidence-based assessment within preservice teacher education programs and other professions.

Brisbane: Queensland College of Teachers.

Townsend, T., & Bates, R. (Eds.). (2007). Handbook of teacher education: Globalization, standards and

professionalism in times of change. Dordrecht: Springer.


Tuinamuana, K. (2011). Teacher professional standards, accountability, and ideology: Alternative

Discourses. Australian Journal of Teacher Education, 36(12), Article 6.

Victorian Institute of Teaching. (2007). Preparing future teachers: The standards, guidelines and process

for the accreditation of pre-service teacher education courses. Melbourne, Vic: Victorian Institute

of Teaching.

Wiseman, D. (2012). The intersection of policy, reform, and teacher education. Journal of Teacher

Education, 63(2), 87–91.

Zeichner, K. (2010). Rethinking the connections between campus courses and field experiences in

college- and university-based teacher education. Journal of Teacher Education, 61(1–2), 89–99.

Andrea C. Allard is Associate Professor (Honorary) at Deakin University, Australia. She led the design

of the recently introduced Deakin Master of Teaching and was Associate Head of School (Teaching and

Learning) in the School of Education at Deakin University. She is a CI on the ARC project Studying the

Effectiveness of Teacher Education (SETE) (2010–2014), which investigates the effectiveness of

teacher education in preparing teachers for the variety of school settings in which they begin their

teaching careers.

Diane Mayer is Professor and Pro Vice-Chancellor Colleges and Distinctive Specialisations at Victoria

University, Australia. She has more than 20 years of experience in leadership positions across a number of

institutions including Deakin University, the University of California at Berkeley, and the University of

Queensland. Diane is an internationally recognized expert in the field of teacher education. She is

currently leading major Australian projects on the effectiveness of teacher education.

Julianne Moss is Associate Professor (Pedagogy and Curriculum) at Deakin University, Australia and

President of the Australian Association for Research in Education, 2013–2014. She is a CI on recent major

government tenders (e.g. Evaluation of the national partnership Teacher Quality Supply and Retention -

TQSR; Initiatives for the Victorian School Workforce 2013; Longitudinal Teacher Education Workforce

Study - LTEWS, 2011–2013). As a CI she leads the program of research with 12 project schools,

university associate researchers and school mentors for the ARC Linkage Grant Intercultural

Understanding in Primary and Secondary Schools, 2011–2014.
