
Valid inferences of teachers’ judgements of pupils’ reading literacy: does formal teacher competence matter?

Stefan Johansson (a)*, Rolf Strietholt (b), Monica Rosén (a) and Eva Myrberg (a)

(a) Department of Education and Special Education, University of Gothenburg, Gothenburg, Sweden; (b) Technische Universität Dortmund – Institute for School Development Research (IFS), Dortmund, Germany

(Received 29 October 2012; final version received 14 April 2013)

The primary aim of this study was to investigate whether and, if so, how formal teacher competence relates to the relationship between pupil reading achievement and teachers’ judgements of pupils’ reading achievement. The data come from the Swedish participation in the Progress in International Reading Literacy Study (PIRLS) 2001 in Grade 3. Information was obtained from pupils (N = 5271) and teachers (N = 351). Analyses were conducted using multilevel structural equation modeling (SEM) with random slopes. Teacher competence was operationalized using multiple observed indicators and defined as a latent variable. A higher correspondence between teacher judgements and pupil reading achievement was found for teachers with higher competence. The results highlight the importance of teacher competence in assessment practice.

Keywords: teacher judgement; teacher competence; reading literacy; PIRLS 2001; multilevel structural equation modeling

Introduction

The primary aim of the current study is to investigate whether and, if so, how teachers’ formal competence relates to their judgements of pupils’ achievement. More specifically, the study investigates the association between teacher competence and the relationship between teacher judgements and Swedish Grade 3 pupils’ reading test results in a secondary analysis of data from the International Association for the Evaluation of Educational Achievement (IEA) Progress in International Reading Literacy Study (PIRLS) 2001 (Rosén, Myrberg, & Gustafsson, 2005). In the study, teacher competence was operationalized by different indicators of teachers’ education and experience.

In addition to the importance of in-depth subject and pedagogical knowledge, competence in assessing learning outcomes is widely regarded as one of the most important aspects of effective teaching, and crucial for individualized pupil learning processes (Bolhuis, 2003; Hattie, 2009). The correct identification of pupils’ knowledge levels makes it possible to tailor teaching to the needs of each individual pupil and to give adequate feedback (Black & Wiliam, 1998; Hattie & Timperley, 2007).

Teacher judgements and pupils’ test results

The relationship between teacher judgements and pupils’ achievement has been investigated in a number of studies.

*Corresponding author. Email: [email protected]

School Effectiveness and School Improvement, 2014, Vol. 25, No. 3, 394–407, http://dx.doi.org/10.1080/09243453.2013.809774

© 2013 Taylor & Francis


In a meta-analysis of 75 studies, Südkamp, Kaiser, and Möller (2012) found that the average correlation between teachers’ judgements and standardized measures was .63. In their review of the literature on the relationship between teachers’ judgements and standardized measures, Hoge and Coladarci (1989) showed that the median correlation was .66. The main conclusion of these studies is thus that teacher judgements are trustworthy. However, some variability between teachers’ judgements has been observed. Harlen (2005), who carried out an in-depth review of 30 studies, some of them specifically addressing the correspondence between teachers’ summative assessments and pupils’ results on standardized tasks, found that teachers’ judgements were sometimes biased and not completely reliable. However, one reason for this may be the limited reliability of the test scores with which the teachers’ judgements were compared (Harlen, 2005).

In a US study of kindergarten to Grade 3, Meisels, Bickel, Nicholson, Yange, and Atkins-Burnett (2001) found a substantial relationship between teacher judgements of pupils’ reading and mathematics skills and pupils’ test achievement. Here, there were no significant differences across subject domains. Feinberg and Shapiro (2009), who examined the correspondence between teachers’ judgements of different aspects of pupils’ reading, concluded that teachers were good at predicting the relative rank of individual pupils. However, the results indicated that teachers were less able to determine how pupils would perform against an external criterion. In their analysis of PIRLS 2001 data using two-level structural equation modeling (SEM), Johansson, Myrberg, and Rosén (2012) found teachers’ judgements to vary from one classroom to another even though pupils’ achievement did not. However, within classrooms, pupils’ reading achievement – measured in Grade 3 – accounted for about 45% of the variance in teachers’ judgements. The results indicate that the teachers’ frames of reference for achievement levels were limited to pupils’ achievement in their own classroom, and that there was a lack of a common frame of reference between teachers across different classrooms.

Although rarely studied in previous research, the relationship between teacher judgements and pupils’ achievement is likely to be moderated by teacher competence. For example, a relevantly educated teacher with more experience ought to be better at identifying pupils’ knowledge levels. In their analyses of the relationship between teachers’ judgements of pupils’ achievement and standardized test scores, Martínez, Stecher, and Borko (2009) included several teacher characteristics. Based on data from the Early Childhood Longitudinal Survey in the third and fifth grade and using multilevel modeling analyses, they concluded that teachers’ judgements varied significantly across classrooms and that pupils’ achievement on tests could not explain much of the variance in the judgements. Analyses with random slopes (allowing comparisons of classrooms with different relationships between achievement levels and teacher judgements) revealed that some teachers assessed pupils’ achievement in ways that more closely corresponded with standardized test scores than others. However, with the use of manifest variables of teacher competence, such as teachers’ level of education or the possession of an elementary mathematics certificate, no significant effect on the agreement between teachers’ judgements and pupils’ test scores was found.

Measuring teacher competence and its effects

Teacher competence is a complex ability developed and shaped over time, with teacher training preceding teaching practice. Strong evidence supports the view that a variety of teacher competence factors contribute to the effects teachers have on pupil learning (Hattie, 2009). Nye, Konstantopoulos, and Hedges (2004) showed that differences in pupil achievement


can be attributed to teacher competence. Teacher competence explained up to 21% of the variance in pupil achievement. This result indicates that there are substantial differences in teacher competence.

In recent years, different models of teacher professional development have been developed, and several approaches have been used to measure teacher competence (e.g., Avalos, 2011; Baumert et al., 2010; Darling-Hammond & Bransford, 2005; Darling-Hammond & Youngs, 2002; Sternberg & Grigorenko, 2003). Several different conceptualizations of teacher competence have guided previous research. These include, for example, formal competence (relevant qualification), competence as evaluated by pupils, parents, or principals, and achievement-related competence as measured by pupils’ test results or grades.

One of the advocates for the achievement-related competence construct is Hanushek (2003), who argued that teacher competence is a personal trait, not easily measured by observable variables such as teaching credits or qualifications. Nor did Goldhaber and Brewer (1997, 2000) find any strong effects of teacher training. Instead, they suggested that teachers with short training are as efficient as teachers with standard training. However, in an overview, Darling-Hammond (2000) concluded that teaching certification is an important aspect of teacher competence. In fact, a substantial body of research has succeeded in showing effects of formal teacher qualifications on student achievement (e.g., Darling-Hammond & Bransford, 2005; Darling-Hammond & Youngs, 2002; Wayne & Youngs, 2003). Darling-Hammond (2000) found that teachers with full certification and a major in the field they were teaching were more efficient than uncertified teachers or certified teachers without a major. These results held particularly true for reading.

One reason for the contradictory results in research on teacher competence may be confusion between these different conceptualizations of teacher competence.

Other aspects of the formal teacher competence construct are content knowledge (CK) and pedagogical content knowledge (PCK). Baumert et al. (2010) separated these dimensions and showed that both were indeed important but that the effect of teachers’ PCK on student achievement was the strongest. In connection with this area of research, Swedish teachers’ CK and PCK are worth discussing because, in recent decades, teacher education in Sweden has undergone several reforms (Lindblad, Lundahl, Lindgren, & Zackari, 2002; Lindensjö & Lundgren, 2002). This implies that Swedish teachers differ in terms of both CK and PCK, due to their education. In a recent study, Alatalo (2011) examined teachers’ content knowledge of Swedish language structures and basic spelling rules in an actual test situation. In a sample of about 300 primary-school teachers with substantial variation in education and years of teaching experience, the teachers who were awarded their certification before 1988 achieved the best test results. Similarly, Frank’s (2009) findings indicate that teachers educated before 1988 received more training in basic and remedial reading than those with more recent teaching degrees. A conclusion from both these studies is that teachers educated more than two decades ago have a more appropriate education for teaching reading to younger pupils (Alatalo, 2011; Frank, 2009). In line with the study by Baumert et al. (2010), these studies point to the importance of both CK and PCK.

Teaching experience is also part of teachers’ formal competence and is known to be an important factor for pupil achievement. Darling-Hammond (2000) and Rivkin, Hanushek, and Kain (2005) have found that teachers with less than 1 to 2 years of experience are less efficient than their colleagues with 3 or more years of experience. However, the effect of experience seems to weaken after about 5 years. One previous study with the Swedish PIRLS 2001 data found no significant effects on reading achievement of the variable


“teaching experience” as measured by number of years of teaching experience (Myrberg, 2007). In that study, the variation in teacher experience was low, which may explain the nonsignificant findings.

In summary, although previous research has come to inconsistent conclusions, more recent research has convincingly demonstrated that effects of formal competence, such as teacher education and experience, can be estimated. In the current study, indicators of teachers’ formal competence were used to measure teacher competence.

The current study

Although it can be shown that teacher competence can influence pupil achievement, it is not clear how competence relates to teacher judgement. While some evidence indicates that teacher judgement tends to be a valid measure of pupil achievement, it remains unclear how different teacher characteristics mediate this relationship. It is also important to consider that teachers’ ability to judge pupils’ academic abilities correctly is part of teachers’ competence itself. Since the present study includes a sample of teachers with several different types of teacher training, it is possible to investigate whether teachers holding the most relevant formal competence assess their pupils’ reading achievement more accurately. Against this background, this article investigates whether formal teacher competence influences the accuracy of teacher judgement.

Data and method

The Progress in International Reading Literacy Study (PIRLS) is an assessment of fourth-grade pupils’ reading literacy carried out regularly by the International Association for the Evaluation of Educational Achievement (IEA). The international design of the study is described in the PIRLS 2001 framework (Campbell, Kelly, Mullis, Martin, & Sainsbury, 2001), as well as in the technical report (Martin, Mullis, & Kennedy, 2003). In 2001, 35 countries participated in the survey. The database contains information provided by pupils, their parents, their teachers, and their school principals. Sweden participated with two samples: one from Grade 3 and one from Grade 4. In the current study, the data come from the Grade 3 cohort (Rosén et al., 2005). One reason for selecting data from Grade 3 was that 70% of these pupils had been taught by the same teacher throughout their first 3 school years. This was not the case for most fourth-grade pupils, whose teachers had only known them for a few months, because pupils typically change teacher after the first 3 years. Thus, the Grade 3 sample was suitable for studying the effects of the teachers and their teaching on the pupils’ achievement. The sample used in the current study included 351 teachers and 5271 pupils. Stratified cluster sampling was used to obtain this nationally representative sample of third graders (Rosén et al., 2005).

Variables

Variables for pupil achievement, teacher judgements of pupils’ reading skills, teacher education, and information about the school were selected. Because the data encompass multiple indicators for teacher judgements and teacher education, it is possible to model these constructs as latent variables.


Teacher judgements

The Swedish database from PIRLS 2001 contains variables additional to those included in the international design, among them a number of items concerning various reading and writing skills as well as listening and comprehension skills, on which the teachers rated each pupil on a scale ranging from 1 to 10 (Rosén et al., 2005). The items were based on a national diagnostic observation scheme for Grades 2 to 5 provided by the Swedish National Agency for Education (2002). Table 1 provides a description of the statements used in the current study. Note that although in Sweden teachers do not grade pupils’ achievement with school marks in Grade 3, they nevertheless make summative judgements via written comments to inform every pupil and his/her parents.

The 12 items used in the analyses cover reading and writing skills. The purpose was to capture the teachers’ judgements of the reading domain. Furthermore, the statements address important aspects of reading literacy. To a certain degree, the PIRLS test also involves writing in that approximately one third of the items are open-ended response questions. The Swedish National Agency for Education (2006) found the PIRLS 2001 test to be well aligned with the overall aims of the Swedish language subject, and thus the judgement variables and the test should measure the same or a similar construct. As can be seen in Table 1, most item means are well above the midpoint of the scale, indicating that teachers consider pupils’ reading and writing skills to be fairly good.

An item-parceling approach was used when Teacher Judgements was modeled. Parceling means averaging or summing raw items and using the resulting score as a factor indicator (Sterba & MacCallum, 2010). Parceling has both pros and cons. Bandalos (2008) suggested caution in the use of parceling when the sets of indicators being parceled are not strictly unidimensional; in that case, parceling can result in high levels of bias in estimates of structural parameters as well as high Type II error rates. However, parceling may be adopted when items are rather unidimensional and when the researcher is interested in a global construct – as opposed to the properties of single items (Little, Cunningham, Shahar, & Widaman, 2002). Further, item-level data are often associated with lower reliability and lower communality (Hau & Marsh, 2004), so the measurement properties of parcels may be better.

Table 1. Descriptive statistics for the 12 items of the teacher assessment scale.

Variable  Statement (Pupil can…)                               N     Mean  SD
01        Construct sentences correctly                        5208  7.67  2.16
02        Recognize frequently used words in an unknown text   5213  8.35  1.93
03        Connect a told story with an experience              5162  8.26  1.85
04        Use the context to understand a written text         5207  8.05  2.05
05        Write a text continuously fluently                   5209  7.84  2.18
06        Understand the meaning of a text when reading        5124  8.30  2.00
07        Recognize the letter/connect sound                   5136  9.48  1.27
08        Read unknown words                                   5133  8.11  2.03
09        Reflect on a written story                           5083  8.09  1.90
10        Read fluently                                        5135  8.32  2.10
11        Improve own written text                             5072  7.11  2.24
12        Use a reasonably large vocabulary                    5132  8.30  1.89


Johansson et al. (2012) used the same 12 teacher rating items that are used in the current study as indicators of a latent teacher judgement variable. The model indicated unidimensionality, but the fit obtained for the measurement model was too poor, due to low variance and skewed distributions. When the items were instead parceled into four indicator variables, a much better fit was obtained. Thus, the same item-parceling approach has been used here.
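
As a concrete illustration of this parceling step, the sketch below averages 12 item columns into four three-item parcels using simulated data. The column names and the consecutive item-to-parcel allocation are assumptions made for illustration only; the study's own allocation is not reported here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2001)

# Simulated stand-in for the 12 teacher-rating items (1-10 scale), one row per
# pupil. A real analysis would read these columns from the Swedish PIRLS 2001
# file; the column names used here are hypothetical.
item_cols = [f"judge{i:02d}" for i in range(1, 13)]
items = pd.DataFrame(rng.integers(1, 11, size=(500, 12)), columns=item_cols)

# Average the items into four three-item parcels; the parcel means then serve
# as indicators of the latent teacher-judgement factor. Consecutive grouping
# is used purely for illustration.
parcels = pd.DataFrame({
    f"PARCEL{p + 1}": items[item_cols[3 * p:3 * p + 3]].mean(axis=1)
    for p in range(4)
})
print(parcels.describe().round(2))
```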

Teacher competence

To define the latent variable Teacher Competence (TchComp), four variables were used. The first was the teacher education variable Adequate Education (Adeq_Ed), indicating the type of teacher education. As Swedish teacher education has been reformed several times, the teachers in the present study had somewhat different educations. For example, the various teacher training programs focused on different grades (1–3, 1–7, 4–9), which probably meant that the subject content differed to some extent. The emphasis on teaching early reading also differed among the teachers in the sample. Almost all teachers held a relevant education; within this group, some teachers had a more adequate education than others. The variable was coded from 1 to 6, where 6 denoted the most relevant teacher education available for teaching young children to read. The second variable included in the teacher competence construct is teaching experience (Tch_Exp), a continuous variable; here, teachers were asked to indicate the total number of years taught in Grade 3, the range being from 1 to 37 years. The third indicator, Tch_Cert, is a dichotomous variable indicating teaching certification (1 = No, 2 = Yes). It is worth noting that most teachers in the study were certified (M = 1.94), which can be seen in Table 2, where the variables are described in more detail. The final variable, Read_Ped, denotes whether reading pedagogy had been a central component of teachers’ education; it ranges from 1 to 3, where 3 indicates a substantial educational focus.

Figure 1 provides the estimates from a two-level confirmatory factor analysis with the latent variable Teacher judgementW at the individual level along with the two latent variables Teacher judgementB and Teacher competence at the class level.

The measurement model for the latent variables fitted the data very well. The factor loadings for the judgement variables are very high and even. The indicators of teacher competence obtain slightly lower but still substantial loadings.

Table 2. The manifest variables used in the current study and descriptive statistics.

Variable    Label                                                                                             N     M      SD
Test score  Pupils’ total reading score at the PIRLS 2001 test                                                5271  523.6  72.61
Adeq_Ed     Type of formal training (1–6)                                                                     331   4.36   1.166
Tch_Exp     How many years of teaching experience have you got in Grade 3? (1–37)                             334   6.90   5.98
Tch_Cert    Do you have a teaching certificate? (1–2) (No–Yes)                                                340   1.94   0.23
Read_Ped    To what extent did you study reading pedagogy in your formal training? (1–3) (Low–Medium–High)    332   2.61   0.60

Note: Adeq_Ed = Adequate education, Tch_Exp = Teaching experience, Tch_Cert = Teaching certificate, Read_Ped = Reading pedagogy.


Test scores

A total reading score on the PIRLS reading test was used as an indicator of pupils’ reading achievement. The average score in 2001 for the Swedish Grade 3 pupils was 523 points (SD = 73 points), which is significantly higher than the international mean of 500 points (Mullis, Martin, Gonzalez, & Kennedy, 2003). We calculated the average class achievement (AvgTest) as a measure of the performance level of each classroom.

The proportion of missing data amounts to a few percent for most variables. However, in order to take advantage of all available information, that is, not to remove cases with incomplete data, the full information maximum likelihood (FIML) estimator in Mplus was used (L. K. Muthén & Muthén, 1997–2011).

Methods of analysis

A common sampling strategy when collecting educational data is to sample individuals within hierarchical structures (e.g., classrooms and schools). Due to the dependencies between individuals belonging to the same class or cluster, and since the individual level influences the classroom level and vice versa, suitable methods need to be employed (Creemers & Kyriakides, 2006).

[Figure 1 about here.]

Figure 1. Two-level measurement model of teacher judgement and teacher competence. At the within level, the four item parcels (PARCEL1–PARCEL4) load on Teacher judgementW; at the between level, the parcels load on Teacher judgementB, and Adeq_Ed, Tch_Exp, Tch_Cert, and Read_Ped load on Teacher competence. Standardized loadings for the parcels range from .93 to .97 (within) and from .92 to .96 (between); loadings for the competence indicators range from .43 to .67.
Note: N = 5271. All presented factor loadings are significant at p < .001. Model fit: χ² = 206.14, df = 4, RMSEA = .04, CFI = .98, SRMRw = .01, SRMRb = .09.


If the assumption of independence is violated, incorrect standard errors will be produced, thus resulting in spuriously significant findings (Hox, 2002). In order to account for the hierarchical structure in the data, the variance can be decomposed into within- and between-class components. Multilevel modeling software such as Mplus allows decomposing this variance. In the current analyses, Mplus 6.1 (L. K. Muthén & Muthén, 1997–2011) was used under the STREAMS (Structural Equation Modeling Made Simple; Gustafsson & Stahl, 2005) environment. STREAMS is user-friendly software that provides pre- and post-processors for the Mplus program, thus making it easy to set up, estimate, and interpret SEM models. The intra-class coefficient provides information about the proportion of variance that is attributed to the between level. A rule of thumb is that if this share is 5% or higher, two-level modeling may be warranted (B. O. Muthén, 1994). If the intra-class coefficient of a variable is zero, the variable can be studied at the individual level only. One such variable may be gender, assuming that approximately the same number of boys and girls is present in each of the clusters/classes.
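
As a small illustration of this rule of thumb, the intra-class coefficient can be computed directly from the two variance components. The values below are purely illustrative, not estimates from the PIRLS data.

```python
def intraclass_correlation(between_var: float, within_var: float) -> float:
    """Proportion of the total variance that lies between classrooms."""
    return between_var / (between_var + within_var)

# Illustrative variance components (not PIRLS estimates): an intra-class
# coefficient of roughly .05 or more is the rule of thumb for preferring a
# two-level model (Muthen, 1994).
print(round(intraclass_correlation(between_var=8.0, within_var=72.0), 2))  # 0.1
```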

In two-level modeling, covariates that are specified as between-level variables (such as teaching experience) are usually related to between-level outcome variables. However, in order to examine whether a within-class relation between two variables varies across classrooms, multilevel models with random slopes can be used (Brown, 2006; Hox, 2002). The within-level relationship between teachers’ judgements and pupils’ reading achievement is assumed to vary; thus, the random slope becomes a new variable with a mean and a variance. In this way, it is possible to relate variables at the teacher level to the slope and thereby investigate effects of the teacher on relationships within classrooms (see also Hox, 2002). Multilevel models using random slopes are often presented with equations to communicate the exact specifications of the models in a comprehensible way (McCoach, 2010).

When mediating variables are introduced, the within relationship, as expressed by the slope coefficient, is a dependent variable at the between level. For example, if the slope is the relationship between teachers’ judgements and pupils’ achievement, the explanatory variable may be the amount of teaching experience. If teaching experience is positively related to the slope, the pupils’ achievement effect on the judgements will be larger for the more experienced teachers. This implies that, for more experienced teachers, there will be a higher agreement between their judgements and pupils’ achievement. Conversely, if the effect of experience is negative, the achievement effect on judgement will be smaller for the more experienced teachers.

Analytical stages

A series of two-level structural equation models was estimated to investigate, first, the relation between pupils’ achievement and teacher judgement and, second, the role of teacher competence for this relation. The first step of the analyses was to estimate the within-class variance of the latent variable JudgementW. The first factor loading was fixed to 1 in order to identify the model and obtain an intercept equal to the mean of the first item parcel. As the current study focuses on the relation between judgements and test scores at an individual level, the latent variable teacher judgement is modeled only at the individual level. To ascertain whether the test scores explain variance in teacher judgements (JudgementW), the teacher judgement of pupil i in class c was regressed on his or her PIRLS test score (Test). In Equation (1), the parameter β1c is an estimate of the correspondence between pupils’ achievement and teacher judgement, β0c is the intercept, and eic is the residual or error term:


JudgementWic = β0c + β1cTestic + eic    (1)

To investigate whether the parameters β1c vary between classrooms or teachers, we added a random part to the previous model by setting up Equation (2) on the classroom level:

β1c = γ10 + u1    (2)

Basically, the equation introduces a variance component at the classroom level to the previous equation in that u1 is a classroom-specific error term that can have variance. The use of a random slope hinges on this variance differing from zero in a statistically significant manner, because otherwise explanatory between-level variables – that is, teacher competence – cannot explain any variation in the way teachers assess their pupils’ knowledge and skills. In the final step of the analyses, we introduced an explanatory variable to Equation (2) and regressed the random slope β1c on TchComp in class c:

β1c = γ10 + γ11TchCompc + u1    (3)

Thus, in Equation (1) β1c is replaced by the terms on the right-hand side of Equation (3):

JudgementWic = β0c + γ10Testic + γ11Testic × TchCompc + u1 × Testic + eic    (4)

The parameter of interest is γ11, which is an estimate of the interaction effect between the test score and teacher competence. In other words, we examine whether differences in the correspondence between judgements and test results are related to the latent “teacher competence” variable.
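
The latent-variable random-slope model was estimated in Mplus, but the logic of Equations (1) to (4) can be sketched outside Mplus with manifest scores in a conventional two-level regression. The Python/statsmodels sketch below is only such an analogue, run on simulated data; every variable name (judgement, test, tchcomp, class_id) and every simulated effect size is an assumption, not the study's data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated pupil-level data standing in for the PIRLS file; all names here
# are assumptions, not the study's variables.
n_classes, n_pupils = 60, 15
tchcomp_c = rng.normal(0.0, 1.0, n_classes)                          # class-level teacher competence
slope_c = 0.6 + 0.1 * tchcomp_c + rng.normal(0.0, 0.15, n_classes)   # beta_1c as in Equation (3)

class_id = np.repeat(np.arange(n_classes), n_pupils)
test = rng.normal(0.0, 1.0, n_classes * n_pupils)                    # centred pupil reading score
judgement = 5.0 + slope_c[class_id] * test + rng.normal(0.0, 1.0, n_classes * n_pupils)

df = pd.DataFrame({"class_id": class_id, "test": test,
                   "tchcomp": tchcomp_c[class_id], "judgement": judgement})

# Fixed part mirrors Equation (4): gamma_10 * Test + gamma_11 * (Test x TchComp);
# re_formula="~test" gives each classroom a random intercept and a random slope
# for test, so the judgement-test slope may differ from teacher to teacher.
model = smf.mixedlm("judgement ~ test + test:tchcomp",
                    data=df, groups=df["class_id"], re_formula="~test")
result = model.fit(reml=True)
print(result.summary())
```

The coefficient on test:tchcomp plays the role of γ11: a positive estimate means the judgement-test slope is steeper in classes with more competent teachers.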

Results

The estimate of the variance of the latent variable JudgementW provides a baseline for the further analyses (σ²e = 28.502, see Table 3, Column 1). In order to discover the within-classroom relationship between judgements and test scores and whether it varies across classrooms, we regressed JudgementW on pupils’ test achievement (Test) and modeled this parameter as a random slope. The results from this analysis are presented in Column 2. The parameter for Test is 4.451 (p < .01), meaning that pupils who receive high judgements from their teacher perform better on the achievement tests. The residual variance decreases to σ²e = 13.971, which means that the variable Test explains roughly 50% of the variance in the teacher judgements. The estimate of the variance of this random slope parameter is σ²u1 = 0.332, and its difference from zero is statistically significant (p < .01), indicating that the agreement between test scores and teacher judgements is not the same across all classrooms.
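
As a quick check on the arithmetic behind this figure, the share of within-class judgement variance explained by Test follows directly from the two residual variances reported in Table 3:

```python
# Residual variances of JudgementW taken from Table 3, Columns 1 and 2.
baseline_var = 28.502   # Model 1: no predictors
residual_var = 13.971   # Model 2: Test added as predictor

explained = 1 - residual_var / baseline_var
print(f"Share of within-class variance explained by Test: {explained:.2f}")  # 0.51
```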

In Model 3, we introduced another explanatory variable to the previous model, regressing the random slope on teacher competence. In other words, this model tests whether the variable TchComp influences the strength of the relationship between Test and JudgementW (see Equation (4)). The model parameter for this interaction is Test*TchComp, and the analyses reveal a moderate but nevertheless statistically significant positive effect of 0.081 (p < .05). In other words, the correspondence between teachers’ judgements and pupils’ achievement increases for teachers with higher competence levels. The estimates of the analyses are displayed in Column 3.


Since the study employs observational data, the possibility of confounding variables needs to be considered. More specifically, highly competent teachers might select themselves into privileged classes with high-performing pupils. We thus added the average achievement in the classes as a control variable to the previous model and regressed the random slope not only on TchComp but also on the average test score in class c (AvgTest). This further analysis does not challenge the results we presented in the previous section. Indeed, the parameter for Test*TchComp even increases slightly to 0.103 and remains statistically significant (p < .01) (see Column 4).

What is interesting is that the parameter for the interaction effect Test*AvgTest is negative (–0.407, p < .01), which indicates that the correspondence between teacher judgements and test scores is lower in high-performing classes. Although the analysis of this relation is beyond the scope of the current study, it is undeniably an interesting result. One possible explanation may be a ceiling effect on the judgement scale. All in all, teachers rate their pupils fairly highly on the 12 judgement items, so the response scale may distinguish slightly better between pupils in less well-performing classes. We tested this hypothesis by removing the 5% worst-performing classes from our sample and re-estimating the model. The results from this analysis support our hypothesis in that the parameter for Test*TchComp (0.107, p < .01) remained largely unchanged, while the parameter for Test*AvgTest was no longer significant (not in the table).
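
Continuing the manifest-score sketch from the Analytical stages section, a rough analogue of this robustness check could look as follows. The avgtest column (class mean of test), the trimming rule, and the reuse of the simulated frame df are assumptions for illustration, not the study's procedure in Mplus.

```python
import statsmodels.formula.api as smf  # df: the simulated frame from the earlier sketch

# Class-mean achievement as the additional moderator (Test*AvgTest in Table 3).
df["avgtest"] = df.groupby("class_id")["test"].transform("mean")

# Drop the 5% of classes with the lowest class-mean test score and refit.
class_means = df.groupby("class_id")["test"].mean()
kept = class_means[class_means > class_means.quantile(0.05)].index
trimmed = df[df["class_id"].isin(kept)]

refit = smf.mixedlm("judgement ~ test + test:tchcomp + test:avgtest",
                    data=trimmed, groups=trimmed["class_id"],
                    re_formula="~test").fit(reml=True)
print(refit.params.filter(like="test:"))
```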

Discussion and conclusions

The main purpose of the current study was to investigate the degree to which formal teacher competence was related to teacher judgement accuracy. Multilevel SEM, including random slopes, was used to investigate influences from the between-classroom level on within-classroom relationships, an approach that, so far, is relatively uncommon in educational research (Hox, 2002).

It has previously been difficult to obtain empirical evidence that teachers’ characteristics are associated with the accuracy of their judgements. Studies on this topic are few, and results tend to be nonsignificant. Martínez et al. (2009), for example, related indicators of formal teacher competence to the relationship between teacher judgement and pupil achievement with no statistically significant findings.

Table 3. Multilevel models of factors that influence teacher judgement.

Model no.       (1)               (2)               (3)               (4)
Intercept       23.644** (0.166)  0.363 (0.635)     0.389 (0.635)     0.186 (0.647)
Test            –                 4.451** (0.111)   4.445** (0.111)   6.608** (0.617)
Test*TchComp    –                 –                 0.081* (0.038)    0.103** (0.038)
Test*AvgTest    –                 –                 –                 –0.407** (0.112)
σ²e             28.502** (1.254)  13.971** (0.584)  13.973** (0.585)  13.967** (0.584)
σ²u1            –                 0.332** (0.070)   0.326** (0.069)   0.310** (0.068)

Note: Test = Pupil achievement on PIRLS, AvgTest = Classroom average achievement on PIRLS, JudgementW = Teacher judgement within classrooms, TchComp = Formal teacher competence, σ²e = Residual variance, σ²u1 = Slope variance. Estimates are presented as unstandardized coefficients with standard errors in parentheses. Significance levels: **1%, *5%.


In contrast to many other studies on the effect of formal teacher competence (e.g., Darling-Hammond, 2000; Darling-Hammond & Youngs, 2002; Wayne & Youngs, 2003), the present study used a latent variable approach, where the errors of measurement are controlled for, to define formal teacher competence. The findings indicated, furthermore, that this variable – formal teacher competence – was positively related to the correspondence between teacher judgements and pupils’ test results. This result indicates that competence is associated with the validity of teacher judgements. When teachers accurately judge their pupils’ achievements, they can adapt their teaching to the knowledge levels of individual pupils (Helmke & Schrader, 1987). Moreover, the accurate identification of pupils’ needs is crucial for effective feedback (Hattie, 2009). If teachers judge two pupils with similar knowledge levels differently, pupils’ learning conditions can be affected in the sense that some may receive better adjusted feedback than others. Furthermore, in Sweden, every pupil is expected to receive equal educational quality, and if teachers have different frames of reference for judging pupils’ performance, equity and equality can be jeopardized.

In the current study, pupils’ test results explain about 50% of the variance in teachers’ judgements at the within-classroom level, a finding that matches the estimates from similar studies conducted in the US and Australia (Hoge & Coladarci, 1989; Kenny & Chekaluk, 1993; Meisels et al., 2001). Researchers who have found correlations of a similar magnitude consider that the validity of teachers’ own judgements is high and that teachers’ judgements are trustworthy measures of pupils’ achievement. The results of the current study confirm that the distribution of teachers’ judgements was well aligned with the class rank. It should, however, be noted that about 50% of the variance in teacher judgements within classes is still unaccounted for. Although the unexplained share of variance is large, it would be unrealistic to expect that the total variance in teachers’ judgements could be explained by a single reading test. Thus, it should rather be seen as a sign that the school subject “Swedish” is broader and more complex than the skills measured by the PIRLS test.

Another explanation may be that some teachers, especially those of low-performing pupils, give higher judgements than the pupils’ achievement might actually justify. Large variation was found in the judgement data for low-performing classes, indicating that some pupils received high judgements (e.g., 8–10) even though their performances were equal to those of pupils rated lower (e.g., 5–7) in higher performing classrooms. One possible limitation of the current study is that the points of the 10-point scale were not attached to explicit criteria. Even though the statements were linked to Swedish syllabi, teachers did not know how teachers in other classrooms rated their pupils. Consequently, teachers in different classrooms could have interpreted the scale points differently, using their own pupils’ performances as a reference for their judgements. On the other hand, this lack of well-defined reference points for judging the level of different aspects of a skill is inherent to the task: teachers very seldom have commonly agreed and well-defined reference points when they make their judgements.

Another result of the present study was that, for teachers in the low-performing classes, the correspondence between their judgements and pupils’ reading achievement was higher. As mentioned in the results section, one possible explanation is related to the judgement scale used. By having a larger dispersion in both their judgements and their pupils’ results, teachers in low-performing classes had different conditions for rank-ordering pupils’ achievement. The low variation in some of the items in the high-performing classes appears quite reasonable, given that most pupils would receive a 10 on the statement “the pupil can read fluently” and on other similar statements aimed at the more elementary levels of reading achievement.


Some researchers have suggested that teachers’ moderation practices with respect to assessment may have a positive influence on teacher assessment competence in that they could improve the consistency between teachers’ judgements (Harlen, 2005; Maxwell, 2004). It would, therefore, be of great interest to examine how teachers’ moderation practices relate to the consistency of teachers’ judgements in a large-scale study. One challenge in any future research would be how to operationalize teacher collaboration practices and how to separate this potential effect from other influences. Further multilevel multivariate research is, therefore, needed to shed more light on this issue.

Finally, the implications of the results in the present study are of importance for policy makers interested in facilitating learning and equality in education. Results that show the importance of teacher competence for accurate and valid judgements ought to inspire further in-depth studies focusing on how, in practice, teacher competence relates to judgements.

Notes on contributors

Stefan Johansson is a researcher and lecturer at the Department of Education and Special Education at the University of Gothenburg, Sweden. In his dissertation project, he used large-scale data to investigate different aspects of teacher competence and its effects on pupils’ reading literacy.

Rolf Strietholt is a PhD student in educational science at the Institute for School Development Research at Technische Universität Dortmund. His research focuses on educational capabilities, international comparative studies, and methodology issues involved in measuring trends in pupil achievement.

Monica Rosén is Professor of Education at the University of Gothenburg, Sweden. Since 1990, she has worked on large-scale national and international assessments of schools in a range of research projects. Her research has mainly focused on studies of reading literacy, international comparisons and change across time, issues of educational policy and equity, and on methodological issues involved in the analysis of educational assessment data.

Eva Myrberg is a senior lecturer and researcher at the Department of Education and Special Education at the University of Gothenburg, Sweden. She has studied the effects of educational resources on educational achievement and has a special interest in the influence of different aspects of teacher competence.

References

Alatalo, T. (2011). Skicklig läs- och skrivundervisning i åk 1–3. Om lärares möjligheter och hinder [Proficient teaching of reading and writing in Grades 1–3. About teachers’ opportunities and barriers] (Doctoral dissertation, University of Gothenburg, Göteborg, Sweden). Retrieved from http://hdl.handle.net/2077/25658
Avalos, B. (2011). Teacher professional development in Teaching and Teacher Education over ten years. Teaching and Teacher Education, 27, 10–20.
Bandalos, D. L. (2008). Is parceling really necessary? A comparison of results from item parceling and categorical variable methodology. Structural Equation Modeling, 15, 211–240.
Baumert, J., Kunter, M., Blum, W., Brunner, M., Voss, T., Jordan, A., … Tsai, Y.-M. (2010). Teachers’ mathematical knowledge, cognitive activation in the classroom, and pupil progress. American Educational Research Journal, 47, 133–180. doi:10.3102/0002831209345157
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5, 7–73.
Bolhuis, S. (2003). Towards process-oriented teaching for self-directed lifelong learning: A multidimensional perspective. Learning and Instruction, 13, 327–347.


Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: The Guilford Press.
Campbell, J. R., Kelly, D. L., Mullis, I. V. S., Martin, M. O., & Sainsbury, M. (2001). Framework and specifications for PIRLS assessment 2001 (2nd ed.). Chestnut Hill, MA: Boston College.
Creemers, B. P. M., & Kyriakides, L. (2006). Critical analysis of the current approaches to modelling educational effectiveness: The importance of establishing a dynamic model. School Effectiveness and School Improvement, 17, 347–366. doi:10.1080/09243450600697242
Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1), 1–44.
Darling-Hammond, L., & Bransford, J. (2005). Preparing teachers for a changing world: What teachers should learn and be able to do. San Francisco, CA: Jossey-Bass.
Darling-Hammond, L., & Youngs, P. (2002). Defining “highly qualified teachers”: What does “scientifically-based research” actually tell us? Educational Researcher, 31(9), 13–25.
Feinberg, A. B., & Shapiro, E. S. (2009). Teacher accuracy: An examination of teacher-based judgments of students’ reading with differing achievement levels. The Journal of Educational Research, 102, 453–462.
Frank, E. (2009). Läsförmågan bland 9–10-åringar. Betydelsen av skolklimat, hem- och skolsamverkan, lärarkompetens och elevers hembakgrund [Reading skills among 9–10-year-olds. The importance of school climate, collaboration between school and home, teacher competence and pupils’ home background] (Doctoral thesis, University of Gothenburg, Göteborg, Sweden). Retrieved from http://hdl.handle.net/2077/20083
Goldhaber, D. D., & Brewer, D. J. (1997). Why don’t schools and teachers seem to matter? Assessing the impact of unobservables on educational productivity. The Journal of Human Resources, 32, 505–523.
Goldhaber, D. D., & Brewer, D. J. (2000). Does teacher certification matter? High school teacher certification status and pupil achievement. Educational Evaluation and Policy Analysis, 22, 129–145.
Gustafsson, J.-E., & Stahl, P. A. (2005). STREAMS user’s guide, Version 3.0 for Windows 95/98/NT. Mölndal, Sweden: MultivariateWare.
Hanushek, E. A. (2003). The failure of input-based schooling policies. The Economic Journal, 113, F64–F98.
Harlen, W. (2005). Trusting teachers’ judgement: Research evidence of the reliability and validity of teachers’ assessment used for summative purposes. Research Papers in Education, 20, 245–270.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.
Hau, K.-T., & Marsh, H. W. (2004). The use of item parcels in structural equation modelling: Non-normal data and small sample sizes. British Journal of Mathematical and Statistical Psychology, 57, 327–351. doi:10.1111/j.2044-8317.2004.tb00142.x
Helmke, A., & Schrader, F.-W. (1987). Interactional effects of instructional quality and teacher judgement accuracy on achievement. Teaching and Teacher Education, 3, 91–98.
Hoge, R. D., & Coladarci, T. (1989). Teacher-based judgments of academic achievement: A review of literature. Review of Educational Research, 59, 297–313.
Hox, J. (2002). Multilevel analysis – Techniques and applications. Mahwah, NJ: Lawrence Erlbaum Associates.
Johansson, S., Myrberg, E., & Rosén, M. (2012). Teachers and tests: Assessing pupils’ reading achievement in primary schools. Educational Research and Evaluation, 18, 693–711. doi:10.1080/13803611.2012.718491
Kenny, D. T., & Chekaluk, E. (1993). Early reading performance: A comparison of teacher-based and test-based assessments. Journal of Learning Disabilities, 26, 227–236.
Lindblad, S., Lundahl, L., Lindgren, J., & Zackari, G. (2002). Educating for the new Sweden? Scandinavian Journal of Educational Research, 46, 283–303.
Lindensjö, B., & Lundgren, U. P. (2002). Utbildningsreformer och politisk styrning [Educational reforms and political governance]. Stockholm, Sweden: HLS Förlag.
Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9, 151–173.


Martin, M. O., Mullis, I. V. S., & Kennedy, A. M. (2003). PIRLS 2001 technical report. Chestnut Hill, MA: Boston College.
Martínez, J. F., Stecher, B., & Borko, H. (2009). Classroom assessment practices, teacher judgments, and student achievement in mathematics: Evidence from the ECLS. Educational Assessment, 14, 78–102.
Maxwell, G. (2004, March). Progressive assessment for learning and certification: Some lessons from school-based assessment in Queensland. Paper presented at The Third Conference of the Association of Commonwealth Examination and Assessment Boards, Nadi, Fiji.
McCoach, D. B. (2010). Hierarchical linear modeling. In G. R. Hancock & R. O. Mueller (Eds.), The reviewer’s guide to quantitative methods in the social sciences (pp. 123–140). New York, NY: Routledge.
Meisels, S. J., Bickel, D. D., Nicholson, J., Yange, X., & Atkins-Burnett, S. (2001). Trusting teachers’ judgments: A validity study of a curriculum-embedded performance assessment in Kindergarten to Grade 3. American Educational Research Journal, 38, 73–95.
Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., & Kennedy, A. M. (2003). PIRLS 2001 international report: IEA’s Study of Reading Literacy Achievement in Primary Schools. Chestnut Hill, MA: Boston College.
Muthén, B. O. (1994). Multilevel covariance structure analysis. Sociological Methods & Research, 22, 376–398.
Muthén, L. K., & Muthén, B. O. (1997–2011). Mplus user’s guide. Los Angeles, CA: Authors.
Myrberg, E. (2007). The effect of formal teacher education on reading achievement of 3rd-grade pupils in public and independent schools in Sweden. Educational Studies, 33, 145–162.
Nye, B., Konstantopoulos, S., & Hedges, L. V. (2004). How large are teacher effects? Educational Evaluation and Policy Analysis, 26, 237–257.
Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73, 417–458.
Rosén, M., Myrberg, E., & Gustafsson, J.-E. (2005). Läskompetens i skolår 3 och 4. Nationell rapport från PIRLS 2001 i Sverige. The IEA Progress in International Reading Literacy Study [Reading literacy in Grades 3 and 4. National report from PIRLS 2001 in Sweden] (Report No. 236). Göteborg, Sweden: University of Gothenburg.
Sterba, S. K., & MacCallum, R. C. (2010). Variability in parameter estimates and model fit across repeated allocations of items to parcels. Multivariate Behavioral Research, 45, 322–358.
Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2003). The psychology of abilities, competencies, and expertise. New York, NY: Cambridge University Press.
Südkamp, A., Kaiser, J., & Möller, J. (2012). Accuracy of teachers’ judgments of students’ academic achievement: A meta-analysis. Journal of Educational Psychology, 104, 743–762.
The Swedish National Agency for Education. (2002). Språket lyfter! Diagnosmaterial i svenska och svenska som andra språk för åren före skolår 6 [Progression in language! Diagnostic material for Swedish and Swedish as second language for Grades 2–5]. Stockholm, Sweden: Author.
The Swedish National Agency for Education. (2006). Med fokus på läsförståelse. En analys av skillnader och likheter mellan internationella jämförande studier och nationella kursplaner [Reading literacy in focus. An analysis of differences and similarities between international comparative studies and national syllabuses]. Stockholm, Sweden: Author.
Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A review. Review of Educational Research, 73, 89–122.
