
Assessing Writing 13 (2008) 180–200


Harming not helping: The impact of a Canadian standardized writing assessment on curriculum and pedagogy

David H. Slomp
Faculty of Education, Department of Secondary Education, 341 Education South, University of Alberta, Edmonton, Alberta, T6G 2G5, Canada

Available online 26 November 2008

Abstract

Test-based accountability programs are designed to promote improved standards of teaching and learning within the systems of education to which they are connected. Brennan [Brennan, R. L. (2006). Perspectives on the evolution and future of educational measurement. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 1–16). Westport, CT: Praeger Publishers], however, suggests that little evidence exists to support the claim that these standardized assessment programs are achieving this goal. This study examines a Canadian high-stakes writing assessment’s effect on the teaching of writing in three grade 12 academic English classrooms. Analysis across cases revealed that the factors shaping the exam’s impact on teachers’ pedagogical choices include their attitude toward the exam, the pressure they felt from their school communities, and their years of experience. The study also found that the exam caused teachers to narrow their teaching of writing in relation to processes taught, assignment design, and evaluation criteria utilized. The study concludes that, in the cases observed, the exam is having a negative impact on the teaching of writing.

© 2008 Published by Elsevier Ltd.

Keywords: Writing assessment; Consequential validity evidence; Construct validity; Standardized assessment; Pedagogy; Case study

Over the past decade, an increased focus on accountability in education has led to the expansion and intensification of test-based accountability programs in Canada; currently, all provincial governments use achievement tests, graduation exams, or minimum competency tests for accountability purposes. Standardized literacy assessments are the cornerstone of these test-based accountability programs; 39 different standardized literacy tests are administered to Canadian K-12 students each year. A primary goal of these programs is to promote improvements in educational quality (Klinger, Deluca, & Miller, 2008). Brennan (2006), however, observes that limited evidence exists to support the claim that test-based accountability programs are helping to improve education. The study reported in this article examines this issue within the context of a grade 12 standardized writing assessment program in Alberta, Canada.

1. Effects of standardized assessment on teaching and learning

Research into the effects of standardized assessment on teaching and learning has generated mixed findings. On the one hand, teachers report working harder and focusing more on student achievement and pedagogical innovation in the context of test-based accountability programs (Koretz & Hamilton, 2006). On the other hand, research shows that negative consequences stemming from the use of standardized assessments include the narrowing of curriculum, overreliance on test preparation materials, unethical test preparation practices, unfair use of test results, unintended bias against population subgroups, increased tension and frustration in schools, increased grade retention, and regression in pedagogical practice (Cheng, Fox, & Zheng, 2007; Fox & Cheng, 2007; Lane, Parke, & Stone, 1998; Smith & Fey, 2000; Stiggins, 1999). Much of the criticism of standardized testing revolves around its seemingly pervasive power to shape teaching practice (Meaghan & Casas, 1995).

Recognizing the consequences of assessment for writing pedagogy, composition scholars have engaged in a long history of critically examining and challenging the validity of many standardized writing assessment designs (Hamp-Lyons, 2002; Huot, 2002; Yancey, 1999). Historically, this research has focused on construct and design issues; more recent research has begun examining the consequences stemming from the use of standardized writing exams. Hillocks’ (2002) research on standardized writing assessment programs in seven US states found that each test he reviewed contained design or conceptual flaws that were having a negative effect on the teaching of writing. He concluded that, “in most states, the assessment must be altered if writing and thinking are to flourish” (pp. 205–206). Mabry (1999) found that the rubrics used in standardized writing assessment often supplant teacher-designed rubrics, which in turn leads to a narrowing of instructional focus. Crawford and Smolkowski (2008) found that assessment design influenced pedagogical focus. Based on their findings, they argued that “if we do not value [a rich recursive writing process] in our state assessments, we will not see this kind of instruction in our classrooms” (p. 75). Scott (2008), meanwhile, found that even portfolio assessments—often promoted as an assessment tool that can support effective pedagogy—can control, narrow, and constrain teaching and learning.

In the Canadian context, research on the consequences of writing assessment for systems of education focuses mostly on the Ontario Secondary School Literacy Test (OSSLT). Pinto, Boler, and Norris (2007) argue that the OSSLT is based on a limited theory of literacy—one which emphasizes functional literacy over critical literacies, form over content, and correctness over ideas—that reflects modernist rather than post-modern goals for public education. Similarly, Luce-Kapler and Klinger (2005) found that this test is harming literacy education in Ontario because “it seems to be entrenching dated practices of literacy and literacy testing rather than supporting the evolution of new approaches” (p. 169). Similar concerns exist about other standardized writing exams used in Canada (Howard, 2003; Robinson, 2000).


2. Alberta’s English 30-1 writing exam in context [1]

Alberta’s English 30-1 writing exam is one exam within a larger provincial assessment program. In Alberta, students write achievement tests in grades 3, 6, and 9, and they write a series of diploma exams in grade 12. Alberta’s grade 12 diploma exam program attempts to fulfill three main purposes:

• to certify the level of individual student achievement in selected grade 12 courses
• to ensure that province-wide standards of achievement are maintained
• to report individual and group results (Alberta Education, 2004)

Kane (2002) claims that the validation argument for graduation exams must link inferences from test scores to test standards to provincial curriculum. His argument draws a direct link between the curriculum and the exam: if inferences related to certification of student achievement on curriculum outcomes are to be valid, a clear relationship between the skills assessed by the exam and the skills defined within the curriculum must be established.

The next three sections examine the relationship between Alberta’s English 30-1 curriculum and the English 30-1 diploma exam.

2.1. English 30-1 writing curriculum [2]

In 2002, Alberta’s Ministry of Education published its updated high school English Language Arts Program of Studies. This curriculum focuses on having students develop skills related to the six language arts—reading, writing, viewing, representing, speaking, and listening—in an integrated fashion, on the premise that facility in one aspect of language use supports facility in others. The curriculum’s expectations for writing skills are comprehensive, reflecting current pedagogical theory. Students are expected to:

• develop the ability to assess a text in progress for a broad range of qualities, including organizational components, controlling ideas, transitions, supporting details (for completeness and relevance), reasoning and logic, syntax, diction, phrasal structures, grammatical correctness, and the text’s ability to address audience and purpose.

• demonstrate an ability to critically appraise and modify interpretations, perspectives and opinions.

• reflect on experimentation with language, demonstrating how such experimentation impacts their growth as language users. They are required to appraise their strengths and weaknesses as language users and to select and monitor strategies which they can use to increase strengths and address weaknesses.

• develop and utilize a range of strategies for forming understandings and for improving language skills.

• evaluate source material.

• integrate new knowledge with old understandings, to support conclusions with relevant details and to draw conclusions relevant to findings.

• reflect on their writing in a broad manner, considering how their sense of audience impacts their choices, how their choice of medium reflects their understanding of content and context, and how their choice of genre compels them to address issues of content and purpose.

The curriculum envisions students who are able to deconstruct rhetorical situations with a view to selecting strategies and approaches that will help them write effectively for each situation. It projects a vision of a student writer who is flexible in his or her thinking, able to adapt, ready to reconsider first ideas, and able to negotiate a complex recursive writing process.

[1] English 30-1 is a grade 12 academic English course. It is a required course for Alberta students planning to attend university in Alberta.

[2] Alberta’s English 30-1 Program of Studies can be found online at: http://www.education.gov.ab.ca/k 12/curriculum/bySubject/english/srhelapofs.pdf.

2.2. Alberta’s English 30-1 diploma exam [3]

Alberta’s English 30-1 diploma exam is divided into a writing component and a reading comprehension component. My research focused only on the writing component of this exam. The total exam is worth 50% of a student’s final grade in English 30-1.

Students are given three hours to complete the writing exam. It is divided into two parts: the Personal Response to Texts Assignment is worth 40% of the writing exam; the Critical/Analytical Response to Literary Texts Assignment is worth 60%. The questions for each part are always linked thematically. The Personal Response to Texts Assignment is designed to stimulate student thinking for the Critical/Analytical Response to Literary Texts Assignment. For this reason, students are encouraged to explore the thematic issue in greater depth in the second assignment than they had in the first. Students are told that “time spent in planning may result in better writing” (Alberta Education, 2005, p. 3).
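To make these weights concrete, the short Python sketch below works through the arithmetic. The sample scores are invented for illustration, and the final-grade calculation assumes a single combined exam mark, since the split between the writing and reading components of the total exam is not specified here.

```python
# Illustrative arithmetic only: sample scores are invented, and the mark
# passed to final_course_grade() is assumed to be the combined writing +
# reading exam mark (the article gives only the writing component's
# internal 40/60 split and the exam's overall 50% weight).

PERSONAL_WEIGHT = 0.40  # Personal Response to Texts Assignment
CRITICAL_WEIGHT = 0.60  # Critical/Analytical Response to Literary Texts Assignment
EXAM_SHARE = 0.50       # total diploma exam = 50% of the final course grade


def writing_exam_score(personal_pct: float, critical_pct: float) -> float:
    """Combine the two writing assignments using the stated 40/60 weights."""
    return PERSONAL_WEIGHT * personal_pct + CRITICAL_WEIGHT * critical_pct


def final_course_grade(school_mark_pct: float, total_exam_pct: float) -> float:
    """Blend the school-awarded mark with the total exam mark, 50/50."""
    return (1 - EXAM_SHARE) * school_mark_pct + EXAM_SHARE * total_exam_pct


writing = writing_exam_score(personal_pct=72.0, critical_pct=65.0)
print(f"writing component: {writing:.1f}%")                    # 0.4*72 + 0.6*65 = 67.8%
print(f"final grade: {final_course_grade(78.0, 67.8):.1f}%")   # 72.9%
```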

The diploma exam’s content, structure, and scoring guides suggest that it is designed to measure the following: knowledge about language structure (the structure of ideas, paragraphs, and sentences); knowledge about language as a tool through which one communicates ideas (idea formation and support); and knowledge about the creation of voice (how to use diction, syntax, and punctuation for effect). The exam also measures students’ ability to generate, organize, and effectively present their ideas within tightly controlled timeframes. As a consequence of this emphasis on time limits, the exam also seems to place a value on the ability to work effectively under pressure (Murphy & Yancey, 2008). These time restrictions also make working through an often messy recursive process difficult, if not impossible, for students (Wolf & McIver, 1998). Instead, the exam measures a limited form of writing process; in its reminders to students it calls for planning, drafting, and polishing while ignoring the need for either exploratory writing or substantive revision.

2.3. Comparing the test and the curriculum

Alberta’s English 30-1 diploma exam’s primary goal is to certify student achievement in English 30-1. In order for certification-related inferences to be valid, the skills measured by the exam must be clearly linked to those defined in the related curriculum. Kane (2002) observes, however, that validation studies for graduation exams rarely draw these links between certification-related inferences and curriculum. He writes:

the validity arguments developed to support these ambitious claims seldom go beyond the initial steps in the interpretive argument. The validity evidence that is provided tends to emphasize the inferences from the test scores to achievement on the Test Standards. The additional semantic inferences to achievement on the State Standards and to conclusions about overall achievement in high school are simply taken for granted (p. 40).

[3] Samples of this exam can be found online at: http://www.education.gov.ab.ca/k 12/testing/diploma/bulletins/examplesstand/default.asp.

His observation holds true in regard to Alberta’s English 30-1 exam. A comparison of the skills measured by the diploma exam and the skills defined within Alberta’s English 30-1 curriculum reveals significant differences between the two. Table 1 captures the relationship between the skill-sets defined within the curriculum and the exam. The skills in the overlapping region are common to both constructs; the skills in one non-overlapping region are required by the curriculum but ignored by the exam, while those in the other are measured by the exam even though they are not contained in the curriculum.

Table 1. Comparison of exam and curriculum constructs. [Venn-style diagram of overlapping curriculum and exam skill-sets; not reproduced in this version.]

These differences challenge the validity of certification-related inferences drawn from exam scores. They also pose potential problems for teachers who must decide which knowledge and skills to focus their instruction on—those required by the curriculum or those measured by the exam. While my research questions were broad, my analysis of the data focused on these discrepancies between the curriculum and the exam and their effects on teaching and learning.
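The construct comparison can also be expressed as simple set operations, mirroring the three regions of Table 1. In the sketch below, the skill labels are shorthand paraphrases of sections 2.1 and 2.2, and the membership of each set is illustrative rather than an official mapping.

```python
# Sketch of Table 1 as set operations. Skill labels paraphrase sections
# 2.1 and 2.2; set membership is illustrative, not an official mapping.

curriculum_skills = {
    "assess a text in progress",
    "appraise and modify interpretations",
    "reflect on experimentation with language",
    "evaluate source material",
    "substantive revision",
    "organize and support ideas",
    "control voice through diction and syntax",
}

exam_skills = {
    "organize and support ideas",
    "control voice through diction and syntax",
    "respond personally and critically to literature",
    "produce polished first-draft writing under time pressure",
}

shared = curriculum_skills & exam_skills              # common to both constructs
ignored_by_exam = curriculum_skills - exam_skills     # required but never tested
beyond_curriculum = exam_skills - curriculum_skills   # tested but not mandated

print("measured by both:", sorted(shared))
print("ignored by the exam:", sorted(ignored_by_exam))
print("measured but not in the curriculum:", sorted(beyond_curriculum))
```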

3. Methods

This paper reports on the case study component of a mixed-methods study that examined the effects of Alberta’s English 30-1 diploma exam on the teaching of writing in Alberta (Slomp, 2007). Three participants (and their 80 students) were chosen for the case study component of this study.


• Anne [4] teaches in a rural high school in a primarily rural school district. She has eight years’ teaching experience. She has served as a member of the English 30-1 diploma exam provincial marking team. She was in her third year of teaching English 30-1 when she participated in this study.

• Brian teaches in a K-12 school that serves an urban student population. His school is operated by an independent school board. Brian had eight years’ experience teaching English 30-1 when he participated in this study.

• Heather is the English language arts department head in a large high school that serves an urban/rural population. She has extensive experience serving as a member of the English 30-1 diploma exam provincial marking team. She had fifteen years’ experience teaching English 30-1 when she participated in this study.

Participants were selected to represent a range of experience with teaching English 30-1 and a range of familiarity with the diploma exam. They were also chosen to represent a range of school contexts (rural/urban, public/private, large/small). This range of experiences and contexts was important to this study because, in addition to focusing on the English 30-1 exam’s impact on how these teachers were teaching writing, this study examined the contextual factors that either enhanced or mitigated the exam’s impact on their pedagogical choices.

This study’s design is based on recommendations from a series of articles that focused on issues and research methods involved in collecting consequential validity evidence (Green, 1998; Lane et al., 1998; Linn, 1998; Moss, 1998; Popham, 1997, 1999; Shepard, 1997; Yen, 1998). According to these scholars, studies of test-use consequences should:

(a) investigate the “actual discourse and actions that occur around products and practices of testing” (Moss, 1998, p. 7);
(b) corroborate data collected from multiple sources (i.e., teachers, students, and administrators);
(c) develop both comprehensive sources of direct evidence collected in classrooms and more global sources of evidence, such as surveys;
(d) be highly contextualized, intensive, and sustained.

Collectively, these authors argue that research into the consequences of assessment would best be conducted using a mixed-methods design. A classic form of mixed-methods research utilizes both multiple case studies and surveys. Tashakkori and Teddlie (2003) suggest that the strength of this design is that “[o]ne method gives greater depth, while the other gives greater breadth; hopefully, together they may give results from which one can make better. . . inferences” (p. 16).

The main critiques of mixed-methods research focus on questions related to the ideological framework that guides the work. Denzin and Lincoln (2005a, 2005b) argue that mixed-methods studies are almost always grounded in a post-positivist, quantitative orientation, one that strips qualitative methods from their ideological home. Qualitative-oriented, social-constructivist-based mixed-methods research is, however, becoming more common. Green (2005) and Mason (2006) argue that qualitative-oriented mixed-methods research can be used to develop complex understandings through the application of multiple lenses, perspectives, and stances while also challenging simplistic answers to complex questions. This study approaches mixed-methods research from a qualitative perspective.

[4] All names reported in this study are pseudonyms.


Case study data was collected through classroom observation (one month of class time with each teacher), 7 interviews with teachers, 10 interviews with students, and an analysis of all documents and handouts each teacher used to teach writing in his or her grade 12 English course. Data analysis focused primarily on themes that emerged within and across teacher and student interviews. Data from observations and document analysis was then analyzed with a view to these themes. This paper reports only on the case study component of this research. Survey data was also collected; it focused on student experiences in English 30-1.

3.1. Interview data collection and analysis

Each teacher participated in three interviews: two individual interviews and one group interview. The first interview occurred at the beginning of the research project. It was semi-structured and lasted between forty-five minutes and one and a half hours. This interview focused on the teachers’ perceptions of themselves as teachers of writing, a description of their approach to teaching writing, experiences that have shaped their approach to teaching writing, their school context and its impact on their approach to teaching, their perceptions regarding the diploma exam, and the exam’s impact on their approach to teaching writing. The purpose of this interview was to begin to collect information regarding individual backgrounds, contexts, and perspectives.

The second interview occurred toward the end of the school year, in late May or early June, after classroom observations and the interviews with students had been completed. This interview was less structured, focusing instead on questions that presented themselves during the preliminary analysis of the first round of interview data, during student interviews, or during classroom observations. A significant number of questions were designed to help the researcher confirm or challenge his perceptions of the teacher, his or her practice, and his or her beliefs about teaching, assessment, or writing.

The third interview was a group interview that occurred early in the fall of the following school year. This interview was free-ranging, directed mostly by the teacher participants, who collectively explored their perspectives on a range of issues important to them. They discussed societal issues and their impact on teaching writing, issues related to student writing, the differences between their own writing processes and the processes they ask of their students, the value of writing across the curriculum, and their perspectives on the diploma exam. While this interview touched on ideas discussed in the previous two interviews, the discussions between teachers helped to enrich and expand those earlier conversations.

Each interview was recorded and fully transcribed. Transcriptions were submitted to the teachers for review and comment. Each teacher approved the transcripts of his or her interviews. The total length of transcripts for all three sets of interviews came to 88 pages of single-spaced text, or 51,650 words. Each interview was analyzed first with a focus on general themes: background/context, pedagogical issues, and assessment issues. Some aspects of the teachers’ commentary reflected more than one general theme; for example, comments which discussed the relationship between assessment and pedagogy were included in both the pedagogical and the assessment themes. A second, finer layer of analysis of the general themes then took place. This analysis resulted in 16 sub-themes. Within the category “background/context” seven sub-themes were identified:

• Motivation
• School context
• Personal writing (style and process)
• Previous educational experiences
• Attitude toward standardized assessment
• Professional development experiences
• Perspectives on writing pedagogy

Within the category “pedagogical issues” six sub-themes were identified:

• Writing process, perspectives and practice
• Personal writing process and its impact on pedagogy
• Pedagogical tools/focus
• Planning for instruction
• Perspectives on students
• Issues of importance

Within the category “assessment issues” three sub-themes were identified:

• Perspectives on the English 30-1 diploma exam
• Perspectives on the English 30-1 diploma exam’s construct
• Perspectives on the English 30-1 diploma exam and its influence on pedagogy

This thematic scheme provided a structure within which to further refine the analysis of the interview data. Within these categories, teachers’ comments were grouped around similar sub-themes. For example, Brian commented across the three interviews on the issue of student use of time for completing writing assignments. These comments were first categorized under “pedagogical issues”, from there under “issues of importance”, and then they were further grouped within that category around the issue of “time”. Following an approach used by Rex and Nelson (2004), I then created a pastiche which brought together the teachers’ comments around each of these specific categorizations. The resulting pastiche regarding Brian’s comments about time was finalized as follows:

Partly I think [students follow a limited writing process] because they tend to wait to the last minute so they don’t allow themselves time to rework things, to look at it and ask does my organization make sense? Or, am I putting too much emphasis on this or not enough emphasis on that? I don’t think that is always the case, there are some kids that work very hard at trying to change it, trying to make it better, but I do think that is one of the reasons. That is why I tend to give shorter deadlines now. . .. I have learned, you can give students three weeks to do an essay and chances are most of them are punching it out the last night. So I think shorter deadlines work. . .. I think long due dates are good for some kids because it gives them—I know for myself when I was in university you’d have an assignment that was due a month down the road and it’s not that you right away start writing but you were thinking about it—but I find that for high school kids they don’t do that and they tend to leave things. So I try to force the process by just giving short due dates.

This pastiche is a compilation of three separate comments on the issue of time in relation to student writing. Collectively, the pastiche captures Brian’s perspective on student use of time more completely than the individual comments would have. Rex and Nelson (2004) contend that no representation of an individual is ever complete, and that a pastiche of this nature likewise does not capture perspectives or individuals completely, but that it can provide a means to represent an individual’s perspective more eloquently and purposefully while maintaining that individual’s voice.
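A minimal sketch of this grouping step may help readers picture the workflow: hand-coded transcript excerpts are keyed by teacher, theme, sub-theme, and issue, and excerpts sharing a key are joined into a pastiche. The tuples and example comments below are hypothetical stand-ins; the actual analysis was conducted by hand.

```python
# Hypothetical sketch of the two-pass coding described above. Each coded
# excerpt carries its general theme, sub-theme, and issue; excerpts that
# share all four keys are joined into a single pastiche. A comment that
# reflects more than one theme would simply appear in the list twice.

from collections import defaultdict

coded_comments = [
    ("Brian", "pedagogical issues", "issues of importance", "time",
     "Students tend to wait until the last minute to rework things."),
    ("Brian", "pedagogical issues", "issues of importance", "time",
     "That is why I tend to give shorter deadlines now."),
    ("Brian", "pedagogical issues", "issues of importance", "time",
     "So I try to force the process by giving short due dates."),
]


def build_pastiches(comments):
    """Group excerpts by (teacher, theme, sub-theme, issue) and join each
    group into one pastiche, preserving the speaker's own words."""
    groups = defaultdict(list)
    for teacher, theme, sub_theme, issue, text in comments:
        groups[(teacher, theme, sub_theme, issue)].append(text)
    return {key: " ... ".join(texts) for key, texts in groups.items()}


for key, pastiche in build_pastiches(coded_comments).items():
    print(key, "->", pastiche)
```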

3.2. Classroom observation data collection and analysis

The second method of data collection involved classroom observations. The duration of observations for each teacher depended on the unit he or she had chosen for me to observe and on whether the school ran on a semestered or full-year system. Observations took place during May and early June. Anne’s English 30-1 course was semestered; I observed her class every morning for two and a half weeks. Brian’s and Heather’s classes were full-year; I observed Brian’s class for three and a half weeks and Heather’s for four weeks. I also attended Heather’s three exam preparation workshops, offered early in the morning before school began.

Classroom observations focused on teaching style, pedagogical stance, students’ and teachers’ comments regarding the diploma exam, and classroom environment. Field notes capturing these observations were taken.

The purpose of collecting the observation data was to enable the researcher to better understand the teacher, the teaching environment, and the teaching practice. The interview data provided the researcher with each teacher’s construction of him- or herself as a teacher; the classroom observations enabled the researcher to also construct a representation of the teacher based on direct observation. These constructions were compared to one another to identify coherence and dissonance. Aspects of apparent dissonance were then discussed in follow-up interviews. For example, Anne sharply criticized the diploma exam’s construct during the first interview, but I noticed during observations that she was quite focused on the exam in her teaching. The contradiction implicit in these differences puzzled me, so during the second interview I asked her about it. She responded as follows:

It is a complete paradox; you can’t do that, right? You can’t say the exam is not a fair assessment but I am going to use it anyway. But that is what we do. . .. Because it is really hard to reconcile those two things. You say well, the expectations of the administration, and of the parents, and of the students themselves, is that you prepare me for the exam, ok fine, I can do that but personally I don’t feel that this exam is a fair assessment. But that is the expectation, so then I have to balance these two, and wrestle with these two in the classroom, and say okay, I am going to let the exam go a little bit and we are going to do something wild and creative and have a little fun with this piece of literature rather than focus exclusively on the exam. It is really hard to reconcile those things together.

This process enabled the researcher both to develop a more complete understanding of the complex matrix of variables that influence teaching and learning within a high-stakes testing environment and to complete case study profiles that more completely reflect the teachers’ views and practices.

The analysis conducted with the interview data provided the foundation for the analysis of the observation data. Observations were linked to themes developed in the interview analysis and were then used to support or enhance the case study profiles.

3.3. Document data collection and analysis

Teacher participants were asked to provide a copy of each writing assignment or teaching document related to writing that they used with their English 30-1 classes. Heather provided 125 handouts, which included assignments, writing support materials, marking guides, and course or unit outlines. Anne provided 44 complete lesson plans, which included assignments, support material, marking guides, and unit outlines. She also provided two two-inch binders containing her diploma exam preparation materials. Brian provided 26 writing assignments, which included midterm exams, marking guides, end-of-unit assignments, and major projects. While these materials do not represent the complete set of assignments and handouts given by each teacher in the English 30-1 class, they do provide significant detail regarding the range and the focus of each teacher’s approach to teaching writing.

Each set of teacher documents was analyzed with a view to the constructs the assignments were attempting to measure. Specifically, analysis focused on whether or not the assignments targeted the construct being measured by the diploma exam. The elements of the exam construct considered included the following:

• Was the assignment designed as a response to literature or not?
o If so, was the assignment designed as either a personal or a critical response to literature?
• Did the assignment specify a genre in which students must respond? If so, what genre?
• What timelines were included in the assignment?
• What marking criteria were being used to judge student performance on the assignment?

Handouts were also analyzed in relation to each teacher’s comments regarding his or her teaching practice. For example, Brian commented often in the interviews on the issue of time. He suggested that because students did not use their time effectively, often waiting until the last day to work on assignments, he began shortening due dates on his writing assignments. When analyzing Brian’s writing assignments, I therefore focused on his comments regarding the use of time to see what other strategies he might have used to encourage students to use their time more effectively.
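As an illustration of how each document might be logged against these construct questions, the sketch below defines one possible record structure. The field names and the sample record are invented for illustration; they are not the study’s actual instrument.

```python
# Hypothetical record structure for the document analysis: each handout
# or assignment is logged against the exam-construct questions listed
# above. Field names and the sample record are invented.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AssignmentRecord:
    teacher: str
    title: str
    response_to_literature: bool    # designed as a response to literature?
    response_type: Optional[str]    # "personal" or "critical", if applicable
    genre_specified: Optional[str]  # required genre, if any
    timeline: str                   # time allotted for completion
    marking_criteria: str           # rubric used to judge performance


record = AssignmentRecord(
    teacher="Brian",
    title="End-of-unit essay",
    response_to_literature=True,
    response_type="personal",
    genre_specified="essay",
    timeline="two class periods",
    marking_criteria="adapted diploma exam scoring guide",
)

# Tallying records like this shows how tightly a teacher's assignments
# cluster around the diploma exam's construct (e.g., mostly essay
# responses to literature, marked with the exam rubric).
print(record)
```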

4. Three case studies

In the sub-sections that follow, I present condensed profiles [5] of each of the three teachers who participated in this study. These profiles draw on classroom observations, interviews, and document analysis of classroom handouts and assignments. A discussion of issues arising across the three cases follows the profiles.

4.1. Case study 1: Anne

Anne has been teaching English language arts (ELA) for eight years. Prior to teaching, she worked as a journalist for a small-town paper. Her teacher-education program emphasized process-oriented approaches to teaching writing; it provided her with direct experience working with students on their writing, and it enabled her to understand, first hand, the processes students engage in when they write.

Anne teaches in a medium-sized rural school (355 students in grades 7–12). The school’s administration and the larger community are focused on standardized exam scores as an important indicator of the school’s success. Exam results are posted on the school’s website. While not official school policy, year-end grades, Anne reports, are expected to be no more than 5% higher or lower than diploma exam scores.

[5] Full profiles can be found in Slomp (2007).

Anne began her teaching career adhering to a transactional approach to literature and a process-oriented approach to writing. She had her students complete a range of creative projects as they constructed their understandings of the texts they were studying. In recent years, though, Anne has shifted toward a transmission-oriented and product-centered approach to teaching. She attributes this shift in pedagogical style to her transactions with the diploma exam and to the pressure the school community places on her to ensure that student classroom marks and exam scores align with one another. Anne explicitly teaches to the diploma exam.

Anne’s decision to teach to the exam might suggest that she views the exam positively. The opposite, however, is true. Anne teaches to the exam precisely because she understands its construct flaws.

Anne’s criticism of the exam is threefold: it assesses only a student’s written responses to literature while ignoring other forms and purposes for writing; it measures students’ ability to write under pressure and severe time restrictions; and it involves writing under artificial conditions (e.g., no access to resources, no opportunities for collaboration). Consequently, Anne believes that this exam does not accurately assess students’ true writing abilities. She says,

I don’t necessarily feel that [the English 30-1 diploma exam] is a fair assessment, I think the writing component of the diploma itself is probably the least indicative of what a student can do, it is pressure writing, it is writing out of context, I mean for all the things we teach writing to be, it is not, it is the opposite of everything we want it to be. . .

Anne recognizes that she has a professional responsibility to prepare her students for the exam. Teaching to the exam’s construct (including its flaws), she feels, is the best way to prepare students for this high-stakes exam. Anne’s students write across a range of genres throughout the year; the primary focus of her writing program, however, is fixed on the diploma exam. Her units focus on themes that have appeared in previous diploma exams, and her final assignments for each unit are modeled after diploma exam questions. Anne’s process work focuses on prewriting and drafting strategies. She provides students with a few class periods at the beginning of a major writing assignment so that they can discuss ideas and plan and draft their papers. Some time for peer feedback is given, though little in-class time is provided for substantive revision. This emphasis on the early stages of process reflects the exam’s emphasis on first-draft writing. Throughout the year, Anne has her students deconstruct previous exam questions, and she has them assess exemplar papers, their own papers, and their peers’ papers using the diploma exam rubric. She describes her approach:

[The exam] directly influences the way I teach writing. You know when we look at the essays we do a lot of dissecting of other people’s essays. So I will have them writing essays and they will have to go around and use the rubric, the actual [exam rubric] that they give you when they mark the test and then they have to score each other’s papers. I think that is the best way for them to get a feel for what is good writing and what is not good writing. We do a lot with the examples that are on the Alberta Education website. We take them apart, “what makes this good, what doesn’t make this good?”

Anne recognizes the tension between her views on the exam and her actual teaching practice. She observes:


Valid or not, the darn thing still exists, and students are going to have to write it. I mean the only way you could eliminate the tension is to eliminate the exam, and that is not going to happen, that is not going to happen.

4.2. Case study 2: Brian

Brian is in the eighth year of his teaching career, a career he began in his thirties after completing a BEd with a major in math and a minor in English. He teaches English language arts, math, and science. Brian teaches in a small (180 students) K-12 school on the edge of a large city. Brian’s school community and school administration are not overly concerned with diploma exam scores. While Brian expects the final grades he assigns to be close to the marks students obtain on the diploma exam, this expectation comes from internal motivation rather than external pressure. Brian is committed to ongoing professional growth and development. He is a member of the English Language Arts Council of the Alberta Teachers’ Association (ELAC) and regularly attends its annual conventions. Many of the assignments he utilizes in his English 30-1 program are drawn from sessions he has attended at ELAC conferences. As an English minor, Brian felt unprepared to teach writing. He claims to have received limited instruction in composition or in composition pedagogy. Brian comments:

Teaching writing is [something] that I really struggle with a lot because, how do you teach writing effectively? I know for myself. . . I don’t feel that I have a real great way of teaching it. I do know that I can identify things that the kids can improve on and I try to work that into writing units but when it comes to [teaching] writing I know that there is a lot of room for improvement.

Brian employs a flexible approach to teaching writing. He mixes teacher-centered, transmission-oriented methods with student-centered, project-based methods. He assigns both creative projects and formal essays to his students. He provides room and space for students to develop and utilize writing processes that work for them. His project-based approach to teaching ensures that students have much in-class time to work on their writing. Students mostly use this time for prewriting and drafting; limited time is spent on revision and editing. While Brian does not explicitly teach the five-paragraph essay structure, he values organization in writing, and he sees this format as providing a structure both for ideas and for process. He does not discourage his students from using this approach. Brian teaches the technical aspects of writing (grammar and mechanics) using worksheets; he does not explicitly connect this aspect of his writing program to students’ actual writing. Virtually all of Brian’s writing assignments are designed as responses to literature. While Brian tries not to teach explicitly to the exam, he does recognize that he has a professional responsibility to ensure that his students are prepared for it. Consequently, Brian designs every end-of-unit writing assignment to reflect the diploma exam questions. Brian comments on the exam’s influence on his course design:

When planning my English 30-1 course, I think I use both [the Diploma Exam and the Senior High English Language Arts Program of Studies] because. . . the Guide to Implementation is actually a pretty good guide. . .: I did a heroes unit this year, the idea came out of the Guide to Implementation; I didn’t use a lot of the things out of it but the idea came from there. But then I also used the diploma exam model. I’ll grab a personal response question once in a while and fashion it to a novel that we’re studying, or work that we’re studying, so that they get. . . an idea of what kind of question they’re going to get.


The exam’s explicit influence on Brian shows up more in his assessment practices than in his teaching practice. When assessing student writing, Brian develops rubrics that are appropriate to the assignment; his rubrics, however, are predominantly based on the diploma exam scoring guide. Brian explains:

[Marking my students’ work] is probably where the exam comes most into play, because basically I set up my rubric along the same lines as the departmental. . .. It is helpful for me because it helps me to focus on what I should look for, but it is good for the students so that when they get to the exam they are not going to be facing a whole different marking style. . .. [When I see the exam results] I am glad when I am not way off (or that the exam is not way off) from what I perceive to be where the kid is at, . . . that is partly why I used the [diploma exam]: . . . I want to mark the kid where they’re going to mark the kid.

Brian both values the diploma exam and is critical of it. He values it because he believes that it helps prevent grade inflation, validates teaching, and provides an objective indicator of student writing ability. Brian’s criticism of the exam relates to perceived construct flaws: the exam assesses only students’ essay responses to literature and not the broad range of writing required by the program of studies. He is also concerned that the exam assesses students’ ability to write under pressure. He believes that some students who are good writers are not provided an adequate opportunity to demonstrate their skills, given the problematic design of the exam. He comments:

I like to see [the spread between school and exam scores] close because in a way it validates my teaching, but on the other hand the exam is testing kids one day—actually over a couple of hours—in a high pressure situation in which some kids can shine and some kids can’t and I don’t think it always reflects a student’s ability. There are some kids who need to take the time to really work things through. So really, the exam score is a good indicator of how a student might work under the pressure of post secondary education but I don’t think it necessarily indicates how good a writer every student is. Some kids can write wonderfully in any situation, but some can’t. . .. So yeah, the exam tests writing in a certain situation but I don’t think it tells every kid this is what kind of writing you can do.

While Brian’s pedagogy is not explicitly focused on the diploma exam, the exam does influence his planning and his assessment of student writing. Its influence can be attributed to flaws in the exam’s construct. Brian recognizes that he has a responsibility to prepare his students for the exam; he also recognizes that his credibility as a teacher hinges, to some degree, on his ability to assess students in a manner that is consistent with the diploma exam. If the exam measured student writing more effectively, and if it reflected the writing requirements in the curriculum more completely, Brian would feel less internal pressure to directly and consciously prepare students for this exam.

4.3. Case study 3: Heather

Heather is in the fifteenth year of her teaching career. She is currently the ELA department head at a large high school (700 students). Her school is situated in a bedroom community of a large city. Heather has a BEd in English. Though she feels that her undergraduate training has generally served her well, she notes that it did not include any explicit instruction in methods for teaching writing. Heather has augmented this training with personal study.


Heather is a nurturer; she builds strong relationships with her students and uses those relationships to encourage and support learning. Heather’s teaching stems from a dialogic view of language learning. She describes her classroom:

My favorite way of being is when we push the desks away, chairs in a circle, and we sit there and have a discussion, sometimes a question can serve as a whole block, . . . it is more the group dynamic as learning rather than me teaching them, I am not an information disseminator.

Heather extends this conversation into her students’ writing. In her responses to their ideas, she writes lengthy marginalia and extended comments at the end of each paper. Heather writes with and for her students. Each of her assignment handouts is written in her voice. Generally, these include long introductory discussions which help establish the context and purpose of the piece while also modeling the skills, thinking, and knowledge she expects her students to demonstrate through the assignment.

Heather emphasizes practice and skill development in her writing program. She marks up to 60 writing assignments per student each year. Heather’s work on writing skills focuses on various aspects of writing—from genres, to introductions, to vocabulary, to transitions, to process tips, to essay structure—though her primary emphasis is on the development of voice. Heather does not work with her students on multiple drafts of single papers; rather, she treats each paper as a draft in a student’s ongoing development as a writer. She provides extensive comments on each paper and expects her students to work with those comments when writing subsequent papers.

Heather handles writing process in her teaching in two ways. First, she provides rich in-class opportunities for students to engage in prewriting and preplanning activities. Second, she provides them with tools and strategies for organizing their ideas. Heather expects her students to work on their drafting and revision outside of class. When she collects and assesses students’ writing, she comments on ideas and organization, but she does not expect students to revise assessed work for future submission. She comments on her choices:

I can’t . . . give 60 some pieces of writing [per year] where you give first draft, I give it back with comments, you give a second draft, I give it back with comments. So that process happens between copies, between different assignments. They know that it is not the end of the road, for sure not, they don’t have to improve everything this time, because so many opportunities are going to be coming their way. . ..

For Heather, each assignment builds toward the ultimate final assignment, the diploma exam.

Heather has a multilayered perspective on the diploma exam. She values the exam itself, but she is often uncomfortable with how it is used. Heather appreciates the rigor and the standards established by the diploma exam. She appreciates that her students have to work all year toward developing the skills and the thinking required to successfully complete the diploma exam. She encourages them to think of the exam not as something to dread but as a celebration of what they have learned. At the same time, however, Heather is concerned that this one celebration constitutes 50% of their grade in the course. She is concerned that students who are not able to generate their best work on that particular day will receive a grade that does not truly reflect what they are capable of.

Heather is also concerned that the exam is used to make judgments about teachers and schools. She argues, however, that exam scores reflect only part of what happens in schools; they do not tell the whole story. Consistent with this perspective, Heather does not explicitly focus her teaching toward the diploma exam (though each spring she does offer a week of early morning one-hour diploma exam preparation sessions). In spite of this determined separation between in-class work and explicit exam preparation instruction, Heather’s pedagogy aligns quite naturally with the diploma exam’s construct. Like the exam, she focuses on ideas, voice, and prewriting while paying limited attention to process and revision. The vast majority of her writing assignments, too, are similar in design to the diploma exam assignments. She explains the alignment between the exam and her pedagogy as follows:

So of course they are diploma preparing all the way along, but it isn’t labeled as that, and it isn’t directed at that test until that last go round. . .. For some folks that idea of teaching to the test gives them a method and a reason. And that is a good thing for them. For me it is not. Philosophically I want to discredit that I am a political vehicle. That is probably just a personal angst that I don’t want to be pigeon holed like that. I’d like to think that there is something more noble in the whole thing than just preparing for a test in case you need the marks to go to university. It seems so narrow, it seems so small thinking.

Her exam preparation work stems from a sense of professional responsibility toward her students rather than from a philosophical affinity with the purposes and goals of the diploma exam program. Heather focuses on the larger picture, on preparing students for life rather than merely preparing them for an exam, yet she also recognizes that at times preparing for an exam is part of preparing for life.

5. Analysis across case studies

Each teacher’s pedagogical choices are shaped by numerous influences, including personality, previous educational experiences, and transactions with the program of studies and the diploma exam. The analysis that follows focuses on the diploma exam’s influence on their choices and on the contextual factors which either enhanced or mitigated that influence.

5.1. Mitigating and enhancing factors

The diploma exam helped shape each teacher’s pedagogical choices; the degree, or intensity, of this impact, however, varied from teacher to teacher. Anne was most explicitly focused on preparing students for the exam, while Heather was least explicitly focused on exam preparation. Brian fell somewhere between Heather and Anne.

5.1.1. Teacher’s attitude toward the exam

Each teacher’s attitude toward the diploma exam was a major factor contributing to its impact on his or her pedagogical choices. While all three teachers valued the exam to some extent, the more critical of the exam’s construct flaws a teacher was, the more explicitly he or she focused on teaching to the exam. Anne is most articulate and forceful in her criticism of the exam’s construct. She argues that the exam measures student writing ability in relation to a narrow set of genres: students’ ability to generate personal and critical responses to text, most often essay responses. She points out that the program of studies calls for a much broader range of writing to be performed in the course, including poetic, narrative, and functional writing. She also observes that, given the exam design, students’ ability to write under pressure and within time constraints is being measured. She notes that this is an artificial and unrealistic context, one that does not reflect what happens in authentic writing contexts. Brian also voices these criticisms, though he is more moderate in his critique. Heather, on the other hand, is not too troubled by the exam’s construct.


In discussion with her students she makes it clear that if they follow the course in the manner she has designed it, they will be well prepared for the exam. Her comments suggest that her pedagogical approach naturally conforms to the exam’s construct. Heather, who is least critical of the exam’s construct, is also least focused on direct or explicit test preparation work, while Anne, who is most critical of the exam’s construct, is also the most focused of the three teachers on directly and explicitly teaching to the exam. Brian sits in the middle in relation to both criticism and practice.

5.1.2. Pressure to align school marks with exam scores

Anne, Brian, and Heather all felt some degree of pressure from parents and administrators that students should be well prepared for the exam and that their exam scores should correlate with their school scores. The pressure Anne felt in this regard was significantly greater than that felt by Brian and Heather. She described meetings with her administrative team and comments from parents which made these expectations abundantly clear. The pressure Heather felt in relation to marks stemmed more from some parents resenting her high standards and expectations than from direct pressure related to exam performance. Her school administrators pressured teachers to ensure strong exam performance and alignment of school and exam scores through indirect rather than direct means: department heads reported on the previous year’s exam scores during the September staff meeting, and cakes were brought in to the staff meeting to celebrate success on diploma exams. Brian described feeling little pressure regarding student performance on the exam.

The pressure to align exam scores and school scores seems to have further intensified Anne’s focus on directly preparing students for the exam. Anne comments on this, claiming that, given the construct problems with the exam, the only way to ensure alignment of test scores and school scores is to narrow one’s teaching so that it focuses on the exam. Ironically, this is what Heather has done in her own practice, though not explicitly. Her confidence in her students’ ability to achieve school scores close to exam scores is based on the fact that the construct upon which the school scores are based is (given Heather’s approach to teaching) quite naturally aligned with the diploma exam’s construct. Anne, on the other hand, describes how in her first year of teaching English 30-1 she engaged her students in a wide range of activities that, while valuable educationally, were difficult to assess and were not necessarily consistent with what the exam was measuring. After submitting term marks that were on average 20% higher than the diploma exam marks, she decided that she needed to dramatically alter her pedagogical stance in order to ensure a greater alignment of school and exam scores.

5.1.3. Years of teaching experience

A further factor shaping the degree to which the exam influences pedagogy seems to fall along a continuum of teaching experience. Heather is least focused on teaching to the exam, yet she describes how this degree of emphasis has changed during her career. In the beginning she was interested in finding out what the exam was all about, and so she studied the exam and had students do exam preparation activities; around years seven and eight of her career she claims to have focused most explicitly and concretely on preparing students for the exam, and from that time to the present (year fifteen) her explicit focus on the exam has dissipated to the point where she now discourages students from being too focused on the exam during the year. Anne and Brian are both in year eight of their teaching careers, though Brian has been teaching English 30-1 for about five years longer than Anne has. Anne is most explicitly focused on the exam, Brian far less so. This pattern seems to be consistent with Heather’s experiences.


5.2. Effects on instructional choices

While differences exist related to the exam’s influence on each teacher’s pedagogy, several key similarities across the three teachers’ practices are evident. These similarities relate to the narrowing of instruction in terms of processes taught, range of assignment designs used, and variety of marking schemes employed.

5.2.1. Narrowing of processes taught

All three teachers understand writing process to include planning, drafting, revising, and polishing. It is clear through interviews, observations, and writing assignment designs, however, that in their instruction and in the class time they make available for completing assignments, they focus mostly on teaching students to engage in the beginning stages of the writing process rather than the final stages of revision and polish. Significantly, the elements of process the teachers focus on are the elements emphasized by the diploma exam, while the elements neglected by the three teachers are also neglected by the exam. This approach to process is both supported and enabled by an exam that does not measure or allow for writing process to occur in a rich, recursive manner. Given Brian, Anne, and Heather’s commitment to preparing students for the diploma exam, it is likely that had the exam required students to engage in a richer process, these teachers would have ensured that their students completed classroom assignments utilizing a more complete process.

5.2.2. Narrowing of assignment designs

Virtually every assignment Heather, Anne, and Brian provided to me for analysis contextualized the writer–reader relationship as being between student and teacher, a structure which replicates the diploma exam’s structure. The vast majority of their writing assignments focused on essay writing, and a majority of these focused on personal and critical responses to literature. These conclusions reflect findings by Britton, Burgess, Martin, McLeod, and Rosen (1975), who observed that in courses where students were preparing for external examinations, writing assignments were more narrowly focused on both the audience and the type of writing required by the external exam. In addition to narrowing their instructional focus, all three teachers designed their instruction to focus students on developing the skills needed to deconstruct and respond to diploma exam questions. Brian and Anne did this throughout the year, while Heather did this during her early morning exam preparation sessions. Brian explains his rationale for designing assignments with an explicit exam preparation focus:

I don’t do a lot of real exam prep and I probably should do more just to get them comfortable. I do take exam questions from the past and I’ll get them to write an essay. . . . So that way they are at least comfortable with those types of questions. I could probably do more, taking a few questions from over the years and have them . . . at least writing introductions that address those questions. So it is something that I think I have to put a little more emphasis on. . . . It is good to get them comfortable with reading a question, deciphering it and saying ok I can use this or I can use that.

Brian’s goal is to help students become comfortable with trying to understand both the structure and the diction contained in the exam questions, a skill that will help them more efficiently determine how to respond to the question.

The practice of actively teaching students to deconstruct previous exam questions is often criticized in the assessment literature as an unethical teaching practice, one that pollutes test scores (Volante, 2006). However, it is clear in the case of the three teachers participating in this study that the motivation behind this practice is not driven by a desire to artificially inflate their students’ test scores, but rather by a desire to ensure their students are able to understand a set of questions that are often written using syntax and diction that is unfamiliar to them. Teaching students how to decode previous diploma exam questions is consistent with their professional commitment to ensuring that their students are well prepared to write this high-stakes exam.

5.2.3. Narrowing of marking schemes

In addition to narrowing their instructional focus, Anne, Brian, and Heather narrowed their marking schemes. All three teachers derive their marking schemes from the diploma exam rubric. In many cases they use the rubric exactly as it has been developed by Alberta Education. The rationale for this choice is twofold. On the one hand, using the diploma exam rubric enables students to get a sense of the scoring criteria developed for the diploma exam; on the other hand, it enables the teacher to ensure that exam scores and school scores reach some degree of alignment. Linda Mabry (1999) argues that in high-stakes writing assessment contexts it is understandable that teachers use exam scoring guides to inform instruction and to explicitly prepare students for the test. She cautions, however, that standardized rubrics in high-stakes testing contexts are “overwhelming the writing curriculum” (p. 676). She argues that the rubrics receive unprecedented priority in classrooms, so that they become the focus of instruction and of student writing. As such, the rubric further narrows the instructional focus of the writing classroom.

6. Conclusions

The significance of this exam’s influence on pedagogy is readily demonstrated when we consider the extent to which each of the teachers involved in this study is trapped between the exam and the curriculum in terms of their writing pedagogy. All three teachers, while they acknowledge the importance of writing process, and while they provide opportunity for students to engage in a process-oriented approach to writing, rarely engage with students in their processes. When they are involved in student processes, it is almost always with a focus on the prewriting, planning, and initial drafting elements of process. Their work with writing processes does not explicitly acknowledge that writing processes are recursive. They collect final work, not work in progress, and so do not often take opportunities to encourage revision as a means of reconceptualizing or expanding ideas. While Heather’s assignments are designed to encourage students to use writing as a means of discovery, Brian and, especially, Anne encourage students to develop their ideas before they write, and so see writing as a means to communicate developed ideas rather than as a process through which ideas are developed. In the same vein, all three teachers seem less concerned about purpose and occasion than they are about structure and audience; in their writing assignments, audience is almost always defined as the teacher. Similarly, while all three teachers require their students to write in a number of modes, their overwhelming focus is on expository writing. All three teachers believe that writing is a skill that can be taught, but at times they seem at a loss to explain exactly how this is done. Only Heather writes with her students. And in their assessment of student writing they focus on the product: idea development, structure, style, and usage. Each of these practices reflects the construct and process defined within the English 30-1 diploma exam rather than those defined within the curriculum.

As Brenan (2006) points out, the driving goal behind test-based accountability programs is the enhancement of teaching and learning. Alberta’s English 30-1 diploma exam fails to live up to this goal; rather, because of the limited theory of writing that underpins its design, it supports and even encourages problematic pedagogical choices in each of the cases examined. In the measurement literature, teachers are frequently criticized when they teach to the test. This practice is considered unethical. The evidence in this study, however, suggests that it is the flaws in the test itself that most significantly promote this practice. Resnick and Resnick (1992) argue that in terms of a test’s influence on pedagogy, “you get what you assess, you do not get what you do not assess, [and that it logically follows, you must] build assessments toward which you want educators to teach” (p. 59). While educators and curriculum theorists certainly might disagree with this sentiment, it does point squarely, especially in cases of a test’s negative influence on teaching and learning, to assessment professionals’ and testing agencies’ responsibility for addressing negative consequences.

Recognizing this responsibility, Shepard (2006) argues for assessment designs that seamlessly integrate with instruction and that reflect a philosophical stance similar to that of the teacher and the curriculum. Assessment tasks, she argues, must embody the “full range and depth of what we say we want students to understand and be able to do” (p. 639). She concludes:

Ideally an external assessment that was well aligned with conceptually rich learning goals would have positive impacts on instruction by exemplifying significant learning targets, provide useful feedback to teachers about curricular strengths and weaknesses, and verify individual student’s attainments. The authors of Knowing What Students Know (Pellegrino et al., 2001) envisioned for the future a more balanced and coherent assessment system, where formative classroom assessment would receive attention equal to that of external, high-stakes tests and where classroom and external assessments would be coherently linked to the same underlying model of learning. (Shepard, 2006, p. 639)

Shepard (2000) and Robinson (2000) argue that while curriculum theory and pedagogical practices have shifted toward cognitive and social constructivist theories of knowledge and learning, standardized assessment practices have lagged behind, remaining entrenched in positivist perspectives and transmission-oriented approaches. This study demonstrates the consequences for teachers and students that arise when this gap is not addressed. If we want to see improvements in education, this gap must become a focal point for future research and public discussion. While the body of research into the effects of assessment on pedagogy and learning is growing, more research is required. This research must be used to inform, shape, and stimulate a critical public debate about the effects of test-based accountability on teaching and learning. Perhaps this research and the debate it promotes can help ensure that standardized assessments are helping, not harming, the pursuit of excellence in the teaching of writing.

Acknowledgement

The author would like to thank Liz Hamp-Lyons and two anonymous reviewers for their thoughtful advice on revising and strengthening this article.

References

Alberta Education. (2004). Diploma examinations program. Retrieved March 29, 2004, from http://www.learning.gov.ab.ca/k_12/testing/diploma/dib_gib/examinationprogram.asp
Alberta Education. (2005). English language arts 30-1 June 2005 writing assignments. Retrieved December 15, 2006, from http://www.education.gov.ab.ca/k%5F12/testing/diploma/bulletins/examples_stand/default.asp
Brenan, A. L. (2006). Perspectives on the evolution and future of educational measurement. In: R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 1–16). Westport, CT: Praeger Publishers.
Britton, J. N., Burgess, T., Martin, N., McLeod, A., & Rosen, H. (1975). The development of writing abilities (pp. 11–18). London: Macmillan.
Cheng, L., Fox, J., & Zheng, Y. (2007). Student accounts of the Ontario Secondary School Literacy Test: A case for validation. The Canadian Modern Language Review, 64 (1), 69–98.
Crawford, L., & Smolkowski, K. (2008). When a “sloppy copy” is good enough: Results of a state writing assessment. Assessing Writing, 13 (1), 61–77.
Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In: N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (3rd ed., pp. 1–32). Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2005). The Sage handbook of qualitative research. Thousand Oaks, CA: Sage Publications.
Fox, J., & Cheng, L. (2007). Did we take the same literacy test? Differing accounts of the Ontario Secondary School Literacy Test by first and second language test-takers. Assessment in Education: Principles, Policy & Practice, 14 (1), 9–26.
Green, D. R. (1998). Consequential aspects of the validity of achievement tests: A publisher’s point of view. Educational Measurement: Issues and Practice, 17, 16–19, 34.
Green, J. C. (2005). The generative potential of mixed methods inquiry. International Journal of Research & Method in Education, 28 (2), 207–211.
Hamp-Lyons, L. (2002). The scope of writing assessment. Assessing Writing, 8, 5–16.
Hillocks, G., Jr. (2002). The testing trap: How state writing assessments control learning. New York: Teachers College Press.
Howard, P. (2003). “Walking the talk” in assessment: Deconstructing standardized tests in the English language arts. English Quarterly, 35 (3–4), 24–28. Retrieved October 14, 2008, from CBCA Education database (Document ID: 620748211).
Huot, B. (2002). (Re)Articulating writing assessment: Writing assessment for teaching and learning. Logan, UT: Utah State University Press.
Kane, M. T. (2002). Validating high-stakes testing programs. Educational Measurement: Issues and Practice, 21 (1), 31–41.
Klinger, D. A., Deluca, C., & Miller, T. (2008). The evolving culture of large scale assessments in Canada. Canadian Journal of Educational Administration and Policy, 76. Retrieved September 4, 2008, from https://www.umanitoba.ca/publications/cjeap/articles/klinger.html
Koretz, D., & Hamilton, L. S. (2006). Testing for accountability in K-12. In: R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 531–578). Westport, CT: American Council on Education/Praeger.
Lane, S., Parke, C. S., & Stone, C. A. (1998). A framework for evaluating the consequences of assessment programs. Educational Measurement: Issues and Practice, 17, 24–28.
Linn, R. L. (1998). Partitioning the responsibility for the evaluation of the consequences of assessment programs. Educational Measurement: Issues and Practice, 17, 28–30.
Luce-Kapler, R., & Klinger, D. (2005). Uneasy writing: The defining moments of high-stakes literacy testing. Assessing Writing, 10, 157–173.
Mabry, L. (1999). Writing to the rubric. Phi Delta Kappan, 80, 673–680.
Mason, J. (2006). Mixing methods in a qualitatively driven way. Qualitative Research, 6 (1), 9–25.
Meaghan, D. E., & Casas, F. R. (1995). On standardized achievement testing: Response to Freedman and Wilson and a last word [Electronic version]. Interchange, 26 (1), 81–96.
Moss, P. A. (1998). The role of consequences in validity theory. Educational Measurement: Issues and Practice, 17, 6–12.
Murphy, S., & Yancey, K. B. (2008). Construct and consequence: Validity in writing assessment. In: C. Bazerman (Ed.), Handbook of research on writing: History, society, school, individual, text (pp. 365–385). New York: Lawrence Erlbaum Associates.
Pinto, L., Boler, M., & Norris, T. (2007). Literacy is just reading and writing, isn’t it? The Ontario Secondary School Literacy Test and its press coverage. Policy Futures in Education, 5 (1), 84–99. http://dx.doi.org/10.2304/pfie.2007.5.1.84
Popham, W. J. (1997). Consequential validity: Right concern-wrong concept. Educational Measurement: Issues and Practice, 16, 9–13.
Popham, W. J. (1999). Where large scale educational assessment is heading and why it shouldn’t. Educational Measurement: Issues and Practice, 18, 13–17.
Resnick, L., & Resnick, D. (1992). Assessing the thinking curriculum: New tools for educational reform. In: B. R. Gifford & M. C. O’Connor (Eds.), Changing assessments: Alternative views of aptitude, achievement and instruction (pp. 35–75). Boston: Kluwer Academic Publishers.
Rex, L. A., & Nelson, M. C. (2004). How teachers’ professional identities position high-stakes test preparation in their classrooms. Teachers College Record, 106 (6), 1288–1331.
Robinson, S. (2000). “Miles to go. . .”: Assessment and secondary English. In: B. R. C. Barrell & R. F. Hammett (Eds.), Advocating change: Contemporary issues in subject English (pp. 254–280). Toronto: Irwin Publishing.
Scott, T. (2008). “Happy to comply”: Writing assessment, fast-capitalism, and the cultural logic of control. Review of Education, Pedagogy, and Cultural Studies, 30 (2), 140–161.
Shepard, L. A. (1997). The centrality of use and consequences for test validity. Educational Measurement: Issues and Practice, 16, 5–8.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29 (7), 4–14.
Shepard, L. A. (2006). Classroom assessment. In: R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 623–646). Westport, CT: Praeger Publishers.
Slomp, D. H. (2007). Trapped between paradigms: Composition pedagogy in the context of a twelfth grade standardized writing assessment. Unpublished dissertation, University of Alberta.
Smith, M. L., & Fey, P. (2000). Validity and accountability in high-stakes testing. Journal of Teacher Education, 51 (5), 334–344.
Stiggins, R. J. (1999). Assessment, student confidence, and school success. Phi Delta Kappan, 81 (3), 191–198.
Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in the social and behavioral sciences. Thousand Oaks, CA: Sage.
Volante, L. (2006, May). Assessment, accountability, and educational reform in Ontario. Paper presented at the annual meeting of the Canadian Society for Studies in Education, Toronto, ON.
Wolf, S. A., & McIver, M. C. (1998). Writing whirligigs: The art and assessments of writing in Kentucky state reform (CSE Technical Report 496). Retrieved October 14, 2008, from http://eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED428128&ERICExtSearch_SearchType_0=no&accno=ED428128
Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483–503.
Yen, W. M. (1998). Investigating the consequential aspects of validity: Who is responsible and what should they do? Educational Measurement: Issues and Practice, 17, 5.

David Slomp is currently a post-doctoral fellow in the Faculties of Arts and Education at the University of Alberta. In January 2009, he will take up an assistant professor position in Language and Literacies Education at the University of Ottawa. His research interests include examining consequential validity evidence for high-stakes standardized writing assessment, ethical assessment practices, and knowledge transfer issues in secondary and post-secondary writing curriculum design.