
Is a Writing Sample Necessary for "Accurate Placement"?
By Patrick Sullivan and David Nielsen


Patrick Sullivan, English Department
[email protected]

David Nielsen, Director of Planning, Research, and Assessment

Manchester Community College
Manchester, CT 06040

ABSTRACT: The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the test. For practical, theoretical, and conceptual reasons, our conclusion is that a writing sample is not necessary for "accurate placement." Furthermore, our work on this project has shown us that the concept of accurate placement itself is slippery and problematic.

In 2004, the Two-Year College English Association (TYCA) established a formal research committee, the TYCA Research Initiative Committee, which recently designed and completed the first national survey of teaching conditions at 2-year colleges (Sullivan, 2008). The primary goal of this survey was to gather data from 2-year college English teachers about four major areas of professional concern: assessment, the use of technology in the classroom, teaching conditions, and institutionally designated Writing Across the Curriculum and Writing in the Disciplines programs. This survey gathered responses from 338 colleges nationwide and from all 50 states.

The single most important unresolved question related to assessment identified in the survey was this: Is a writing sample necessary for "accurate placement"? (Sullivan, 2008, pp. 13-15). This is a question that concerns teachers of English across a wide range of educational sites, including 2-year colleges and 4-year public and private institutions. The question has also been central to the scholarship on assessment and placement for many years. The issue has important professional implications, especially in terms of how assessment should be conducted to promote student success.

We have attempted to engage this question about writing samples using a robust data archive of institutional research from a large, 2-year college in the northeast. The question is a complex one, and it needs to be addressed with care and patience; indeed, this study has shown that the concept of accurate placement is itself far from simple.

Review of Research

The scholarship about assessment for placement is extensive and notoriously ambiguous. Space limitations preclude a detailed examination of that history here, but even a cursory overview of this work shows that definitive answers about assessment practices are hard to come by. The process of designing effective assessment practices has been complicated greatly by the conflicting nature of this research. McLeod, Horn, and Haswell (2005) characterize the state of this research quite well: "Assessment, as Ed White has long reminded us, is a polarizing issue in American education, involving complex arguments and a lack of clear-cut answers to important questions" (p. 556).

There is research that supports the use of holistically scored writing samples for placement (Matzen & Hoyt, 2004). There is other work that argues that writing samples are not necessary for accurate placement (Saunders, 2000). There is also research that criticizes the use of "one-shot, high-stakes" writing samples (Albertson & Marwitz, 2001). There is even work that questions whether it is possible to produce reliable or valid results from holistic writing samples (Belanoff, 1991; Elbow, 1996; Huot, 1990).

Additionally, the use of standardized assessment tools without a writing sample is reported in the literature (Armstrong, 2000; College of the Canyons, 1994; Hodges, 1990; Smittle, 1995), as is work arguing against such a practice (Behrman, 2000; Drechsel, 1999; Gabe, 1989; Hillocks, 2002; Sacks, 1999; Whitcomb, 2002). There has even been research supporting "directed" or "informed" self-placement (Blakesley, 2002; Elbow, 2003; Royer & Gilles, 1998; Royer & Gilles, 2003) and work that argues against such a practice (Matzen & Hoyt, 2004). Clearly, this is a question in need of clarity and resolution.

Methodology

Participants, Institutional Site, & Background

The college serves a demographically diverse student population and enrolls over 15,000 students in credit and credit-free classes each year. For the Fall 2009 semester, the college enrolled 7,366 students in credit-bearing classes, a full-time equivalent of 4,605. The institution is located in a small city in the northeast, and it draws students from suburban as well as urban districts. The average student age is 25, 54% of students are female, and 48.5% of students attend full time (system average: 35%). Approximately 33% of students enrolled in credit-bearing classes self-report as minorities (i.e., 15% African American, 14% Latino, and 4% Asian American/Pacific Islander in Fall 2009).

For many years, the English Department used a locally-designed, holistically graded writing sample to place students. Students were given an hour to read two short, thematically-linked passages and then construct a multiparagraph essay in response to these readings. Instructors graded these essays using a locally-developed rubric with six levels, modeled after the rubric developed for the New Jersey College Basic Skills Placement Test (Results, 1985) and subsequently modified substantially over the last 20 years. Based on these scores students were placed into a variety of courses: (a) one of three courses in the basic writing sequence, (b) one of four courses in the ESL sequence, or (c) the first-year composition course. As enrollment grew, however, this practice became increasingly difficult to manage. In 2005, for example, faculty evaluated over 1,500 placement essays. For much of this time, departmental policy required that each essay be evaluated by at least two different readers, and a few essays had as many as five different readers. Despite the care and attention devoted to this process, there continued to be "misplaced" students once they arrived in the classroom.

During this time, students were placed primarily by their placement essay scores, but students were also required to complete the ACCUPLACER Reading Comprehension and Sentence Skills tests, as mandated by state law. These scores were part of a multiple-measures approach, but the institutional emphasis in terms of placement was always on the quality of the student's writing sample.

Procedure

In an attempt to find a viable, research-based solution to an increasingly untenable assessment process, researchers examined institutional data to determine if there was any correspondence between the standardized test scores and the locally graded writing sample. There was, indeed, a statistically significant positive correlation between them (0.62 for 3,735 ACCUPLACER Sentence Skills scores vs. local essay scores; and 0.69 for 4,501 ACCUPLACER Reading Comprehension scores vs. local essay scores). Generally speaking, students who wrote stronger essays had higher ACCUPLACER scores. Or to put it a different way: Higher essay scores were generally found among students with higher ACCUPLACER scores.

Figure 1 provides a graphical display of this data by showing students' scores on the ACCUPLACER Reading Comprehension test and their institutional writing sample. Each point on this scatter plot represents a single student, and its position in the field is aligned with the student's essay score along the vertical axis and ACCUPLACER Reading Comprehension score along the horizontal axis. Many data points clustered around a line running from lower left to upper right, like the dotted line in Figure 1, reflect a strong linear relationship between reading scores and writing sample scores. Such a strong positive linear correlation suggests that, in general, for each increase in an ACCUPLACER score there should be a corresponding increase in an essay score. This is, in fact, what the study's data shows. The "best fit" line for students' scores is represented by the dark line in Figure 1.

[Figure 1. ACCUPLACER Reading Comprehension score vs. Local Essay score for new students, Fall 2001-Fall 2005. Vertical axis: local essay score; horizontal axis: ACCUPLACER Reading Comprehension score; the scatter plot shows a strong linear relationship and a "best fit" line for student scores.]

A correlation is a common statistic used to measure the relationship between two variables. Correlation coefficients range from -1 to +1. A correlation coefficient of zero suggests there is no linear relationship between the variables of interest; knowing what a student scored on the ACCUPLACER would give us no indication of how they would score on the writing sample. A correlation of +1 suggests that students with low ACCUPLACER scores would also have low writing sample scores, and students with higher ACCUPLACER scores would have high writing sample scores. A negative correlation would suggest that students with high scores on one test would have low scores on the other.

The correlation coefficients representing the linear relationship between ACCUPLACER scores and writing sample scores at our college were surprisingly strong and positive (see Figure 2). It appears that the skill sets ACCUPLACER claims to measure in these two tests approximate the rubrics articulated for evaluating the locally-designed holistic essay, at least in a statistical sense. Therefore, placement becomes a question of how to match the skills measured by the assessment instruments with the curriculum and the skills faculty identified in the rubrics used to evaluate holistically scored essays. (Establishing cut-off scores, especially for placement into different levels of our basic writing program, required careful research, field-testing, and study on our campus.)
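For readers who want to run this kind of analysis on their own institutional data, the calculation itself is straightforward. The sketch below is purely illustrative: the paired score lists are hypothetical stand-ins, not the study's records, and the helper function simply computes a Pearson correlation coefficient of the sort reported above (0.62 and 0.69).

```python
# Minimal sketch: Pearson correlation between placement test scores and
# holistically scored essay ratings. All data below are hypothetical
# stand-ins, not the study's records.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length lists of scores."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need two equal-length lists with at least two pairs")
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical paired scores: ACCUPLACER Reading Comprehension vs. local essay.
reading_scores = [45, 52, 60, 68, 75, 81, 90, 98, 105, 112]
essay_scores = [3, 4, 4, 5, 6, 7, 7, 8, 9, 10]

print(f"Pearson r = {pearson_r(reading_scores, essay_scores):.2f}")
```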

Discussion

Critical Thinking

Are the reported findings simply a matter of serendipity, or do writing samples, reading comprehension tests, and sentence skills tests have anything in common? We argue here that they each have the crucial element of critical thinking in common, and that each of these tests measures roughly the same thing: a student's cognitive ability.


After reviewing ACCUPLACER's own descriptions of the reading and sentence skills tests, we found that much of what they seek to measure is critical thinking and logic. The reading test, for example, asks students to identify "main ideas," recognize the difference between "direct statements and secondary ideas," make "inferences" and "applications" from material they have read, and understand "sentence relationships" (College Board, 2003, p. A-17). Even the sentence skills test, which is often dismissed simply as a "punctuation test," in fact really seeks to measure a student's ability to understand relationships between and among ideas. It asks students to recognize complete sentences, distinguish between "coordination" and "subordination," and identify "clear sentence logic" (College Board, p. A-19; for a broad-ranging discussion of standardized tests, see Achieve, Inc., 2004; Achieve, Inc., 2008).

It's probably important to mention here that it appears to be common practice among assessment scholars and college English department personnel to discuss standardized tests that they have not taken themselves. After taking the ACCUPLACER reading and sentence skills tests, a number of English department members no longer identify the sentence skills section of ACCUPLACER as a "punctuation test." It is not an easy test to do well on, and it does indeed appear to measure the kind of critical thinking skills that ACCUPLACER claims it does. This study focuses on ACCUPLACER because it is familiar as the institution's standardized assessment mechanism and has been adopted in the state-wide system. Other standardized assessment instruments might yield similar results, but research would be needed to confirm this.

Both the reading and the sentence skills tests require students to recognize, understand, and assess relationships between ideas. We would argue that this is precisely what a writing sample is designed to measure. Although one test is a "direct measure" of writing ability and the other two are "indirect" measures, they each seek to measure substantially the same thing: a student's ability to make careful and subtle distinctions among ideas. Although standardized reading and sentence skills tests do not test writing ability directly, they do test a student's thinking ability, and this may, in fact, be effectively and practically the same thing.

The implications here are important. Because human and financial resources are typically in short supply on most campuses, it is important to make careful and thoughtful choices about how and where to expend them. The data reported in this study suggest that we do not need writing samples to place students, and moving to less expensive assessment practices could save colleges a great deal of time and money. My colleagues, for example, typically spent at least 5-10 minutes assessing each student placement essay, and there were other "costs" involved as well, including work done on developing prompts, norming readers, and managing logistics and paperwork. Our data suggest that it would be possible to shift those resources elsewhere, preferably to supporting policies and practices that help students once they arrive in our classrooms. This is a policy change that could significantly affect student learning.

[Figure 2. Correlations between ACCUPLACER and Local Essay scores for new first-time college students, Fall 2001-Fall 2005. Bars show correlations for Reading Comprehension, Sentence Skills, and Reading + Sentence combined. Base sizes vary because they include only new first-time college students with a valid essay score and ACCUPLACER score: Reading Comprehension, n = 4,501; Sentence Skills, n = 3,735; sum, n = 3,734. Coefficients represent relationships using Pearson correlation.]

Writing Samples Come with Their Own Set of Problems

Although this "rough equivalency" may be off-putting to some, it is important to remember that writing samples come with their own set of problems. There are certain things we have come to understand and accept about assessing writing in any kind of assessment situation: for placement, for proficiency, and for class work. Peter Elbow (1996) has argued, for example, that "we cannot get a trustworthy picture of someone's writing ability or skill if we see only one piece of writing, especially if it is written in test conditions. Even if we are only testing for a narrow subskill such as sentence clarity, one sample can be completely misleading" (p. 84; see also White, 1993). Richard Haswell (2004) has argued, furthermore, that the process of holistic grading, a common practice in placement and proficiency assessment, is itself highly problematic, idiosyncratic, and dependent on a variety of subtle, nontransferable factors:

Studies of local evaluation of placement essays show the process too complex to be reduced to directly transportable schemes, with teacher-raters unconsciously applying elaborate networks of dozens of criteria (Broad, 2003), using fluid, overlapping categorizations (Haswell, 1998), and grounding decisions on singular curricular experience (Barritt, Stock, & Clark, 1986). (p. 4)

Karen Greenberg (1992) has also noted that teachers will often assess the same piece of writing very differently:

Readers will always differ in their judgments of the quality of a piece of writing; there is no one "right" or "true" judgment of a person's writing ability. If we accept that writing is a multidimensional, situational construct that fluctuates across a wide variety of contexts, then we must also respect the complexity of teaching and testing it. (p. 18)

Davida Charney (1984) makes a similar point in her important survey of assessment scholarship: "Under normal reading conditions, even experienced teachers of writing will disagree strongly over whether a given piece of writing is good or not, or which of two writing samples is better" (p. 67). There is even considerable disagreement about what exactly constitutes "college-level" writing (Sullivan & Tinberg, 2006).

Moreover, as Ed White (1995) has argued, a single writing sample does not always accurately reflect a student's writing ability:

An essay test is essentially a one-question test. If a student cannot answer one question on a test with many questions, that is no problem; but if he or she is stumped (or delighted) by the only question available, scores will plummet (or soar). . . . The essence of reliability findings from the last two decades of research is that no single essay test will yield highly reliable results, no matter how careful the testing and scoring apparatus. If the essay test is to be used for important or irreversible decisions about students, a minimum of two separate writing samples must be obtained, or some other kind of measurement must be combined with the single writing score. (p. 41)


Clearly, writing samples alone come with their own set of problems, and they do not necessarily guarantee "accurate" placement. Keeping all this in mind, we would now like to turn our attention to perhaps the most problematic theoretical question about assessment for placement, the idea that drives so much thinking and decision-making about placement practices: the concept of "accurate placement."

What Is "Accurate Placement"?The term "accurate placement" is frequentlyused in assessment literature and in informaldiscussions about placement practices. Howev-er, the term has been very hard to define, and itmay ultimately be impossible to measure.

Much of the literature examining the effectiveness of various placement tests is based on research that ties placement scores to class performance or grades. Accurate placement, in these studies, is operationally defined by grades earned. If a student passes the course, it is assumed that they were accurately placed.

However, other studies have identified several factors that are better predictors of student grades than placement test scores. These include the following:

- a student's precollege performance, including high school GPA, grade in last high school English class, and number of English classes taken in high school (Hoyt & Sorensen, 2001, p. 28; Armstrong, 2000, p. 688);

- a student's college performance in the first term, including assignment completion, presence of adverse events, and usage of skill labs (i.e., tutoring, writing center, etc.; Matzen & Hoyt, 2004, p. 6);

- the nonacademic influences at play in a student's life, including hours spent working and caring for dependents and students' "motivation" (see Armstrong, 2000, pp. 685-686);

- and, finally, "between class variations," including the grading and curriculum differences between instructors (Armstrong, 2000, pp. 689-694).

How do we know if a student has been "accurately" placed when so many factors contribute to an individual student's success in a given course? Does appropriate placement advice lead to success, or is it other, perhaps more important, factors like quality of teaching, a student's work ethic and attitude, the number of courses a student is taking, the number of hours a student is working, or even asymmetrical events like family emergencies and changes in work schedules?

We know from a local research study, for example, that student success at our community college is contingent on all sorts of "external" factors. Students who are placed on academic probation or suspension and who wish to continue their studies are required to submit to our Dean of Students a written explanation of the problems and obstacles they encountered and how they intend to overcome them. A content analysis of over 750 of these written explanations collected over the course of 3 years revealed a common core of issues that put students at risk. What was most surprising was that the majority of these obstacles were nonacademic.

[Figure 3. New students placed into upper-level Developmental English, Fall 2001-Fall 2005: pass rate by ACCUPLACER Reading Comprehension scores. Horizontal axis: ACCUPLACER Reading Comprehension score range and number of students in each data point (e.g., 30-40, n=27).]

Most students reported getting into trouble academically for a variety of what can be described as "personal" or "environmental" reasons. The largest single reason for students being placed on probation (44%) was "Issues in Private Life." This category included personal problems, illness, or injury; family illness; problems with childcare; or the death of a family member or friend. The second most common reason was "Working Too Many Hours" (22%). "Lack of Motivation/Unfocused" was the third most common reason listed (9%). "Difficulties with Transportation" was fifth (5%). Surprisingly, "Academic Problems" (6%) and being "Unprepared for College" (4%) rated a distant 4th and 6th, respectively (Sullivan, 2005, p. 150).

In sum, then, it seems clear that placement is only one of many factors that influence a student's performance in their first-semester English classes. A vitally important question emerges from these studies: Can course grades determine whether placement advice has, indeed, been "accurate"?

Correlations Between Test Scores and Grades

To test the possible relationship between "accurate placement" and grades earned, we examined a robust sample of over 3,000 student records to see if their test scores had a linear relationship with grades earned in basic/developmental English and college-level composition classes. We expected to find improved pass rates as placement scores increased. However, that is not what we found.

Figure 3 displays the course pass rates of students placed into English 093, the final basic writing class in the three-course basic writing sequence. This particular data set illustrates a general pattern found throughout the writing curriculum on campus. This particular data set is used because this is the course in the curriculum where students make the transition from basic writing to college-level work. Each column on the graph represents students' ACCUPLACER Reading scores summed into a given 10-point range (i.e., 50-60, 60-70, etc.). The height of each bar represents the share of students with scores falling in a given range who earned a C or better in English 093.
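The pass-rate breakdown displayed in Figure 3 can be approximated with a simple grouping calculation. The sketch below is a hypothetical illustration based on the description above: the student records, the 10-point bins, and the set of grades counted as "C or better" are all stand-ins, not the study's data.

```python
# Minimal sketch of the Figure 3 calculation: group students into 10-point
# ACCUPLACER Reading Comprehension ranges and report the share earning a
# C or better. Records below are hypothetical stand-ins.
from collections import defaultdict

# (reading score, final letter grade) pairs for one hypothetical cohort
records = [(34, "C"), (38, "D"), (47, "B"), (52, "F"), (55, "A"),
           (63, "C"), (66, "B"), (71, "D"), (78, "A"), (83, "C")]

PASSING = {"A", "A-", "B+", "B", "B-", "C+", "C"}  # "C or better"

bins = defaultdict(lambda: [0, 0])  # bin lower bound -> [passed, total]
for score, grade in records:
    low = (score // 10) * 10  # e.g., 34 falls in the 30-40 bin
    bins[low][1] += 1
    if grade in PASSING:
        bins[low][0] += 1

for low in sorted(bins):
    passed, total = bins[low]
    print(f"{low}-{low + 10}: n={total}, pass rate={passed / total:.0%}")
```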

For the data collected for this particular cohort of students enrolled in the final class of a basic writing sequence, test scores do not have a strong positive relationship with grades. There is, in fact, a weak positive linear correlation between test scores and final grades earned (using a 4-point scale for grades that includes decimal values between the whole-number values). In numeric terms, the data produces correlation coefficients in the 0.10 to 0.30 range for most course grade to placement test score relationships. This appears to be similar to what other institutions find (see James, 2006, pp. 2-3). Further complicating the question of "accurate placement" is the inability to know how a student might have performed had they placed into a different class with more or less difficult material. Clearly, "accurate placement" is a problematic concept and one that may ultimately be impossible to validate.
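The weak grade-to-score correlations reported here (roughly 0.10 to 0.30) depend on first converting letter grades to the 4-point scale with decimal values that the authors describe. The sketch below shows one possible version of that conversion and the resulting correlation; the grade-point mapping and the records are hypothetical stand-ins, and statistics.correlation requires Python 3.10 or later.

```python
# Minimal sketch: map letter grades to a 4-point scale (with decimal values
# between whole numbers) and correlate them with placement test scores.
# The mapping and records below are hypothetical stand-ins.
from statistics import correlation  # Pearson correlation, Python 3.10+

GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0,
                "D-": 0.7, "F": 0.0}

# (ACCUPLACER score, final grade) pairs for one hypothetical course cohort
records = [(42, "B"), (47, "C-"), (55, "A-"), (58, "D"), (63, "B+"),
           (66, "F"), (72, "C"), (77, "B-"), (84, "C+"), (90, "A")]

scores = [score for score, _ in records]
points = [GRADE_POINTS[grade] for _, grade in records]

print(f"score-to-grade Pearson r = {correlation(scores, points):.2f}")
```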

Recommendations for Practice

Assessment for Placement Is Not an Exact Science

It seems quite clear that we have much to gain from acknowledging that assessment for placement is not an exact science and that a rigid focus on accurate placement may be self-defeating. The process, after all, attempts to assess a complex and interrelated combination of skills that include reading, writing, and thinking. This is no doubt why teachers have historically preferred "grade in class" to any kind of "one-shot, high stakes" proficiency or "exit" assessment (Albertson & Marwitz, 2001). Grade in class is typically arrived at longitudinally over the course of several months, and it allows students every chance to demonstrate competency, often in a variety of ways designed to accommodate different kinds of learning styles and student preferences. It also provides teachers with the opportunity to be responsive to each student's unique talents, skills, and academic needs. "Grade earned" also reflects student dispositional characteristics, such as motivation and work ethic, which are essential to student success and often operate independently of other kinds of skills (Duckworth & Seligman, 2005).

Keeping all this in mind, consider what individual final grades might suggest about placement. One could make the argument, for example, that students earning excellent grades in a writing class (B+, A-, A) were misplaced, that the course was too easy for them, and that they could have benefited from being placed into a more challenging class. One might also argue, conversely, using precisely these same final grades, that these students were perfectly placed, that they encountered precisely the right level of challenge, and that they met this challenge in exemplary ways.

One could also look at lower grades and make a similar argument: Students who earned grades in the C-/D+/D/D- range were not well placed because such grades suggest that these students struggled with the course content. Conversely, one could also argue about this same cohort of students that the challenge was appropriate because they did pass the class, after all, although admittedly with a less than average grade. Perhaps the only students that might be said to be "accurately placed" were those who earned grades in the C/C+/B-/B range, suggesting that the course was neither too easy nor too difficult. Obviously, using final grades to assess placement accuracy is highly problematic. Students fail courses for all sorts of reasons, and they succeed in courses for all sorts of reasons as well.

Teacher Variability

Accurate placement also assumes that there will always and invariably be a perfect curricular and pedagogical match for each student, despite the fact that most English departments have large staffs with many different teachers teaching the same course. Invariably and perhaps inevitably, these teachers use a variety of different approaches, theoretical models, and grading rubrics to teach the same class. Although courses carry identical numbers and course titles and are presented to students as different sections of the same course, they are, in fact, seldom identical (Armstrong, 2000). Problems related to determining and validating accurate placement multiply exponentially as one thinks of the many variables at play here. As Armstrong (2000) has noted, the "amount of variance contributed by the instructor characteristics was generally 15%-20% of the amount of variance in final grade. This suggested a relatively high degree of variation in the grading practices of instructors" (p. 689).

One "Correct" Placement ScoreAchieving accurate placement also assumes thatthere is only one correct score or placement ad-vice for each student and that this assessment canbe objectively determined with the right place-ment protocol. The job of assessment profession-

als, then, is to find each student's single, correctplacement score. This is often deemed especiallycrucial for students who place into hasic writingclasses because there is concern that such stu-dents will be required to postpone and attenuatetheir academic careers by spending extra time inEnglish classes before they will be able to enroll incollege-level, credit-bearing courses.

But perhaps rather than assume the need to determine a single, valid placement score for each student, it would be better to theorize the possibility of multiple placement options for students that would each offer important (if somewhat different) learning opportunities. This may be especially important for basic writers, who often need as much time on task and as much classroom instruction as they can get (Sternglass, 1997; Traub, 1994; Traub, 2000). Furthermore, there are almost always ways of making adjustments for students who are egregiously misplaced (moving them up to a more appropriate course at the beginning of the semester, or skipping a level in a basic writing sequence based on excellent work, for example).

Obviously, assessing students for placement is a complex endeavor, and, for better or worse, it seems clear that educators must accept that fact. As Ed White (1995) has noted, "the fact is that all measurement of complex ability is approximate, open to question, and difficult to accomplish. . . . To state that both reliability and validity are problems for portfolios and essay tests is to state an inevitable fact about all assessment. We should do the best we can in an imperfect world" (p. 43).

Economic and Human Resources

Two other crucial variables to consider in seeking to develop effective, sustainable assessment practices are related directly to local, pragmatic conditions: (a) administrative support and (b) faculty and staff workload. Recent statements from professional organizations like the Conference on College Composition and Communication and the National Council of Teachers of English, for example, provide scholarly guidance regarding exactly how to design assessment practices should it be possible to construct them under ideal conditions (Conference, 2006; Elbow, 1996; Elliot, 2005; Huot, 2002; National, 2004). There seems to be little debate about what such practices would look like given ideal conditions. Such an assessment protocol for placement, for example, would consider a variety of measures, including the following: (a) standardized test scores and perhaps also a timed, impromptu essay; (b) a portfolio of the student's writing; (c) the student's high school transcripts; (d) recommendations from high school teachers and counselors; (e) a personal interview with a college counselor and a college English professor; and (f) the opportunity for the student to do some self-assessment and self-advocating in terms of placement.

Unfortunately, most institutions cannot offer English departments the ideal conditions that a placement practice like this would require. In fact, most assessment procedures must be developed and employed under less than ideal conditions. Most colleges have limited human and financial resources to devote to assessment for placement, and assessment protocol will always be determined by constraints imposed by time, finances, and workload. Most colleges, in fact, must attempt to develop theoretically sound and sustainable assessment practices within very circumscribed parameters and under considerable financial constraints. This is an especially pressing issue for 2-year colleges because these colleges typically lack the resources common at 4-year institutions, which often have writing program directors and large assessment budgets.

A Different Way to Theorize Placement

Since assessment practices for placement are complicated by so many variables (including student dispositional, demographic, and situational characteristics as well as instructor variation, workload, and institutional support), we believe the profession has much to gain by moving away from a concept of assessment based on determining a "correct" or "accurate" placement score for each incoming student. Instead, we recommend conceptualizing placement in a very different way: as simply a way of grouping students together with similar ability levels. This could be accomplished in any number of ways, with or without a writing sample. The idea here is to use assessment tools only to make broad distinctions among students and their ability to be successful college-level readers, writers, and thinkers. This is how many assessment professionals appear to think they are currently being used, but the extensive and conflicted scholarship about assessment for placement (as well as anecdotal evidence that invariably seems to surface whenever English teachers discuss this subject) suggests otherwise. This conceptual shift has the potential to be liberating in a number of important ways. There may be much to gain by asking placement tools to simply group students together by broadly defined ability levels that match curricular offerings.

Such a conception of placement would allow us to dispense with the onerous and generally thankless task of chasing after that elusive, Platonically ideal placement score for every matriculating student, a practice which may, in the end, be chimerical. Such a conception of assessment would free educators from having to expend so much energy and worry on finding the one correct or accurate placement advice for each incoming student, and it would allow professionals the luxury of expending more energy on what should no doubt be the primary focus: what happens to students once they are in a classroom after receiving their placement advice.

We advocate a practice here that feels much less overwrought and girded by rigid boundaries and "high stakes" decisions. Instead, we support replacing this practice with one that is simply more willing to acknowledge and accept the many provisionalities and uncertainties that govern the practice of assessment. We believe such a conceptual shift would make the process of assessing incoming students less stressful, less contentious, and less time- and labor-intensive.


For all sorts of reasons, this seems like a conceptual shift worth pursuing.

Furthermore, we recommend an approach to assessment that acknowledges four simple facts:

1. Most likely, there will always be a need for some sort of assessment tool to assess incoming students.

2. All assessment tools are imperfect.

3. The only available option is to work with these assessment tools and the imperfections they bring with them.

4. Students with different "ability profiles" will inevitably test at the same "ability level" and be placed within the same class. Teachers and students will have to address these finer differences within ability profiles together within each course and class.

Such an approach would have many advantages, especially when one considers issues related to workload, thoughtful use of limited human and economic resources, and colleges with very large groups of incoming students to assess. The practice of using a standardized assessment tool for placement may give colleges one practical way to proceed that is backed by research. Resources once dedicated to administering and evaluating writing samples for placement could instead be devoted to curriculum development, assessment of outcomes, and student support.

Conclusion

We began this research project with a question that has been central to the scholarship on assessment and placement for many years: Is a writing sample necessary for accurate placement? It is our belief that we can now answer this question with considerable confidence: A writing sample is not necessary for accurate placement. This work supports and extends recent research and scholarship (Armstrong, 2000; Belanoff, 1991; Haswell, 2004; Sullivan, 2008; Saunders, 2000). Furthermore, and perhaps more importantly, our work indicates that the concept of accurate placement, which has long been routinely used in discussions of assessment, should be used with great caution, and then only with a full recognition of the many factors that complicate the ability to accurately predict and measure student achievement.

Finally, we support the recent Conference on College Composition and Communication (CCCC) "Position Statement on Writing Assessment" (2006). We believe that local history and conditions must always be regarded as important factors whenever discussing assessment practices at individual institutions.

References

Achieve, Inc. (2004). Ready or not: Creating a high school diploma that counts. Retrieved from http://www.achieve.org/node/552

Achieve, Inc. (2008). Closing the expectations gap 2008. Retrieved from http://www.achieve.org/node/990

Albertson, K., & Marwitz, M. (2001). The silent scream: Students negotiating timed writing assessments. Teaching English in the Two-Year College, 29(2), 144-153.

Armstrong, W. B. (2000). The association among student success in courses, placement test scores, student background data, and instructor grading practices. Community College Journal of Research and Practice, 24(8), 681-691.

Barritt, L., Stock, P. T., & Clark, F. (1986). Researching practice: Evaluating assessment essays. College Composition and Communication, 37(3), 315-327.

Behrman, E. (2000). Developmental placement decisions: Content-specific reading assessment. Journal of Developmental Education, 23(3), 12-18.

Belanoff, P. (1991). The myths of assessment. Journal of Basic Writing, 10(1), 54-67.


Blakesley, D. (2002). Directed self-placement in the university. WPA: Writing Program Administration, 25(2), 9-39.

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Charney, D. (1984). The validity of using holistic scoring to evaluate writing: A critical overview. Research in the Teaching of English, 18(1), 65-81.

College Board. (2003). ACCUPLACER online technical manual. Retrieved from www.southtexascollege.edu/~research/Math%20placement/TechManual.pdf

College of the Canyons Office of Institutional Development. (1994). Predictive validity study of the APS Writing and Reading Tests [and] validating placement rules of the APS Writing Test. Valencia, CA: Office of Institutional Development for College of the Canyons. Retrieved from ERIC database. (ED 376915)

Conference on College Composition and Communication. (2006). Writing assessment: A position statement. Retrieved from http://www.ncte.org/cccc/resources/positions/123784.htm

Drechsel, J. (1999). Writing into silence: Losing voice with writing assessment technology. Teaching English in the Two-Year College, 20(4), 380-387.

Duckworth, A., & Seligman, M. (2005). Self-discipline outdoes IQ in predicting academic performance of adolescents. Psychological Science, 16(12), 939-944.

Elbow, P. (1996). Writing assessment in the twenty-first century: A utopian view. In L. Bloom, D. Daiker, & E. White (Eds.), Composition in the twenty-first century: Crisis and change (pp. 83-100). Carbondale, IL: Southern Illinois University Press.

Elbow, P. (2003). Directed self-placement in relation to assessment: Shifting the crunch from entrance to exit. In D. J. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 15-30). Carbondale, IL: Southern Illinois University Press.

Elliot, N. (2005). On a scale: A social history of writing assessment in America. New York, NY: Peter Lang.

Freedman, S. W. (1981). Influences on evaluators of expository essays: Beyond the text. Research in the Teaching of English, 15(3), 245-255.

Gabe, L. C. (1989). Relating college-level course performance to ASSET placement scores (Institutional Research Report RR89-22). Fort Lauderdale, FL: Broward Community College. Retrieved from ERIC database. (ED 309823)

Greenberg, K. (1992, Fall/Winter). Validity and reliability issues in the direct assessment of writing. WPA: Writing Program Administration, 16, 7-22.

Haswell, R. H. (1991). Gaining ground in college writing: Tales of development and interpretation. Dallas, TX: Southern Methodist University Press.

Haswell, R. H. (2004). Post-secondary entrance writing placement: A brief synopsis of research. Retrieved from http://www.depts.drew.edu/composition/Placement2008/Haswell-placement.pdf

Hillocks, G. (2002). The testing trap: How state writing assessments control learning. New York, NY: Teachers College Press.

Hodges, D. (1990). Entrance testing and student success in writing classes and study skills classes: Fall 1989, Winter 1990, and Spring 1990. Eugene, OR: Lane Community College. Retrieved from ERIC database. (ED 331555)

Hoyt, J. E., & Sorensen, C. T. (2001). High school preparation, placement testing, and college remediation. Journal of Developmental Education, 25(2), 26-34.

Huot, B. (1990). Reliability, validity, and holistic scoring: What we know and what we need to know. College Composition and Communication, 41(2), 201-213.

Huot, B. (2002). (Re)Articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press.

James, C. L. (2006). ACCUPLACER OnLine: Accurate placement tool for developmental programs? Journal of Developmental Education, 30(2), 2-8.

Matzen, R. N., & Hoyt, J. E. (2004). Basic writing placement with holistically scored essays: Research evidence. Journal of Developmental Education, 28(1), 2-4+.

McLeod, S., Horn, H., & Haswell, R. H. (2005). Accelerated classes and the writers at the bottom: A local assessment story. College Composition and Communication, 56(4), 556-580.

National Council of Teachers of English Executive Committee. (2004). Framing statements on assessment: Revised report of the assessment and testing study group. Retrieved from http://www.ncte.org/about/over/positions/category/assess/118875.htm

New Jersey Department of Higher Education. (1984). Results of the New Jersey College Basic Skills Placement Testing: Report to the Board of Higher Education. Trenton, NJ: Author. Retrieved from ERIC database. (ED 290390)

Royer, D. J., & Gilles, R. (1998). Directed self-placement: An attitude of orientation. College Composition and Communication, 50(1), 54-70.

Royer, D. J., & Gilles, R. (Eds.). (2003). Directed self-placement: Principles and practices. Carbondale, IL: Southern Illinois University Press.

Sacks, P. (1999). Standardized minds: The high price of America's testing culture and what we can do about it. Cambridge, MA: Perseus.

Saunders, P. I. (2000). Meeting the needs of entering students through appropriate placement in entry-level writing courses. Saint Louis, MO: Saint Louis Community College at Forest Park. Retrieved from ERIC database. (ED 447505)

Smittle, P. (1995). Academic performance predictors for community college student assessment. Community College Review, 23(2), 37-43.

Sternglass, M. (1997). Time to know them: A longitudinal study of writing and learning at the college level. Mahwah, NJ: Lawrence Erlbaum Associates.

Sullivan, P. (2005). Cultural narratives about success and the material conditions of class at the community college. Teaching English in the Two-Year College, 33(2), 142-160.

Sullivan, P. (2008). An analysis of the national TYCA Research Initiative Survey, Section II: Assessment practices in two-year college English programs. Teaching English in the Two-Year College, 36(1), 7-26.

Sullivan, P., & Tinberg, H. (Eds.). (2006). What is "college-level" writing? Urbana, IL: NCTE.

Traub, J. (1994). City on a hill: Testing the American dream at City College. New York, NY: Addison-Wesley.

Traub, J. (2000, January 16). What no school can do. The New York Times Magazine, 52.

Whitcomb, A. J. (2002). Incremental validity and placement decision effectiveness: Placement tests vs. high school academic performance measures (Doctoral dissertation, University of Northern Colorado). Abstract from UMI ProQuest Digital Dissertations. (Dissertation Abstract Item 3071872)

White, E. M. (1993). Holistic scoring: Past triumphs, future challenges. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 79-106). Cresskill, NJ: Hampton.

White, E. M. (1994). Teaching and assessing writing (2nd ed.). San Francisco, CA: Jossey-Bass.

White, E. M. (1995). An apologia for the timed impromptu essay test. College Composition and Communication, 46(1), 30-45.

Williamson, M. M., & Huot, B. (1993). Validating holistic scoring for writing assessment: Theoretical and empirical foundations. Cresskill, NJ: Hampton.

Zak, F., & Weaver, C. C. (Eds.). (1998). The theory and practice of grading writing. Albany, NY: State University of New York Press.
