The Power of Curriculum-Based Testing

The state test is taken once a year, at the end of the year, and the results do not come back until after the students start school the next year. So the turnaround for the test takes a whole year, during which the student may or may not get specific feedback and help concerning a test taken a year ago.

For the most part, students know when a test matters and when it doesn't, and the student test scores show this. For example, in Texas, the ninth and tenth grade scores are usually pretty low, but then all of a sudden the students get smart and the scores go way up when they reach eleventh grade. It so happens that in Texas, students who do not pass the eleventh-grade test do not graduate. Motivation does wonders for test scores.

School districts have gotten savvy to the game of testing and figure that giving tests that prep students for the big state test will help students do better. Districts have instituted what they call "benchmark" tests to determine student preparedness for the state test. These are often administered once a quarter, or in some cases, monthly. Like the state test, they are not graded, and students know that doing well or poorly on the test does not affect their standing in the class. Except for some school districts that target students with interventions based on their benchmark scores, not many changes occur because of the benchmarks. How useful is this?

There are useful tests: the ones that teachers make, which are beneficial for students, not just for teachers and administrators. They are called curriculum-based assessments (CBAs), and they are what teachers should be teaching to.

Are CBAs perfect? Hardly. So that is exactly where all of the teacher, principal, and district curriculum effort should go: making them better. Rather than spending valuable time preparing for a minimum-standards state test or a benchmark test, teachers should focus on getting students ready to pass a CBA.

The Nuts and Bolts of CBA

The ideal teacher test is designed before instruction begins (according to Wiggins & McTighe in Understanding by Design). Each question is correlated to a specific and prioritized student-learning objective, and each is designed to be easy, medium, or difficult, so that the test is differentiated for all students. The students are given the pre-test before instruction begins, and if all of the students pass with 80 percent or better, the teacher can compact the curriculum to be taught and move on to the next unit more quickly.

But if full instruction is necessary, according to the pre-test scores, then the value the teacher added will be reflected in the post-test scores minus the pre-test scores. Additionally, since both the pre-test and the post-test are correlated to the student learning standards, a teacher can quickly identify specific student learning needs and then re-teach them in a better, more effective way.
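
To make this bookkeeping concrete, here is a minimal sketch in Python. The scores, student names, and objective names are all hypothetical; only the decision rules, the 80 percent bar for compacting and value added as post-test minus pre-test, come from the description above.

    # Hypothetical pre-/post-test data, keyed by student, then by
    # learning objective (scores are percentages, 0-100).
    PASS_THRESHOLD = 80  # the 80 percent bar described above

    pre_test = {
        "student_a": {"obj_1": 90, "obj_2": 85, "obj_3": 88},
        "student_b": {"obj_1": 55, "obj_2": 70, "obj_3": 40},
    }
    post_test = {
        "student_a": {"obj_1": 95, "obj_2": 90, "obj_3": 92},
        "student_b": {"obj_1": 80, "obj_2": 85, "obj_3": 60},
    }

    def overall(scores):
        """Average a student's scores across all objectives."""
        return sum(scores.values()) / len(scores)

    # If every student clears the bar on the pre-test, compact the unit.
    if all(overall(s) >= PASS_THRESHOLD for s in pre_test.values()):
        print("All students passed the pre-test: compact the unit and move on.")
    else:
        for student in pre_test:
            # Value added: post-test average minus pre-test average.
            gain = overall(post_test[student]) - overall(pre_test[student])
            print(f"{student}: value added = {gain:+.1f} points")
            # Because each item maps to an objective, weak objectives can
            # be identified directly for re-teaching.
            for obj, score in post_test[student].items():
                if score < PASS_THRESHOLD:
                    print(f"  re-teach {obj} (post-test score: {score})")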

This type of ideal test can be time-consuming to create, even though the benefits outweigh the hours spent. One way to mitigate this concern is to have the ideal tests created by a team of teachers, all agreeing to a specific standard for the correlation and the quality of each test item. This method has the effect of holding each of the other test writers accountable for good instruction in their respective classrooms. It also raises the quality of the tests, because more eyes are critically analyzing each test for errors in content and format.

Education author and professor Fenwick English argues that the test that should be "taught to" is the correctly designed and correlated CBA, not the minimum-standards state test.

How do you make useful and beneficial CBAs for students?

What Is Curriculum-Based Measurement and What Does It Mean to My Child?

Curriculum-Based Measurement (CBM) is a method teachers use to find out how students are progressing in basic academic areas such as math, reading, writing, and spelling.

CBM can be helpful to parents because it provides current, week-by-week information on the progress their children are making. When your child's teacher uses CBM, he or she finds out how well your child is progressing in learning the content for the academic year. CBM also monitors the success of the instruction your child is receiving: if your child's performance is not meeting expectations, the teacher changes the way of teaching your child to try to find the type and amount of instruction your child needs to make sufficient progress toward meeting the academic goals.

How Does CBM Work?

When CBM is used, each child is tested briefly each week. The tests generally last from 1 to 5 minutes. The teacher counts the number of correct and incorrect responses made in the time allotted to find the child's score. For example, in reading, the child may be asked to read aloud for one minute. Each child's scores are recorded on a graph and compared to the expected performance on the content for that year. The graph allows the teacher, and you, to see quickly how the child's performance compares to expectations. (The figure below is an example of what a CBM graph looks like.)
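
Such a graph is simple to produce. The Python sketch below uses hypothetical weekly scores and a hypothetical year-end goal; the aim line simply runs from the starting score to the goal.

    # Plot hypothetical weekly CBM scores against an "aim line" running
    # from the starting score to a hypothetical year-end goal.
    import matplotlib.pyplot as plt

    weeks = list(range(1, 11))                          # first ten weeks
    scores = [42, 45, 44, 48, 51, 50, 55, 57, 56, 60]   # words correct/minute

    start, goal, total_weeks = 42, 100, 36              # hypothetical goal
    aim = [start + (goal - start) * (w - 1) / (total_weeks - 1) for w in weeks]

    plt.plot(weeks, scores, "o-", label="Weekly CBM score")
    plt.plot(weeks, aim, "--", label="Aim line (expected progress)")
    plt.xlabel("Week")
    plt.ylabel("Words read correctly per minute")
    plt.title("CBM progress graph (hypothetical data)")
    plt.legend()
    plt.show()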

After the scores are entered on the graphs, the teacher decides whether to continue instruction in the same way or to change it. A change is called for if the child's rate of learning progress is lower than is needed to meet the goal for the year.

The teacher can change instruction in any of several ways. For example, he or she might increase instructional time, change a teaching technique or way of presenting the material, or change a grouping arrangement (for example, individual instruction instead of small-group instruction). After the change, you and the teacher can see from the weekly scores on the graph whether the change is helping your child. If it is not, the teacher can try another change in instruction, and its success will be tracked through the weekly measurements.
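
One simple way to operationalize "lower than is needed", and it is only an assumption here, not a rule stated in the article, is to fit a trend line to the weekly scores and compare its slope to the slope required to reach the year-end goal:

    # Hypothetical scores; decide whether the current rate of progress
    # is enough to reach the year-end goal.
    import statistics

    weeks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    scores = [42, 45, 44, 48, 51, 50, 55, 57, 56, 60]

    # Slope of the child's trend line (least squares; Python 3.10+).
    trend = statistics.linear_regression(weeks, scores).slope

    # Slope needed: remaining growth divided by remaining weeks.
    goal, total_weeks = 100, 36
    needed = (goal - scores[-1]) / (total_weeks - weeks[-1])

    if trend >= needed:
        print(f"On track: gaining {trend:.2f}/week; {needed:.2f}/week needed.")
    else:
        print(f"Change instruction: gaining {trend:.2f}/week; "
              f"{needed:.2f}/week needed to reach the goal.")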

Other Ways CBM Can Help You

CBM can also help you work more effectively with the school system on your child's behalf. CBM graphs make the goals, and your child's progress, clear to you and to the teacher. In this way, CBM can help parents and teachers communicate more constructively.

You can use the CBM graph in conferences with teachers and administrators, as it gives you specific information about your child's progress and the success of the instructional methods being used. You can also use the CBM graph in IEP (Individualized Education Program) meetings, to go over specific information about your child's current performance so that you and the school can develop measurable goals and objectives that will lead to more meaningful progress for your child.

What Does Educational Testing Really Tell Us? An Interview with Daniel Koretz

By Eduwonkette, September 23, 2008

Daniel Koretz, a professor who teaches educational measurement at the Harvard Graduate School of Education, generously agreed to field a few questions about educational testing. He is the author of Measuring Up: What Educational Testing Really Tells Us.

EW: What are the three most common misconceptions about educational testing that Measuring Up hopes to debunk?

DK: There are so many that it is hard to choose, but given the importance of NCLB and other test-based accountability systems, I'd choose these:

* That test scores alone are sufficient to evaluate a teacher, a school, or an educational program.
* That you can trust the often very large gains in scores we are seeing on tests used to hold students accountable.
* That alignment is a cure-all: that more alignment is always better, and that alignment is enough to take care of problems like inflated scores.

EW: I'm intrigued by your third point about alignment. For example, we often hear that because state testing systems are directed towards a particular set of standards, we should primarily be concerned with student outcomes on tests aligned with those standards. This is the common refrain about a "test worth teaching to." What's missing from this argument?

DK: Up to a point, alignment is clearly a good thing: we want clarity about goals, and we want both instruction and assessment to focus on the goals deemed most important.

However, there are two flies in the ointment. The first is that the achievement tests we are concerned with, no matter how well aligned, are small samples from large domains of performance. That means that most of the domain, including much of the content and skills relevant to the standards, is necessarily omitted from the test. As I explain in Measuring Up, this is analogous to a political poll or any other survey, and it is not a big problem under low-stakes conditions. Under high-stakes conditions, however, there is a strong incentive to focus on the sampled content at the expense of the omitted material, which causes score inflation. Aligned tests are not exempt.

Score inflation does not require that the test include poorly aligned content. Even if the test is right on target, inflation will occur if the accountability program leads people to deemphasize other material that is also important for the conclusions based on scores. And to make this concrete: some of the most serious examples of score inflation in the research literature were found in Kentucky's KIRIS system, which was a standards-based testing program.

The second problem is predictability. To prepare students in a way that inflates scores, you have to know something about the test that is coming this year, not just the ones you have seen in the past. The content, format, style, or scoring of the test has to be somewhat predictable. And, of course, it usually is, as anyone who has looked at tests and test preparation materials should know. Carried too far, alignment actually makes this problem worse, by focusing attention on the particular way that knowledge and skills are presented in a given set of standards. Think about 'power standards,' 'eligible standards,' and 'grade level expectations,' all of which can be labels for narrowing in on the specifics of how a set of skills appears on one state's particular assessment.

Why is this bad? Because many of those specifics are not relevant to the students' broader competence and long-term well-being. Scores on a test are a means to an end, not properly an end in themselves. Education should provide students knowledge and skills that they can use in later study and in the real world. Employers and university faculty will not do students the favor of recasting problems to align with the details of the state tests with which they are familiar. As Audrey Qualls said some years ago: real gains in achievement require that students can perform well when confronted with "unfamiliar particulars." Improving performance on the familiar but not the unfamiliar is score inflation.

EW: What are the implications of score inflation for both measuring and attenuating achievement gaps? Because schools serving disadvantaged students face more pressure to increase test scores via the mechanisms you describe, I worry that true achievement gaps may be unchanged, or even growing, while they appear to be closing based on high-stakes measures.

DK: I share your worry. I have long suspected that on average, inflation will be more severe in low-achieving schools, including those serving disadvantaged students. In most systems, including NCLB, these schools have to make the most rapid gains, but they also face unusually serious barriers to doing so. And in some cases, the size of the gains they are required to make exceeds by quite a margin what we know how to produce by legitimate means. This will increase the incentive to take shortcuts, including those that will inflate scores. This would be ironic, given that one of the primary rationales for NCLB is to improve equity. Unfortunately, while we have a lot of anecdotal evidence suggesting that this is the case, we have very few serious empirical studies of this. We do have some, such as the RAND study that showed convincingly that the "Texas miracle" in the early 1990s, supposedly including a rapid narrowing of the achievement gap, was largely an illusion. Two of my students are currently working with me on a study of this in one large district, but we are months away from releasing a reviewed paper, and it is only one district.

I have argued for years that one of the most glaring faults of our current educational accountability systems is that we do not sufficiently evaluate their effects, instead trusting, despite evidence to the contrary, that any increase in scores is enough to let us declare success. We should be doing more evaluation not only because it is needed for the improvement of policy, but also because we have an ethical obligation to the children upon whom we are experimenting. Nowhere is this failure more important than in the case of disadvantaged students, who most need the help of education reform.

Inflation is not the only reason why we are not getting a clear picture of changes in the achievement gap. The other is our insistence on standards-based reporting. As I explain in Measuring Up, relying so much on this form of reporting has been a serious mistake for a number of reasons. One reason is that if one wants to compare change in two groups that start out at different levels (poor and wealthy kids, African American and white kids, whatever), changes in the percents above a standard will always give you the wrong answer. This particular statistic confuses the amount of progress a group makes with the proportion of the group clustered around that particular standard, and the latter has to be different for high- and low-scoring groups. I and others have shown that this distortion is a mathematical certainty, but perhaps most telling is a paper by Bob Linn that shows that if you ask whether the achievement gap has been closing, NAEP will give you different answers, very different answers, depending on whether you use changes in scale scores, changes in percent above Basic, or changes in percent above Proficient. This is not because the relative progress has been different at different levels of performance; it is simply an artifact of using percents above standards. This is only one of many problems with standards-based reporting, but in my opinion, it is by itself sufficient reason to return to other forms of reporting.
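
The distortion is easy to reproduce with a back-of-the-envelope calculation. In the Python sketch below, every number is hypothetical: two normally distributed groups gain exactly the same ten scale-score points, yet the group clustered near the cut score posts a much larger jump in "percent above Proficient" than the group starting far below it.

    # Two groups with identical gains on the score scale, very different
    # changes in "percent above the standard". All numbers hypothetical.
    from statistics import NormalDist

    CUTOFF = 250   # hypothetical "Proficient" cut score
    GAIN = 10      # both groups gain 10 scale-score points
    SD = 40        # common standard deviation

    def pct_above(mean):
        """Percent of a normal distribution scoring above the cutoff."""
        return 100 * (1 - NormalDist(mean, SD).cdf(CUTOFF))

    for name, mean in [("Group near the cutoff", 245),
                       ("Group far below it   ", 190)]:
        before, after = pct_above(mean), pct_above(mean + GAIN)
        print(f"{name}: {before:5.1f}% -> {after:5.1f}% above Proficient "
              f"(+{after - before:.1f} points for the same {GAIN}-point gain)")

With these numbers, the identical ten-point mean gain moves about ten percentage points of the first group past the cutoff but only about four points of the second, which is exactly the confound Koretz describes.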