

Running head: INSTRUCTIONAL LEVEL FOR READING INTERVENTIONS

Using the Instructional Level as a Criterion to Target

Reading Interventions

Matthew K. Burns David C. Parker

Minnesota Center for Reading Research

University of Minnesota


Abstract

The instructional hierarchy offers a useful framework to target academic interventions. Within

this framework, the accuracy with which a student reads might function as an indicator that the

student should receive an intervention that focuses either on accuracy or fluency. The current

study examined whether the instructional level for reading (93% to 97% of words read correctly)

could be used to target interventions that first facilitated accurate responding, and subsequently

facilitated faster rates of fluency growth. Each of three third-grade students had faster growth

rates in the second phase of a fluency-focused reading intervention following an intervention that

resulted in the students reading at least 93% of the words correctly. Implications and limitations

of the current study for applying the instructional hierarchy and the instructional level to target

reading interventions are discussed.


Using the Instructional Level as a Criterion to Target Reading Interventions

Recent research has led to a better understanding of effective interventions (Burns &

Ysseldyke, 2009) and assessment technology that can directly test potential interventions (Daly,

Martens, Dool, & Hintze, 1998; Jones & Wickstrom, 2002). However, even a well-researched

intervention will not be effective if it does not address the core problem. The instructional

hierarchy has become a commonly-used framework for intervention design (Ardoin & Daly,

2007) in which interventions are identified by matching student skill with one of four phases of

student learning (Haring & Eaton, 1978). Although the hierarchy was proposed over 30 years

ago, it was not used for intervention research until Daly, Lentz, and Boyer (1996) discussed it as

a conceptual model for better understanding reading interventions. Moreover, the instructional

hierarchy could provide a potential tool for correctly targeting reading interventions (Burns,

VanDerHeyden, & Boice, 2008).

According to Haring and Eaton (1978), acquisition of new skills is the first phase of the

instructional hierarchy and is characterized by slow and inaccurate performance. A student in the

acquisition phase would require high modeling and immediate feedback. After a student

becomes accurate, she is operating in the proficiency phase and would likely respond to

repetition and over-learning procedures to perform the skill more quickly. Once the student

performs the skill with accuracy and sufficient speed, she then can generalize the newly learned

information to different settings (phase three – generalization) and can apply it to different

situations to solve problems (phase four – adaptation). The accuracy with which the task is

completed is a relevant variable in the instructional hierarchy because two students with similar

speed of responding may require different interventions based on their respective levels of

accuracy. A student with low accuracy would need an intervention that targets accurate


responding, but a student with high accuracy would need an intervention that targets speed of

correct responding.

Although Haring and Eaton (1978) make clear that the focus of acquisition interventions

should be increasing accuracy, they do not discuss what level of accuracy determines when a

student has progressed to the proficiency phase. The instructional level could serve as a potential

criterion to switch reading intervention focus from acquisition to proficiency. Gickling and

Thompson (1985) operationally defined Betts’ (1946) concept of an instructional level for

reading by proposing that students were optimally challenged when they read 93% to 97% of the

words correctly. Reading interventionists use the instructional level to assess the accuracy with

which a skill is completed and compare that level (as determined by the percentage of items

correctly completed) to an instructional level criterion of 93% to 97% known words. Students

who read fewer than 93% of the words correctly from contextual reading are operating at a

frustration level and those who read more than 97% correct are at an independent level. When

students read at the instructional level, they experience several positive outcomes, such as

enhanced reading fluency (Burns, 2007), comprehension (Gickling & Armstrong, 1978), and

time on task (Burns & Dean, 2005; Gickling & Armstrong, 1978; Treptow et al., 2007).
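As an illustration only (the function name and the example word counts are hypothetical and not drawn from the cited studies), the following sketch shows how a percentage of words read correctly could be classified against the 93% to 97% criterion described above:

```python
def classify_reading_level(percent_correct):
    """Classify contextual reading accuracy against the 93%-97% instructional
    level criterion described above (hypothetical helper for illustration)."""
    if percent_correct < 93.0:
        return "frustration"    # too difficult; accuracy-focused intervention indicated
    elif percent_correct <= 97.0:
        return "instructional"  # optimally challenging for instruction
    return "independent"        # too easy; minimal instructional support needed

# Example: 108 of 115 words read correctly is roughly 93.9%, an instructional level.
print(classify_reading_level(100 * 108 / 115))
```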

Previous instructional level intervention research focused on reading fluency, which is

often used as a target in reading intervention studies. The terms fluency and proficiency require

clear differentiation when applying the instructional hierarchy to reading as an intervention

heuristic. The National Reading Panel (2000) has identified five broad subskills possessed by

skilled readers. Of these, reading fluency, or the ability to read text quickly and accurately with

good expression, is a key subskill that often requires intervention as reading first develops. In

this sense, reading fluency refers specifically to the skill of accurately and rapidly decoding


written text. Proficiency refers to the general ability to perform a skill accurately and rapidly and

to the specific stage of the instructional hierarchy.

Reading fluency appears to be an appropriate target for reading interventions because it

can be measured reliably and sensitively with curriculum-based measurement of oral reading

fluency (CBM-R; Deno, 1985) and is a strong indicator of total reading competence (Fuchs,

Fuchs, Hosp, & Jenkins, 2001). There is a moderate to large correlation between CBM-R data

and comprehension (Berninger, Abbot, Vermeulen, & Fulton, 2006; Burns et al., 2002; Jenkins,

Fuchs, van den Broek, Espin, & Deno, 2003; Samuels, 1979), with unique variance being

attributed to CBM-R (Spear-Swerling, 2006). Moreover, previous research has effectively used

reading fluency interventions to increase reading comprehension (Alber-Morgan, Ramp,

Anderson, & Martin, 2007; Burns, Dean, & Foley, 2004; Freeland, Skinner, Jackson, McDaniel,

& Smith, 2000).

There are a large number of research-based interventions that directly address oral

reading fluency deficits (National Reading Panel, 2000). Two commonly-used approaches to

reading fluency interventions are repeated reading (RR; Moyer, 1982; Rashotte & Torgesen,

1985; Samuels, 1979) and the supported cloze procedure (SCP; Rasinski, 2003). According to

Samuels (1979), repeated reading “consists of re-reading a short and meaningful passage until a

satisfactory level of fluency is reached” (p. 404). Thus, students receiving a repeated reading

intervention engage in high amounts of practice with a targeted goal of increasing the rate and

accuracy with which they read. In a review of several studies on the effectiveness of repeated

reading interventions, Therrien (2004) found it to be effective for students both with learning

disabilities (effect sizes ranging from .41 to .79) and without learning disabilities (ranging from .18 to .85),

with a mean effect size of .50 for generalization passages.


The supported cloze procedure (SCP; Rasinski, 2003) is an assisted intervention in which

an instructor reads a passage jointly with a student. There are several variants of the SCP

intervention, but a typical approach involves the student and interventionist reading every other

word in a given passage, and then starting over while switching the words read such that each

word is both modeled for and read by the student. Thus, SCP interventions specifically target

reading accuracy by modeling correct reading of words in the passage. Interventions employing

SCP are also effective for improving skills such as fluency and comprehension (Homan, Klesius,

& Hite, 1993; Kuhn & Stahl, 2003).

Although SCP and repeated reading interventions are effective in terms of facilitating

reading fluency and comprehension (Homan et al., 1993; Kuhn & Stahl, 2003; Therrien, 2004),

the two types of interventions map onto the instructional hierarchy at different levels. Accuracy-

focused interventions (i.e., SCP) target the acquisition stage of the instructional hierarchy,

whereas interventions that focus on the speed of accurate responding (i.e., repeated reading)

target the proficiency stage. According to the instructional hierarchy, speed-focused

interventions might not be as effective when a student is still in the accuracy stage of a learning

task, because they have not yet acquired the skills to accurately read words. Thus, it might be

possible to target reading interventions more appropriately by considering the accuracy with

which students read, which is consistent with a need to target empirically-tested interventions in

a way that functionally matches the skill problem (Daly et al., 1996). Previous research for math

found that accuracy-focused interventions were more effective for students whose skills fell

within the acquisition phase than those whose skills fell within the proficiency phase (Burns,

Codding, Boice, & Lukito, 2010), but this assumption has not been tested for reading.


The purpose of the current study was to extend instructional hierarchy research, and to

examine the relationship between an evidence-based intervention and a specific reading problem.

Thus, we examined whether reading fluency interventions would be more effective if they were

implemented after successfully building reading accuracy to an instructional level. The following

research question guided the study: What is the effect of a speed-based intervention on reading

fluency after reaching an instructional level via a high modeling intervention?

Method

Participants and Setting

Three third-grade students with reading difficulties were the participants for the study.

The first student was a White female named Julie (pseudonym), the second was a White male

named Ryan (pseudonym) and the third was a White male named Eric (pseudonym). All three

students attended the same elementary school in Minnesota and were participants in the school’s

AmeriCorps reading program. Students who are eligible for the AmeriCorps reading program

must (a) score below the 25th percentile on the Measures of Academic Progress (Northwest

Evaluation Association, 2003), and (b) score below grade level expectations on the fall

benchmark assessment for oral reading fluency (i.e., a median score of 76 words read correctly

per minute or less on CBM-R benchmark assessment). The three current participants were

selected because they were also reading below the instructional level of 93% accuracy and had

been chosen to receive repeated reading as an intervention.

The school that the students attended served 850 students in grades 2 through 5 and was

located in a rural community. Of the students who attended the school, 91% were Caucasian and

28% were eligible for the federal free or reduced-price lunch program.


All interventions and assessments were conducted by a trained literacy tutor as part of the

Minnesota Reading Corps AmeriCorps program. The tutor was a Caucasian female who

participated in 2 days of training regarding the interventions and CBM-R before beginning the

interventions. The students participated in the intervention once every school day for 20 minutes

each session. The intervention sessions were delivered one-on-one in individual cubicles with the

student and tutor seated across from each other at a small table.

Materials

The intervention materials consisted of third-grade passages from the Dynamic Indicators

of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002) assessment system. Each

participant received intervention using the first passage in the DIBELS sequence on the first

intervention day. On subsequent days, participants received intervention with the next passages

in the sequence. If students received intervention long enough for each passage to be used, one

passage was randomly selected for each subsequent intervention session to be used a second

time. These passages were not returned to the pool to be selected again. Thus, each passage

may have been used up to two times.

Measures

The effectiveness of the interventions was measured once each week with CBM-R. Each

week, a new grade-level passage was selected randomly from the Aimsweb (Edformation, 2002)

assessment system and was presented to the student. Aimsweb passages were untreated passages

that were used a single time. Following standard administration procedures, the student was

asked to orally read each passage for 1 minute while the tutor followed along and recorded

errors. The number of words read correctly within that minute (WRCM) was recorded as the data

to address the research questions. Additionally, the number of words read correctly was divided


by the total number of words (read correctly plus errors) and multiplied by 100 to create an

accuracy score of percentage of words read correctly.
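For illustration, the scoring just described can be expressed as a short computation; the function name and the sample numbers below are hypothetical and are not taken from the study data.

```python
def score_cbmr_probe(words_read_correctly, errors):
    """Score a 1-minute CBM-R probe as described above: words read correctly
    per minute (WRCM) and percentage accuracy (hypothetical helper)."""
    total_words = words_read_correctly + errors         # words attempted in the minute
    wrcm = words_read_correctly                          # the probe lasts exactly 1 minute
    accuracy = 100.0 * words_read_correctly / total_words
    return wrcm, accuracy

# Example: 84 words correct with 9 errors -> 84 WRCM and about 90.3% accuracy.
print(score_cbmr_probe(84, 9))
```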

Previous research has consistently demonstrated high reliability for monitoring student

progress with CBM-R (Marston, 1989; Wayman, Wallace, Wiley, Ticha, & Espin, 2007), and

that slopes derived with CBM-R data can be acceptably reliable for instructional decision

making after 6 to 8 weeks if administration procedures are optimal (Christ, 2006). Moreover,

accuracy data derived with CBM-R procedures have been shown to be sufficiently reliable for

instructional decision making (Burns, Tucker, Frame, Foley, & Hauser, 2000).

Conditions

The students participated in two interventions, both of which were conducted for 20 min

at least 3 days per week. All three started with RR for a period of 5 to 18 weeks. Julie and Eric

started receiving the interventions during the first weeks of the school year, but Ryan began later

in the year because he moved into the school district after the school year had started. The RR

procedure involved having the student orally read grade-level passages for 1 minute while the

tutor followed along, marked errors, and provided error correction after the student completed

the reading sample. This process was repeated four additional times (total of five), and each time

the tutor marked how far the student read during the minute so that the student would attempt to

read further during the next read.

The phase change between interventions was staggered at 5-week intervals beginning in

week 9. After the phase change, each student received the SCP, in which the student read a

grade-level passage orally while the tutor followed along and provided immediate correction for

any errors. After reading the passage, the student and tutor read the passage two more times by

reading every other word. The student read the first word during the first read through (tutor-


second word, student-third word, tutor-fourth word, etc.), and the tutor read the first word during

the third reading (student-second word, tutor-third word, student-fourth word, etc.).

Procedure

As stated above, two of the students began the intervention early within the school year

and the third started later because he moved into the district at that time. All three students

received RR as the first intervention. Julie received RR for 9 weeks, and then changed to SCP;

Ryan started the intervention when Julie changed phases, and received RR for 5 weeks. Eric

continued in RR for an additional 5 weeks after both Julie and Ryan changed to SCP, then he

changed phases as well. All three students continued to receive SCP until they accurately read at

least 93% of the words for five consecutive data points. At that time all three changed back to

RR. Julie required 10 weeks to reach that criterion, Ryan 9 weeks, and Eric 5 weeks. The

criterion of 93% was selected based on the lowest end of the instructional level of 93% to 97%

(Gickling & Armstrong, 1978; Gravois & Gickling, 2008). Five consecutive data points were

selected to ensure that enough data were collected within the phase. All three students then

received RR until the school year ended in May.
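The phase-change rule can be summarized as a simple check. The sketch below assumes that weekly accuracy scores are stored in a list; the function name and the sample values are hypothetical.

```python
def met_phase_change_criterion(accuracy_scores, criterion=93.0, run_length=5):
    """Return True once the most recent `run_length` accuracy scores are all at or
    above `criterion`, mirroring the study's rule of five consecutive sessions at
    or above 93% accuracy before switching from SCP back to RR (illustrative only)."""
    recent = accuracy_scores[-run_length:]
    return len(recent) == run_length and all(score >= criterion for score in recent)

# Hypothetical accuracy scores from consecutive SCP sessions:
print(met_phase_change_criterion([91.0, 94.2, 95.5, 93.1, 96.0, 94.8]))  # True
print(met_phase_change_criterion([94.2, 92.5, 95.5, 93.1, 96.0]))        # False (92.5 < 93)
```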

Treatment Integrity and Interobserver Agreement

The tutor was observed twice each month by the first author with an implementation

checklist. Thus, a total of 10% of the intervention sessions were observed. The building principal

also observed the tutor twice each month to provide feedback on implementation integrity, but

those data were not collected as part of this study. The integrity checklists contained between 16

(SCP) and 20 (RR) items that asked whether standardized procedural steps for each intervention

were completed accurately. The total number of items correctly completed was divided by the


total number of items and resulted in mean integrity of 98.21% (range 93.75% to 100%) for SCP

and 99.2% (range 95% to 100%) for RR.

Interobserver agreement (IOA) was also assessed for 10% of the sessions by observing

the assessment procedures and recording words as read correctly or incorrectly. The total number

of agreements (consistently rated as correct or incorrect) was divided by the total number of

words and multiplied by 100, which resulted in 100% IOA.
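Both the integrity and agreement indices reduce to the same percentage computation; the brief sketch below, with hypothetical counts, shows the arithmetic.

```python
def percent_of_total(count, total):
    """Percentage used for both treatment integrity (checklist steps completed
    correctly / total items) and IOA (word-by-word agreements / total words)."""
    return 100.0 * count / total

# Hypothetical counts: 15 of 16 SCP checklist steps completed -> 93.75% integrity;
# 247 of 247 words scored identically by both observers -> 100% agreement.
print(percent_of_total(15, 16))
print(percent_of_total(247, 247))
```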

Experimental Design

The research question was addressed with a nonconcurrent multiple baseline design using

the CBM-R data as the dependent variable. The design was considered nonconcurrent because of

the different start dates for the three participants, but all of the data were collected in a

concurrent manner in that all three students participated simultaneously.

Results

As shown in Figure 1, the trend of the CBM-R data did not indicate an effective

intervention in the first RR phase. The numeric slope of the CBM-R data was computed with

ordinary least squares, which are reported for each participant in Table 1. The slopes for the first

phase of RR were all less than 1.0 WRCM increase per week and one was negative. The trend of

the accuracy data was also flat for two students, but increased somewhat for Ryan. The change to

SCP resulted in a slight increase in level of the accuracy data for the students, but the trend was

mostly stable with an increase for Ryan. The CBM-R data were mostly stable within the SCP

and consistent with the first RR phase, with a slight increase for one student (Ryan).
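The slopes reported in Table 1 were estimated with ordinary least squares on the weekly CBM-R scores. The sketch below shows one way such a slope and its 95% confidence interval could be computed; the sample data, the use of scipy, and the function name are assumptions for illustration and may not match the study's exact computation.

```python
import numpy as np
from scipy import stats

def wrcm_slope_per_week(weeks, wrcm):
    """Ordinary least squares slope (WRCM gained per week) with a 95% confidence
    interval, analogous to the slopes in Table 1 (illustrative sketch only)."""
    weeks = np.asarray(weeks, dtype=float)
    wrcm = np.asarray(wrcm, dtype=float)
    fit = stats.linregress(weeks, wrcm)               # slope, intercept, stderr, ...
    t_crit = stats.t.ppf(0.975, df=len(weeks) - 2)    # two-tailed 95% critical value
    half_width = t_crit * fit.stderr
    return fit.slope, (fit.slope - half_width, fit.slope + half_width)

# Hypothetical weekly probes showing a gain of roughly 4 WRCM per week:
print(wrcm_slope_per_week(range(1, 9), [40, 45, 47, 52, 58, 60, 65, 70]))
```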

After consistently reading at least 93% of the words correctly within the SCP phase (i.e.,

five consecutive accuracy scores above 93%), the students again received RR. The percent of the

words read correctly continued to be high for the students, with mean scores at or above 93%


(92.98 for Ryan). A visual analysis of the CBM-R data for the second RR phase as compared to

the SCP phase revealed a relatively rapid change in level and an increase in trend. The slopes for

the second RR phase for all three students were higher than the first RR phase and two were

above 4.00 words correct per minute increase per week.

Discussion

The current study examined the use of the instructional level as a criterion to target

acquisition and proficiency interventions for struggling readers. Results indicated that reading

fluency improved faster after accuracy levels were facilitated to within the instructional level.

This provides evidence supporting the application of the instructional hierarchy (Haring &

Eaton, 1978) to targeting reading interventions that are either accuracy- or speed-focused. The

first RR condition for each student resulted in low to negative growth in reading fluency, but the

mean accuracy was below the instructional level criterion of 93% for each student, which

suggested that the reading material was too challenging. After the SCP phase, in which high-

modeling brought accuracy levels consistently within the instructional level range, the

second RR phase resulted in much stronger growth slopes. Thus, the hierarchy of skill

development (Haring & Eaton, 1978), in which accuracy precedes proficiency, was supported by the

current findings. Once students could read grade-level passages with sufficient accuracy, they

made strong gains in a high-practice (i.e., RR) intervention; however, high practice without

accuracy led to minimal improvements.

Although the change in level during the final phase was not immediate for two of the

students, it was immediate for Julie, and the data in the final phase no longer overlapped with the

data in the SCP phase by the 2nd data point for Ryan and the 4th for Eric. Some latency could be

expected given that


meta-analytic research found a large mean effect for academic interventions that lasted 11 to 30

intervention sessions (Swanson, 1999).

These data also support the instructional level theory initially proposed by Betts (1946)

and operationally defined by Gickling and Armstrong (1978). Students improved at a faster

rate once they participated in an intervention that increased accuracy to at least 93% known

words. Although previous instructional level research also focused on time on task (Burns &

Dean, 2005; Treptow et al., 2007) and reading comprehension (Gickling & Armstrong, 1978;

Treptow et al., 2007), neither of which were examined in the current study, the current data were

consistent with studies that found increased reading fluency when students were taught at an

instructional level (Burns, 2002; 2007). However, the accuracy data were not the primary

dependent variable and were included in Figure 1 simply to graphically display the criterion for a

phase change. Thus, we make no experimental claims regarding the effectiveness of SCP in

increasing the accuracy with which passages are read.

The current results suggest that educators who seek to develop interventions that lead to

faster reading growth rates might be able to use the instructional level criterion of 93% to

either (a) target interventions at accuracy development if student

reading skills are below 93% accurate; or (b) target interventions at speed development if student

reading skills are above 93% accurate. This is consistent with previous research that found

reading accuracy data can be appropriate for instructional decision making (Burns, 2007; Burns

et al., 2000). However, previous research also found that preteaching activities can facilitate an

instructional level and result in positive outcomes (Beck, Burns, & Lau, 2010; Burns, 2007). The

SCP was not used as a preteaching activity in this study, and future researchers could examine if

multiple interventions for facilitating an instructional level also result in similar rates of learning


when interventions change focus to proficiency. Moreover, instructional level researchers

caution against having students read material that represents a frustration level task (Gravois &

Gickling, 2008). Using the SCP might reduce potential frustration, but additional research is

needed to determine if more long-term reading growth occurs from high modeling and support

within grade-level material that represents a frustration level without assistance, or from repeated

reading with material that represents an instructional level.

The results of the current study suggest several directions for future research. First, the

current accuracy intervention (i.e., SCP) is one of several accuracy-focused (i.e., high modeling)

interventions. An entire class of assisted reading strategies focuses on modeling fluent reading

rather than giving repeated practice (Dowhower, 1989). For instance, the neurological impress

method (Heckelman, 1969) has led to successful high-modeling interventions in which the

student follows along as a tutor reads an entire passage (Hollingsworth, 1970; Mefferd &

Pettegrew, 1997). Future research might examine and compare results found for other types of

accuracy interventions that could facilitate an instructional level.

A second direction for future research might investigate the criteria by which accuracy is

deemed sufficiently high that fluency interventions can be implemented. The criterion of five

passages at an instructional level before switching to RR was chosen relatively arbitrarily. It

seems logical that tutors or teachers should have high confidence that the passages students are

reading fall consistently within the instructional level, but it may be that fewer points are

sufficient or that more points are necessary. Similarly, additional criteria for sufficient accuracy

could be investigated and compared. Although the empirically-validated instructional level

(Gickling & Armstrong, 1978; Treptow et al., 2007) appears appropriate because of the

associated positive fluency outcomes, other criteria for accuracy might be found. For instance,


less conservative criteria (e.g., accuracy ≥ 90%) might indicate a student is ready for a speed-

targeted intervention sooner. Future research should investigate the number of passages at the

instructional level as well as the accuracy criteria to ensure that interventionists make effective,

efficient decisions.

Interpretations of the current results should not be made without consideration of the

relevant limitations. Although the slope data in the final condition are consistently much higher

for each participant, suggesting the appropriateness of following an effectively targeted high-

modeling procedure with a high-practice procedure (which is consistent with the instructional

hierarchy; Haring & Eaton, 1978), each participant appeared to make consistently positive gains,

both in terms of level and slope across conditions. In other words, each participant had higher

levels and slopes in each successive condition. Thus, maturation effects (Cook & Campbell,

1979) remain possible; rather than reflecting the effects of the conditions in any given phase, these

data may simply suggest that reading fluency interventions increase reading fluency.

However, the consistent findings across participants are encouraging and should be replicated to

include design features that control for maturation effects on reading development. One

possibility is to apply a group design that compares whether maturation alone (i.e., repeated

reading intervention for the duration of the study) or the SCP intervention alone (i.e., adding the SCP

intervention without a return to repeated reading) would result in similar rates of growth at the

same time points in which the high growth rates were observed in the current study.

Another limitation is that the mean accuracy for two of the participants was not markedly

below the instructional level in the first intervention phase. Julie and Eric both had relatively

high accuracy levels during the initial phase (i.e., mean accuracy of 92.72% and 92.21%,

respectively), thus subsequent intervention effects might have been attributable to generally


higher reading accuracy skills. The accuracy for each student was not high enough to reach the

low end of the empirically tested instructional level for reading of 93% (Gickling & Thompson,

1985; Treptow et al., 2007), but additional participants with clearly low accuracy (as was the case

for Ryan) would provide more convincing replication of the current findings.

Additional limitations are related to the first phase of the study, which did not include

baseline conditions and did not achieve consistent control in the length of the first intervention

phase. A true baseline condition for a similar length of time as the intervention duration would

permit comparisons between growth without intervention and growth in the context of each

intervention phase. Moreover, Ryan started intervention when Julie changed to SCP, which was

9 weeks after Julie had begun. Eric started at week 3, after Julie had two CBM-R data points.

Traditional multiple baseline designs have the same start points for each participant, making it

easier to discern trends in growth or level during the first phase. While Eric appeared to have a

sufficient number of data points to identify trends in growth or level, Ryan is missing a

considerable number of data points. Thus, less confidence can be attributed to Ryan’s data in the

first intervention phase, although they appear mostly stable.

Although the criterion for switching to RR at 93% accuracy over five consecutive

sessions was purposefully consistent with the low end of the instructional level criteria (Gickling

& Armstrong, 1978; Treptow et al., 2007), the current study was not designed to identify

differential effects of students reading at the independent level (i.e., ≥ 97% accuracy). This was

only consistently observed for one student (Ryan), who had three of five data points in the

independent range, but the effects of achieving an independent level of accuracy in the high-

modeling (i.e., SCP) phase remain unknown. It is plausible that accuracy performance at the

independent level might affect outcomes of interventions targeted to fluency.


The implications of the study are also limited by the small and relatively homogeneous

sample. Each student was a third grader from a single elementary school who struggled at

reading connected text. Thus, extensions of the study might target a more diverse range of

students. Moreover, the general idea of targeting interventions at building accuracy before

speed, as per the instructional hierarchy (Haring & Eaton, 1978), was only examined on a narrow

range of reading skills. The current participants all struggled with reading connected text;

however, the development of other reading skills might also be facilitated by following the

instructional hierarchy. For example, the development of phonetic skills might follow a similar

course, in which case interventions for accuracy might facilitate later effectiveness of fluency

interventions. Moreover, we did not collect true baseline data in that the students received an

intervention within each phase.

Despite these limitations, this study adds support for the instructional hierarchy as an

intervention heuristic because students made much stronger gains in response to a fluency-

focused intervention after obtaining a high level of accuracy. Future research is recommended to

both strengthen the current conclusions and expand the student population and reading skills for

which the current methods may be applicable.


References

Alber-Morgan, S. R., Ramp, E. M., Anderson, L. L., & Martin, C. M. (2007). Effects of repeated

readings, error correction, and performance feedback on the fluency and comprehension

of middle school students with problem behavior. The Journal of Special Education, 41,

17-30.

Ardoin, S. P., & Daly, E. J. (2007). Introduction to the special series: Close encounters of the

instructional kind—how the instructional hierarchy is shaping instructional research 30

years later. Journal of Behavioral Education, 16, 1-6.

Beck, M., Burns, M. K., & Lau, M. (2009). Preteaching unknown items as a behavioral

intervention for children with behavioral disorders. Behavioral Disorders, 34, 91-99.

Berninger, V. W., Abbott, R. D., Vermeulen, K., & Fulton, C. M. (2006). Paths to reading

comprehension in at-risk second-grade readers. Journal of Learning Disabilities, 39, 334-

351.

Betts, E.A. (1946). Foundations of reading instruction, with emphasis on differentiated guidance.

New York: American Book Company.

Burns, M. K. (2004). Using curriculum-based assessment in the consultative process: A useful

innovation or an educational fad. Journal of Educational and Psychological Consultation,

15, 63-78.

Burns, M. K. (2007). Reading at the instructional level with children identified as learning

disabled: Potential implications for response-to-intervention. School Psychology

Quarterly, 22, 297-313.


Burns, M. K., Codding, R. S., Boice, C. H., & Lukito, G. (2010). Meta-analysis of acquisition

and fluency math interventions with instructional and frustration level skills: Evidence

for a skill by treatment interaction. School Psychology Review, 39, 69-83.

Burns, M. K. & Dean, V. J. (2005). Effect of drill ratios on recall and on-task behavior for

children with learning and attention difficulties. Journal of Instructional Psychology, 32,

118-126.

Burns, M. K., Dean, V. J., & Foley, S. (2004). Preteaching unknown key words with

incremental rehearsal to improve reading fluency and comprehension with children

identified as reading disabled. Journal of School Psychology, 42, 303-314.

Burns, M. K., Tucker, J. A., Frame, J., Foley, S., & Hauser, A. (2000). Interscorer,

alternate-form, internal consistency, and test-retest reliability of Gickling’s model of

curriculum-based assessment for reading. Journal of Psychoeducational Assessment, 18,

353-360.

Burns, M. K., Tucker, J. A., Hauser, A., Thelen, R., Holmes, K., & White, K. (2002). Minimum

reading fluency rate necessary for comprehension: A potential criterion for curriculum-

based assessments. Assessment for Effective Intervention, 28, 1-7.

Burns, M. K., VanDerHeyden, A. M., & Boice, C. H. (2008). Best practices in delivering intensive

academic interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school

psychology (5th ed., pp. 1151-1162). Bethesda, MD: National Association of School

Psychologists.

Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evidence-based instructional

practices in special education. Journal of Special Education, 43, 3-11.


Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral

reading fluency: Estimating standard error of the slope to construct confidence intervals.

School Psychology Review, 35, 128-133.

Cook, T. D., & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for

field settings. Chicago: Rand-McNally.

Daly, E. J., III, Lentz, F. E., Jr., & Boyer, J. (1996). The instructional hierarchy: A conceptual

model for understanding the effective components of reading interventions. School

Psychology Quarterly, 11, 369-386.

Daly, E. J., III, Martens, B. K., Dool, E. J., & Hintze, J. M. (1998). Using brief functional

analysis to select interventions for oral reading. Journal of Behavioral Education, 8, 203-

218.

Daly, E. J., III, Martens, B. K., Kilmer, A., & Massie, D. R. (1996). The effects of instructional

match and content overlap on generalized reading performance. Journal of Applied

Behavior Analysis, 29, 507-518.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional

Children, 52, 219-232.

Dochy, F., Segers, M., & Buehl, M. M. (1999). The relations between assessment practices and

outcomes of studies: The case of research on prior knowledge. Review of Educational

Research, 69, 145-186.

Dowhower, S. L. (1989). Repeated reading: Theory into practice. The Reading Teacher, 42, 502–

507.

Edformation (2002). AIMSweb progress monitoring and improvement system. Available from

http://www.aimsweb.com/


Freeland, J. T., Skinner, C. H., Jackson, B., McDaniel, C. E., & Smith, S. (2000). Measuring and

increasing silent reading comprehension rates: Empirically validating a repeated readings

intervention. Psychology in the Schools, 37, 415-429.

Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an

indicator of reading competence: A theoretical, empirical, and historical analysis.

Scientific Studies of Reading, 5, 239 – 256.

Gickling, E. E., & Armstrong, D. L. (1978). Levels of instructional difficulty as related to on-

task behavior, task completion, and comprehension. Journal of Learning Disabilities, 11,

559-566.

Gickling, E., & Thompson, V. (1985). A personal view of curriculum-based assessment.

Exceptional Children, 52, 205-218.

Good, R. H., & Kaminski, R. A. (Eds.). (2002). Dynamic indicators of basic early literacy skills

(6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.

Gravois, T. A. & Gickling, E. (2008). Best practices in instructional assessment. In A. Thomas

& J. Grimes (Eds.), Best practices in school psychology, Vol. IV (pp. 503-518). Bethesda,

MD: National Association of School Psychologists.

Haring, N. G., & Eaton, M. D. (1978). Systematic instructional technology: An instructional

hierarchy. In N. G. Haring, T. C. Lovitt, M. D. Eaton, & C. L. Hansen (Eds.), The fourth

R: Research in the classroom. Columbus, OH: Merrill.

Heckelman, R. G. (1969). A neurological-impress method of remedial reading instruction.

Academic Therapy Quarterly, 4, 277–282.


Hollingsworth, P. M. (1970). An experiment with the impress method of teaching reading. The

Reading Teacher, 24, 112–114.

Homan, S. P., Klesius, J. P., & Hite, C. (1993). Effects of repeated readings and nonrepetitive

strategies on students’ fluency and comprehension. Journal of Educational Research, 87,

94–99.

Jenkins, J. R., Fuchs, L. S., van den Broek, P., Espin, C., & Deno, S. L. (2003). Accuracy and

fluency in list and context reading of skilled and RD groups: Absolute and relative

performance levels. Learning Disabilities Research & Practice, 18, 237–245.

Jones, K. M., & Wickstrom, K. F. (2002). Done in sixty seconds: Further analysis of the brief

assessment model for academic problems. School Psychology Review, 31, 554-568.

Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices.

Journal of Educational Psychology, 95, 3-21.

Marston, D. B. (1989). A curriculum-based measurement approach to assessing

academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-

based measurement: Assessing special children (pp. 18-78). New York: Guilford Press.

Mefferd, P. E., & Pettegrew, B. S. (1997). Fostering literacy acquisition of students with

developmental disabilities: Assisted reading with predictable trade books. Reading

Research and Instruction, 36, 177–190.

Moyer, S. B. (1982). Repeated reading. Journal of Learning Disabilities, 15, 619-623.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the

scientific research literature on reading and its implications for reading instruction.

Reports of the subgroups. Bethesda, MD: National Institute for Literacy.


Northwest Evaluation Association. (2003). Technical manual for the NWEA measures of

academic progress and achievement level tests. Lake Oswego, OR: Author.

Rashotte, C.A., & Torgesen, J.K. (1985). Repeated reading and reading fluency in learning

disabled children. Reading Research Quarterly, 20, 180–188.

Rasinski, T. V. (2003). The fluent reader: Oral reading strategies for building word recognition,

fluency, and comprehension. New York: Scholastic.

Samuels, S. J. (1979). The method of repeated reading. The Reading Teacher, 32, 403-408.

Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.).

New York, NY: Guilford Press.

Spear-Swerling, L. (2006). Children’s reading comprehension and oral reading fluency in easy

text. Reading and Writing, 19, 199-220.

Swanson, H. L. (1999). Reading research for students with LD: A meta-analysis of intervention

outcomes. Journal of Learning Disabilities, 32, 504-532.

Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading.

Remedial and Special Education, 25, 252-261.

Treptow, M. A., Burns, M. K., & McComas, J. J. (2007). Reading at the frustration,

instructional, and independent levels: The effects on student’s reading comprehension

and time on task. School Psychology Review, 36, 159-166.

Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis

on curriculum-based measurement in reading. Journal of Special Education, 41, 85-120.


Table 1

Mean Oral Reading Fluency (ORF) Score, Mean Accuracy Score, and Slope for Each Phase

Participant   Phase                        Mean ORF (SD)     Mean Accuracy (SD)   Slope (95% Confidence Interval)

Julie         First Repeated Reading       72.22 (9.59)      92.72% (2.13%)       -2.15 (-4.62 to 0.32)
Julie         Supported Cloze Procedure    81.10 (9.22)      95.05% (3.25%)        0.27 (-2.20 to 2.75)
Julie         Second Repeated Reading      99.54 (1.73)      98.12% (1.15%)        1.73 (1.10 to 2.37)

Ryan          First Repeated Reading       29.00 (3.94)      78.88% (6.93%)        0.80 (-3.53 to 5.13)
Ryan          Supported Cloze Procedure    42.89 (8.28)      92.98% (3.88%)        0.42 (-2.26 to 3.09)
Ryan          Second Repeated Reading      72.00 (14.48)     95.02% (1.77%)        4.13 (1.19 to 7.08)

Eric          First Repeated Reading       83.66 (11.45)     92.21% (2.71%)        0.68 (-0.40 to 1.76)
Eric          Supported Cloze Procedure    94.60 (6.95)      97.92% (1.72%)        0.70 (-7.27 to 8.67)
Eric          Second Repeated Reading      104.71 (13.59)    98.84% (1.42%)        5.79 (2.95 to 8.62)

Note. ORF = Oral reading fluency score from the curriculum-based measurement for reading.


Figure Caption

Figure 1. Accuracy and fluency data across repeated reading and supported cloze procedure

phases for each participant.
